problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64 271-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_30341 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-7411 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Multiprocessing "spawn" not thread-safe on Linux.
<!--
Welcome to the PyInstaller issue tracker! Before creating an issue, please heed the following:
1. This tracker should only be used to report bugs and request features / enhancements to PyInstaller
- For questions and general support, use the discussions forum.
2. Use the search function before creating a new issue. Duplicates will be closed and directed to
the original discussion.
3. When making a bug report, make sure you provide all required information. The easier it is for
maintainers to reproduce, the faster it'll be fixed.
-->
<!-- +++ ONLY TEXT +++ DO NOT POST IMAGES +++ -->
## Description of the issue
When using multiprocessing with the "spawn" method on Linux, processes sometimes fail to start with the message: `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/_MEIOchafX/multiprocessing_bug.py'` This happens if different threads try to launch processes concurrently. It would appear that the "spawn" method is not thread-safe when used with freeze support.
As mentioned below, this bug does not manifest when built with `--onedir`.
[debug.log](https://github.com/pyinstaller/pyinstaller/files/10560051/debug.log) contains import and bootloader logging for a failure case.
Q: Does this happen with "fork"?
A: No
Q: Does this happen when running from source?
A: No, only when packaged as a pyinstaller executable with `--onefile`.
Q: Does this happen on Windows?
A: Unknown
### Context information (for bug reports)
* Output of `pyinstaller --version`: ```5.7.0```
* Version of Python: Python 3.10.6
* Platform: Ubuntu 22.04.1 LTS
* How you installed Python: apt
* Did you also try this on another platform?
* Ubuntu 18.04.6 LTS, pyinstaller 4.7, Python 3.7 - Bug is present
* WSL2 pyinstaller 4.7, Python 3.7 - Bug is present
* try the latest development version, using the following command:
```shell
pip install https://github.com/pyinstaller/pyinstaller/archive/develop.zip
```
* follow *all* the instructions in our "If Things Go Wrong" Guide
(https://github.com/pyinstaller/pyinstaller/wiki/If-Things-Go-Wrong) and
### Make sure [everything is packaged correctly](https://github.com/pyinstaller/pyinstaller/wiki/How-to-Report-Bugs#make-sure-everything-is-packaged-correctly)
* [x] start with clean installation
* [x] use the latest development version
* [x] Run your frozen program **from a command window (shell)** — instead of double-clicking on it
* [x] Package your program in **--onedir mode** - **BUG DOES NOT MANIFEST**
 * [x] Package **without UPX**, say: use the option `--noupx` or set `upx=False` in your .spec-file - **BUG DOES NOT MANIFEST**
 * [x] Repackage your application in **verbose/debug mode**. For this, pass the option `--debug` to `pyi-makespec` or `pyinstaller`, or use `EXE(..., debug=1, ...)` in your .spec file.
### A minimal example program which shows the error
```python
import multiprocessing
import sys
from threading import Thread
DEFAULT_N = 3
def main():
try:
n = int(sys.argv[1])
except IndexError:
n=DEFAULT_N
threads = []
for i in range(n):
threads.append(Thread(target=foo, args=(i, )))
for i in range(n):
threads[i].start()
for i in range(n):
threads[i].join()
def foo(i):
multiprocessing_context = multiprocessing.get_context(method="spawn")
q = multiprocessing_context.Queue()
p = multiprocessing_context.Process(target=bar, args=(q, i), daemon=True)
p.start()
p.join()
def bar(q, i):
q.put('hello')
print(f"{i} Added to queue")
if __name__ == "__main__":
multiprocessing.freeze_support()
main()
```
### Stacktrace / full error message
Note: If you can't reproduce the bug, try increasing the parameter from 2 to 5 (or higher).
```
$> dist/multiprocessing_bug 2
0 Added to queue
Traceback (most recent call last):
File "multiprocessing_bug.py", line 34, in <module>
multiprocessing.freeze_support()
File "PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py", line 49, in _freeze_support
File "multiprocessing/spawn.py", line 116, in spawn_main
File "multiprocessing/spawn.py", line 125, in _main
File "multiprocessing/spawn.py", line 236, in prepare
File "multiprocessing/spawn.py", line 287, in _fixup_main_from_path
File "runpy.py", line 288, in run_path
File "runpy.py", line 252, in _get_code_from_file
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/_MEIOchafX/multiprocessing_bug.py'
[8216] Failed to execute script 'multiprocessing_bug' due to unhandled exception!
```
### Workaround
As shown below, adding a lock around the call to `process.start()` seems to resolve the issue.
```python
import multiprocessing
import sys
from threading import Thread, Lock
DEFAULT_N = 3
def main():
try:
n = int(sys.argv[1])
except IndexError:
n=DEFAULT_N
threads = []
for i in range(n):
threads.append(Thread(target=foo, args=(i, )))
for i in range(n):
threads[i].start()
for i in range(n):
threads[i].join()
lock = Lock()
def foo(i):
multiprocessing_context = multiprocessing.get_context(method="spawn")
q = multiprocessing_context.Queue()
p = multiprocessing_context.Process(target=bar, args=(q, i), daemon=True)
with lock:
p.start()
p.join()
def bar(q, i):
q.put('hello')
print(f"{i} Added to queue")
if __name__ == "__main__":
multiprocessing.freeze_support()
main()
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2017-2023, PyInstaller Development Team.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: Apache-2.0
10 #-----------------------------------------------------------------------------
11
12 import multiprocessing
13 import multiprocessing.spawn as spawn
14 # 'spawn' multiprocessing needs some adjustments on osx
15 import os
16 import sys
17 from subprocess import _args_from_interpreter_flags
18
19 # prevent spawn from trying to read __main__ in from the main script
20 multiprocessing.process.ORIGINAL_DIR = None
21
22
23 def _freeze_support():
24 # We want to catch the two processes that are spawned by the multiprocessing code:
25 # - the semaphore tracker, which cleans up named semaphores in the spawn multiprocessing mode
26 # - the fork server, which keeps track of worker processes in forkserver mode.
27 # both of these processes are started by spawning a new copy of the running executable, passing it the flags from
28 # _args_from_interpreter_flags and then "-c" and an import statement.
29 # Look for those flags and the import statement, then exec() the code ourselves.
30
31 if (
32 len(sys.argv) >= 2 and sys.argv[-2] == '-c' and sys.argv[-1].startswith((
33 'from multiprocessing.semaphore_tracker import main', # Py<3.8
34 'from multiprocessing.resource_tracker import main', # Py>=3.8
35 'from multiprocessing.forkserver import main'
36 )) and set(sys.argv[1:-2]) == set(_args_from_interpreter_flags())
37 ):
38 exec(sys.argv[-1])
39 sys.exit()
40
41 if spawn.is_forking(sys.argv):
42 kwds = {}
43 for arg in sys.argv[2:]:
44 name, value = arg.split('=')
45 if value == 'None':
46 kwds[name] = None
47 else:
48 kwds[name] = int(value)
49 spawn.spawn_main(**kwds)
50 sys.exit()
51
52
53 multiprocessing.freeze_support = spawn.freeze_support = _freeze_support
54
55 # Bootloader unsets _MEIPASS2 for child processes to allow running PyInstaller binaries inside pyinstaller binaries.
56 # This is ok for mac or unix with fork() system call. But on Windows we need to overcome missing fork() function.
57
58 if sys.platform.startswith('win'):
59 import multiprocessing.popen_spawn_win32 as forking
60 else:
61 import multiprocessing.popen_fork as forking
62 import multiprocessing.popen_spawn_posix as spawning
63
64
65 # Mix-in to re-set _MEIPASS2 from sys._MEIPASS.
66 class FrozenSupportMixIn:
67 def __init__(self, *args, **kw):
68 if hasattr(sys, 'frozen'):
69 # We have to set original _MEIPASS2 value from sys._MEIPASS to get --onefile mode working.
70 os.putenv('_MEIPASS2', sys._MEIPASS) # @UndefinedVariable
71 try:
72 super().__init__(*args, **kw)
73 finally:
74 if hasattr(sys, 'frozen'):
75 # On some platforms (e.g. AIX) 'os.unsetenv()' is not available. In those cases we cannot delete the
76 # variable but only set it to the empty string. The bootloader can handle this case.
77 if hasattr(os, 'unsetenv'):
78 os.unsetenv('_MEIPASS2')
79 else:
80 os.putenv('_MEIPASS2', '')
81
82
83 # Patch forking.Popen to re-set _MEIPASS2 from sys._MEIPASS.
84 class _Popen(FrozenSupportMixIn, forking.Popen):
85 pass
86
87
88 forking.Popen = _Popen
89
90 if not sys.platform.startswith('win'):
91 # Patch spawning.Popen to re-set _MEIPASS2 from sys._MEIPASS.
92 class _Spawning_Popen(FrozenSupportMixIn, spawning.Popen):
93 pass
94
95 spawning.Popen = _Spawning_Popen
96
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py b/PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py
--- a/PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py
+++ b/PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py
@@ -9,6 +9,7 @@
# SPDX-License-Identifier: Apache-2.0
#-----------------------------------------------------------------------------
+import threading
import multiprocessing
import multiprocessing.spawn as spawn
# 'spawn' multiprocessing needs some adjustments on osx
@@ -64,14 +65,17 @@
# Mix-in to re-set _MEIPASS2 from sys._MEIPASS.
class FrozenSupportMixIn:
+ _lock = threading.Lock()
+
def __init__(self, *args, **kw):
- if hasattr(sys, 'frozen'):
+ # The whole code block needs be executed under a lock to prevent race conditions between `os.putenv` and
+ # `os.unsetenv` calls when processes are spawned concurrently from multiple threads. See #7410.
+ with self._lock:
# We have to set original _MEIPASS2 value from sys._MEIPASS to get --onefile mode working.
os.putenv('_MEIPASS2', sys._MEIPASS) # @UndefinedVariable
- try:
- super().__init__(*args, **kw)
- finally:
- if hasattr(sys, 'frozen'):
+ try:
+ super().__init__(*args, **kw)
+ finally:
# On some platforms (e.g. AIX) 'os.unsetenv()' is not available. In those cases we cannot delete the
# variable but only set it to the empty string. The bootloader can handle this case.
if hasattr(os, 'unsetenv'):
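The heart of the patch: `os.putenv` and `os.unsetenv` mutate state shared by every thread in the process, so the set-variable, spawn-child, unset-variable sequence has to be serialized when several threads start processes at once. The snippet below is a minimal, self-contained sketch of that locking pattern; the class name and the `_MEIPASS2` value are illustrative, not the verbatim PyInstaller code.

```python
# Minimal sketch of the locking pattern used in the fix above (illustrative only).
import os
import threading


class EnvGuardedSpawner:
    # One class-level lock: the environment is process-wide, so every spawn
    # in the process must take the same lock.
    _lock = threading.Lock()

    def spawn(self, start_child):
        with self._lock:
            os.putenv('_MEIPASS2', '/tmp/_MEIexample')  # hypothetical bundle dir
            try:
                # The child has to be launched while the variable is still set.
                start_child()
            finally:
                os.unsetenv('_MEIPASS2')
```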
| {"golden_diff": "diff --git a/PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py b/PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py\n--- a/PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py\n+++ b/PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py\n@@ -9,6 +9,7 @@\n # SPDX-License-Identifier: Apache-2.0\n #-----------------------------------------------------------------------------\n \n+import threading\n import multiprocessing\n import multiprocessing.spawn as spawn\n # 'spawn' multiprocessing needs some adjustments on osx\n@@ -64,14 +65,17 @@\n \n # Mix-in to re-set _MEIPASS2 from sys._MEIPASS.\n class FrozenSupportMixIn:\n+ _lock = threading.Lock()\n+\n def __init__(self, *args, **kw):\n- if hasattr(sys, 'frozen'):\n+ # The whole code block needs be executed under a lock to prevent race conditions between `os.putenv` and\n+ # `os.unsetenv` calls when processes are spawned concurrently from multiple threads. See #7410.\n+ with self._lock:\n # We have to set original _MEIPASS2 value from sys._MEIPASS to get --onefile mode working.\n os.putenv('_MEIPASS2', sys._MEIPASS) # @UndefinedVariable\n- try:\n- super().__init__(*args, **kw)\n- finally:\n- if hasattr(sys, 'frozen'):\n+ try:\n+ super().__init__(*args, **kw)\n+ finally:\n # On some platforms (e.g. AIX) 'os.unsetenv()' is not available. In those cases we cannot delete the\n # variable but only set it to the empty string. The bootloader can handle this case.\n if hasattr(os, 'unsetenv'):\n", "issue": "Multiprocessing \"spawn\" not thread-safe on Linux.\n<!--\r\nWelcome to the PyInstaller issue tracker! Before creating an issue, please heed the following:\r\n\r\n1. This tracker should only be used to report bugs and request features / enhancements to PyInstaller\r\n - For questions and general support, use the discussions forum.\r\n2. Use the search function before creating a new issue. Duplicates will be closed and directed to\r\n the original discussion.\r\n3. When making a bug report, make sure you provide all required information. The easier it is for\r\n maintainers to reproduce, the faster it'll be fixed.\r\n-->\r\n\r\n<!-- +++ ONLY TEXT +++ DO NOT POST IMAGES +++ -->\r\n\r\n## Description of the issue\r\n\r\nWhen using multiprocessing with the \"spawn\" method on Linux, processes sometimes fail to start with the message: `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/_MEIOchafX/multiprocessing_bug.py'` This happens if different threads try to launch processes concurrently. It would appear that the \"spawn\" method is not thread-safe when used with freeze support.\r\n\r\nAs mentioned below, this bug does not manifest when built with `--onedir`.\r\n\r\n[debug.log](https://github.com/pyinstaller/pyinstaller/files/10560051/debug.log) contains import and bootloader logging for a failure case. \r\n\r\n\r\nQ: Does this happen with \"fork\"? 
\r\nA: No\r\n\r\nQ: Does this happen when running from source?\r\nA: No, only when packaged as a pyinstaller executable with `--onefile`.\r\n\r\nQ: Does this happen on Windows?\r\nA: Unknown\r\n\r\n### Context information (for bug reports)\r\n\r\n* Output of `pyinstaller --version`: ```5.7.0```\r\n* Version of Python: Python 3.10.6\r\n* Platform: Ubuntu 22.04.1 LTS\r\n* How you installed Python: apt\r\n* Did you also try this on another platform?\r\n * Ubuntu 18.04.6 LTS, pyinstaller 4.7, Python 3.7 - Bug is present\r\n * WSL2 pyinstaller 4.7, Python 3.7 - Bug is present\r\n\r\n\r\n* try the latest development version, using the following command:\r\n\r\n```shell\r\npip install https://github.com/pyinstaller/pyinstaller/archive/develop.zip\r\n```\r\n\r\n* follow *all* the instructions in our \"If Things Go Wrong\" Guide\r\n (https://github.com/pyinstaller/pyinstaller/wiki/If-Things-Go-Wrong) and\r\n\r\n### Make sure [everything is packaged correctly](https://github.com/pyinstaller/pyinstaller/wiki/How-to-Report-Bugs#make-sure-everything-is-packaged-correctly)\r\n\r\n * [x] start with clean installation\r\n * [x] use the latest development version\r\n * [x] Run your frozen program **from a command window (shell)** \u2014 instead of double-clicking on it\r\n * [x] Package your program in **--onedir mode** - **BUG DOES NOT MANIFEST**\r\n * [x] Package **without UPX**, say: use the option `--noupx` or set `upx=False` in your .spec-file - - **BUG DOES NOT MANIFEST**\r\n * [x] Repackage you application in **verbose/debug mode**. For this, pass the option `--debug` to `pyi-makespec` or `pyinstaller` or use `EXE(..., debug=1, ...)` in your .spec file.\r\n\r\n\r\n### A minimal example program which shows the error\r\n\r\n```python\r\nimport multiprocessing\r\nimport sys\r\nfrom threading import Thread\r\n\r\nDEFAULT_N = 3\r\n\r\ndef main():\r\n try:\r\n n = int(sys.argv[1])\r\n except IndexError:\r\n n=DEFAULT_N\r\n\r\n threads = []\r\n for i in range(n):\r\n threads.append(Thread(target=foo, args=(i, )))\r\n for i in range(n):\r\n threads[i].start()\r\n for i in range(n):\r\n threads[i].join()\r\n\r\ndef foo(i):\r\n multiprocessing_context = multiprocessing.get_context(method=\"spawn\")\r\n q = multiprocessing_context.Queue()\r\n p = multiprocessing_context.Process(target=bar, args=(q, i), daemon=True)\r\n p.start()\r\n p.join()\r\n\r\n\r\ndef bar(q, i):\r\n q.put('hello')\r\n print(f\"{i} Added to queue\")\r\n\r\nif __name__ == \"__main__\":\r\n multiprocessing.freeze_support()\r\n main()\r\n\r\n```\r\n\r\n### Stacktrace / full error message\r\n\r\nNote: If you can't reproduce the bug, try increasing the parameter from 2 to 5 (or higher).\r\n\r\n```\r\n$> dist/multiprocessing_bug 2\r\n0 Added to queue\r\nTraceback (most recent call last):\r\n File \"multiprocessing_bug.py\", line 34, in <module>\r\n multiprocessing.freeze_support()\r\n File \"PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py\", line 49, in _freeze_support\r\n File \"multiprocessing/spawn.py\", line 116, in spawn_main\r\n File \"multiprocessing/spawn.py\", line 125, in _main\r\n File \"multiprocessing/spawn.py\", line 236, in prepare\r\n File \"multiprocessing/spawn.py\", line 287, in _fixup_main_from_path\r\n File \"runpy.py\", line 288, in run_path\r\n File \"runpy.py\", line 252, in _get_code_from_file\r\nFileNotFoundError: [Errno 2] No such file or directory: '/tmp/_MEIOchafX/multiprocessing_bug.py'\r\n[8216] Failed to execute script 'multiprocessing_bug' due to unhandled exception!\r\n\r\n```\r\n\r\n### 
Workaround\r\n\r\nAs shown below, adding a lock around the call to `process.start()` seems to resolve the issue.\r\n\r\n```python\r\nimport multiprocessing\r\nimport sys\r\nfrom threading import Thread, Lock\r\n\r\nDEFAULT_N = 3\r\n\r\ndef main():\r\n try:\r\n n = int(sys.argv[1])\r\n except IndexError:\r\n n=DEFAULT_N\r\n\r\n threads = []\r\n for i in range(n):\r\n threads.append(Thread(target=foo, args=(i, )))\r\n for i in range(n):\r\n threads[i].start()\r\n for i in range(n):\r\n threads[i].join()\r\n\r\nlock = Lock()\r\ndef foo(i):\r\n multiprocessing_context = multiprocessing.get_context(method=\"spawn\")\r\n q = multiprocessing_context.Queue()\r\n p = multiprocessing_context.Process(target=bar, args=(q, i), daemon=True)\r\n\r\n with lock:\r\n p.start()\r\n\r\n p.join()\r\n\r\n\r\ndef bar(q, i):\r\n q.put('hello')\r\n print(f\"{i} Added to queue\")\r\n\r\nif __name__ == \"__main__\":\r\n multiprocessing.freeze_support()\r\n main()\r\n\r\n```\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2017-2023, PyInstaller Development Team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: Apache-2.0\n#-----------------------------------------------------------------------------\n\nimport multiprocessing\nimport multiprocessing.spawn as spawn\n# 'spawn' multiprocessing needs some adjustments on osx\nimport os\nimport sys\nfrom subprocess import _args_from_interpreter_flags\n\n# prevent spawn from trying to read __main__ in from the main script\nmultiprocessing.process.ORIGINAL_DIR = None\n\n\ndef _freeze_support():\n # We want to catch the two processes that are spawned by the multiprocessing code:\n # - the semaphore tracker, which cleans up named semaphores in the spawn multiprocessing mode\n # - the fork server, which keeps track of worker processes in forkserver mode.\n # both of these processes are started by spawning a new copy of the running executable, passing it the flags from\n # _args_from_interpreter_flags and then \"-c\" and an import statement.\n # Look for those flags and the import statement, then exec() the code ourselves.\n\n if (\n len(sys.argv) >= 2 and sys.argv[-2] == '-c' and sys.argv[-1].startswith((\n 'from multiprocessing.semaphore_tracker import main', # Py<3.8\n 'from multiprocessing.resource_tracker import main', # Py>=3.8\n 'from multiprocessing.forkserver import main'\n )) and set(sys.argv[1:-2]) == set(_args_from_interpreter_flags())\n ):\n exec(sys.argv[-1])\n sys.exit()\n\n if spawn.is_forking(sys.argv):\n kwds = {}\n for arg in sys.argv[2:]:\n name, value = arg.split('=')\n if value == 'None':\n kwds[name] = None\n else:\n kwds[name] = int(value)\n spawn.spawn_main(**kwds)\n sys.exit()\n\n\nmultiprocessing.freeze_support = spawn.freeze_support = _freeze_support\n\n# Bootloader unsets _MEIPASS2 for child processes to allow running PyInstaller binaries inside pyinstaller binaries.\n# This is ok for mac or unix with fork() system call. 
But on Windows we need to overcome missing fork() function.\n\nif sys.platform.startswith('win'):\n import multiprocessing.popen_spawn_win32 as forking\nelse:\n import multiprocessing.popen_fork as forking\n import multiprocessing.popen_spawn_posix as spawning\n\n\n# Mix-in to re-set _MEIPASS2 from sys._MEIPASS.\nclass FrozenSupportMixIn:\n def __init__(self, *args, **kw):\n if hasattr(sys, 'frozen'):\n # We have to set original _MEIPASS2 value from sys._MEIPASS to get --onefile mode working.\n os.putenv('_MEIPASS2', sys._MEIPASS) # @UndefinedVariable\n try:\n super().__init__(*args, **kw)\n finally:\n if hasattr(sys, 'frozen'):\n # On some platforms (e.g. AIX) 'os.unsetenv()' is not available. In those cases we cannot delete the\n # variable but only set it to the empty string. The bootloader can handle this case.\n if hasattr(os, 'unsetenv'):\n os.unsetenv('_MEIPASS2')\n else:\n os.putenv('_MEIPASS2', '')\n\n\n# Patch forking.Popen to re-set _MEIPASS2 from sys._MEIPASS.\nclass _Popen(FrozenSupportMixIn, forking.Popen):\n pass\n\n\nforking.Popen = _Popen\n\nif not sys.platform.startswith('win'):\n # Patch spawning.Popen to re-set _MEIPASS2 from sys._MEIPASS.\n class _Spawning_Popen(FrozenSupportMixIn, spawning.Popen):\n pass\n\n spawning.Popen = _Spawning_Popen\n", "path": "PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2017-2023, PyInstaller Development Team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: Apache-2.0\n#-----------------------------------------------------------------------------\n\nimport threading\nimport multiprocessing\nimport multiprocessing.spawn as spawn\n# 'spawn' multiprocessing needs some adjustments on osx\nimport os\nimport sys\nfrom subprocess import _args_from_interpreter_flags\n\n# prevent spawn from trying to read __main__ in from the main script\nmultiprocessing.process.ORIGINAL_DIR = None\n\n\ndef _freeze_support():\n # We want to catch the two processes that are spawned by the multiprocessing code:\n # - the semaphore tracker, which cleans up named semaphores in the spawn multiprocessing mode\n # - the fork server, which keeps track of worker processes in forkserver mode.\n # both of these processes are started by spawning a new copy of the running executable, passing it the flags from\n # _args_from_interpreter_flags and then \"-c\" and an import statement.\n # Look for those flags and the import statement, then exec() the code ourselves.\n\n if (\n len(sys.argv) >= 2 and sys.argv[-2] == '-c' and sys.argv[-1].startswith((\n 'from multiprocessing.semaphore_tracker import main', # Py<3.8\n 'from multiprocessing.resource_tracker import main', # Py>=3.8\n 'from multiprocessing.forkserver import main'\n )) and set(sys.argv[1:-2]) == set(_args_from_interpreter_flags())\n ):\n exec(sys.argv[-1])\n sys.exit()\n\n if spawn.is_forking(sys.argv):\n kwds = {}\n for arg in sys.argv[2:]:\n name, value = arg.split('=')\n if value == 'None':\n kwds[name] = None\n else:\n kwds[name] = int(value)\n spawn.spawn_main(**kwds)\n sys.exit()\n\n\nmultiprocessing.freeze_support = spawn.freeze_support = _freeze_support\n\n# Bootloader unsets _MEIPASS2 for child processes to allow running PyInstaller binaries inside pyinstaller binaries.\n# 
This is ok for mac or unix with fork() system call. But on Windows we need to overcome missing fork() function.\n\nif sys.platform.startswith('win'):\n import multiprocessing.popen_spawn_win32 as forking\nelse:\n import multiprocessing.popen_fork as forking\n import multiprocessing.popen_spawn_posix as spawning\n\n\n# Mix-in to re-set _MEIPASS2 from sys._MEIPASS.\nclass FrozenSupportMixIn:\n _lock = threading.Lock()\n\n def __init__(self, *args, **kw):\n # The whole code block needs be executed under a lock to prevent race conditions between `os.putenv` and\n # `os.unsetenv` calls when processes are spawned concurrently from multiple threads. See #7410.\n with self._lock:\n # We have to set original _MEIPASS2 value from sys._MEIPASS to get --onefile mode working.\n os.putenv('_MEIPASS2', sys._MEIPASS) # @UndefinedVariable\n try:\n super().__init__(*args, **kw)\n finally:\n # On some platforms (e.g. AIX) 'os.unsetenv()' is not available. In those cases we cannot delete the\n # variable but only set it to the empty string. The bootloader can handle this case.\n if hasattr(os, 'unsetenv'):\n os.unsetenv('_MEIPASS2')\n else:\n os.putenv('_MEIPASS2', '')\n\n\n# Patch forking.Popen to re-set _MEIPASS2 from sys._MEIPASS.\nclass _Popen(FrozenSupportMixIn, forking.Popen):\n pass\n\n\nforking.Popen = _Popen\n\nif not sys.platform.startswith('win'):\n # Patch spawning.Popen to re-set _MEIPASS2 from sys._MEIPASS.\n class _Spawning_Popen(FrozenSupportMixIn, spawning.Popen):\n pass\n\n spawning.Popen = _Spawning_Popen\n", "path": "PyInstaller/hooks/rthooks/pyi_rth_multiprocessing.py"}]} | 2,763 | 402 |
gh_patches_debug_13193 | rasdani/github-patches | git_diff | opensearch-project__opensearch-build-499 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make plugin integtest.sh run against non-snapshot build
The plugin integtest.sh picks up the OpenSearch version provided in build.gradle, which is 1.1.0-SNAPSHOT. Since the release candidates are non-snapshot build artifacts, make this configurable in the integ test job.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bundle-workflow/src/paths/script_finder.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6
7 import os
8
9
10 class ScriptFinder:
11 class ScriptNotFoundError(Exception):
12 def __init__(self, kind, paths):
13 self.kind = kind
14 self.paths = paths
15 super().__init__(f"Could not find {kind} script. Looked in {paths}.")
16
17 component_scripts_path = os.path.realpath(
18 os.path.join(
19 os.path.dirname(os.path.abspath(__file__)), "../../scripts/components"
20 )
21 )
22
23 default_scripts_path = os.path.realpath(
24 os.path.join(
25 os.path.dirname(os.path.abspath(__file__)), "../../scripts/default"
26 )
27 )
28
29 """
30 ScriptFinder is a helper that abstracts away the details of where to look for build, test and install scripts.
31
32 For build.sh and integtest.sh scripts, given a component name and a checked-out Git repository,
33 it will look in the following locations, in order:
34 * Root of the Git repository
35 * /scripts/<script-name> in the Git repository
36 * <component_scripts_path>/<component_name>/<script-name>
37 * <default_scripts_path>/<script-name>
38
39 For install.sh scripts, given a component name, it will look in the following locations, in order:
40 * <component_scripts_path>/<component_name>/<script-name>
41 * <default_scripts_path>/<script-name>
42 """
43
44 @classmethod
45 def __find_script(cls, name, paths):
46 script = next(filter(lambda path: os.path.exists(path), paths), None)
47 if script is None:
48 raise ScriptFinder.ScriptNotFoundError(name, paths)
49 return script
50
51 @classmethod
52 def find_build_script(cls, component_name, git_dir):
53 paths = [
54 os.path.realpath(os.path.join(git_dir, "build.sh")),
55 os.path.realpath(os.path.join(git_dir, "scripts/build.sh")),
56 os.path.realpath(
57 os.path.join(cls.component_scripts_path, component_name, "build.sh")
58 ),
59 os.path.realpath(os.path.join(cls.default_scripts_path, "build.sh")),
60 ]
61
62 return cls.__find_script("build.sh", paths)
63
64 @classmethod
65 def find_integ_test_script(cls, component_name, git_dir):
66 paths = [
67 os.path.realpath(os.path.join(git_dir, "integtest.sh")),
68 os.path.realpath(os.path.join(git_dir, "scripts/integtest.sh")),
69 os.path.realpath(
70 os.path.join(cls.component_scripts_path, component_name, "integtest.sh")
71 ),
72 os.path.realpath(os.path.join(cls.default_scripts_path, "integtest.sh")),
73 ]
74
75 return cls.__find_script("integtest.sh", paths)
76
77 @classmethod
78 def find_install_script(cls, component_name):
79 paths = [
80 os.path.realpath(
81 os.path.join(cls.component_scripts_path, component_name, "install.sh")
82 ),
83 os.path.realpath(os.path.join(cls.default_scripts_path, "install.sh")),
84 ]
85
86 return cls.__find_script("install.sh", paths)
87
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bundle-workflow/src/paths/script_finder.py b/bundle-workflow/src/paths/script_finder.py
--- a/bundle-workflow/src/paths/script_finder.py
+++ b/bundle-workflow/src/paths/script_finder.py
@@ -64,8 +64,9 @@
@classmethod
def find_integ_test_script(cls, component_name, git_dir):
paths = [
- os.path.realpath(os.path.join(git_dir, "integtest.sh")),
- os.path.realpath(os.path.join(git_dir, "scripts/integtest.sh")),
+ # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. See issue #497
+ # os.path.realpath(os.path.join(git_dir, "integtest.sh")),
+ # os.path.realpath(os.path.join(git_dir, "scripts/integtest.sh")),
os.path.realpath(
os.path.join(cls.component_scripts_path, component_name, "integtest.sh")
),
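With the repository-local candidates disabled, `find_integ_test_script` can only resolve to the scripts shipped with opensearch-build, which the integ test job controls (and can point at a non-snapshot version). A rough usage sketch follows; the component name, checkout path, and import path are assumptions, not taken from the repository.

```python
# Assumed import path; in the repository the module lives under bundle-workflow/src/paths/.
from paths.script_finder import ScriptFinder

script = ScriptFinder.find_integ_test_script(
    component_name="job-scheduler",          # hypothetical component
    git_dir="/tmp/checkouts/job-scheduler",  # hypothetical checkout
)
# Resolves to scripts/components/job-scheduler/integtest.sh if that override
# exists, otherwise to scripts/default/integtest.sh -- never to the integtest.sh
# inside the plugin repository checkout.
```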
| {"golden_diff": "diff --git a/bundle-workflow/src/paths/script_finder.py b/bundle-workflow/src/paths/script_finder.py\n--- a/bundle-workflow/src/paths/script_finder.py\n+++ b/bundle-workflow/src/paths/script_finder.py\n@@ -64,8 +64,9 @@\n @classmethod\n def find_integ_test_script(cls, component_name, git_dir):\n paths = [\n- os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n- os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n+ # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. See issue #497\n+ # os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n+ # os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"integtest.sh\")\n ),\n", "issue": "Make plugin integtest.sh run against non-snapshot build\nThe plugin integtest.sh picks up the opensearch version provided in build.gradle, which is 1.1.0-SNAPSHOT. Since the release candidates are non snapshot built artifacts, make this configurable in integ test job\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\n\nclass ScriptFinder:\n class ScriptNotFoundError(Exception):\n def __init__(self, kind, paths):\n self.kind = kind\n self.paths = paths\n super().__init__(f\"Could not find {kind} script. Looked in {paths}.\")\n\n component_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/components\"\n )\n )\n\n default_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/default\"\n )\n )\n\n \"\"\"\n ScriptFinder is a helper that abstracts away the details of where to look for build, test and install scripts.\n\n For build.sh and integtest.sh scripts, given a component name and a checked-out Git repository,\n it will look in the following locations, in order:\n * Root of the Git repository\n * /scripts/<script-name> in the Git repository\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n\n For install.sh scripts, given a component name, it will look in the following locations, in order:\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n \"\"\"\n\n @classmethod\n def __find_script(cls, name, paths):\n script = next(filter(lambda path: os.path.exists(path), paths), None)\n if script is None:\n raise ScriptFinder.ScriptNotFoundError(name, paths)\n return script\n\n @classmethod\n def find_build_script(cls, component_name, git_dir):\n paths = [\n os.path.realpath(os.path.join(git_dir, \"build.sh\")),\n os.path.realpath(os.path.join(git_dir, \"scripts/build.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"build.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"build.sh\")),\n ]\n\n return cls.__find_script(\"build.sh\", paths)\n\n @classmethod\n def find_integ_test_script(cls, component_name, git_dir):\n paths = [\n os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"integtest.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"integtest.sh\")),\n ]\n\n 
return cls.__find_script(\"integtest.sh\", paths)\n\n @classmethod\n def find_install_script(cls, component_name):\n paths = [\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"install.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"install.sh\")),\n ]\n\n return cls.__find_script(\"install.sh\", paths)\n", "path": "bundle-workflow/src/paths/script_finder.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\n\nclass ScriptFinder:\n class ScriptNotFoundError(Exception):\n def __init__(self, kind, paths):\n self.kind = kind\n self.paths = paths\n super().__init__(f\"Could not find {kind} script. Looked in {paths}.\")\n\n component_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/components\"\n )\n )\n\n default_scripts_path = os.path.realpath(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"../../scripts/default\"\n )\n )\n\n \"\"\"\n ScriptFinder is a helper that abstracts away the details of where to look for build, test and install scripts.\n\n For build.sh and integtest.sh scripts, given a component name and a checked-out Git repository,\n it will look in the following locations, in order:\n * Root of the Git repository\n * /scripts/<script-name> in the Git repository\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n\n For install.sh scripts, given a component name, it will look in the following locations, in order:\n * <component_scripts_path>/<component_name>/<script-name>\n * <default_scripts_path>/<script-name>\n \"\"\"\n\n @classmethod\n def __find_script(cls, name, paths):\n script = next(filter(lambda path: os.path.exists(path), paths), None)\n if script is None:\n raise ScriptFinder.ScriptNotFoundError(name, paths)\n return script\n\n @classmethod\n def find_build_script(cls, component_name, git_dir):\n paths = [\n os.path.realpath(os.path.join(git_dir, \"build.sh\")),\n os.path.realpath(os.path.join(git_dir, \"scripts/build.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"build.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"build.sh\")),\n ]\n\n return cls.__find_script(\"build.sh\", paths)\n\n @classmethod\n def find_integ_test_script(cls, component_name, git_dir):\n paths = [\n # TODO: Uncomment this after the integtest.sh tool is removed from plugin repos. See issue #497\n # os.path.realpath(os.path.join(git_dir, \"integtest.sh\")),\n # os.path.realpath(os.path.join(git_dir, \"scripts/integtest.sh\")),\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"integtest.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"integtest.sh\")),\n ]\n\n return cls.__find_script(\"integtest.sh\", paths)\n\n @classmethod\n def find_install_script(cls, component_name):\n paths = [\n os.path.realpath(\n os.path.join(cls.component_scripts_path, component_name, \"install.sh\")\n ),\n os.path.realpath(os.path.join(cls.default_scripts_path, \"install.sh\")),\n ]\n\n return cls.__find_script(\"install.sh\", paths)\n", "path": "bundle-workflow/src/paths/script_finder.py"}]} | 1,165 | 214 |
gh_patches_debug_1486 | rasdani/github-patches | git_diff | certbot__certbot-8776 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix lint and mypy with Python < 3.8
In https://github.com/certbot/certbot/pull/8748, we made a change that requires our lint and mypy checks to be run on Python 3.8+ in order to pass. See https://github.com/certbot/certbot/pull/8748#issuecomment-808790093 for the discussion of the problem.
I don't think we should do this. Certbot supports Python 3.6+ and I think it could cause a particularly bad experience for new devs that don't happen to know they need Python 3.8+. This change also broke our development Dockerfile as can be seen at https://dev.azure.com/certbot/certbot/_build/results?buildId=3742&view=logs&j=bea2d267-f41e-5b33-7b51-a88065a8cbb0&t=0dc90756-6888-5ee6-5a6a-5855e6b9ae76&l=1873. Instead, I think we should change our approach here so the tests work on all versions of Python we support. I'm open to other ideas, but the two ideas I had for this are:
1. Just declare a runtime dependency on `typing-extensions`.
2. Add `typing-extensions` as a dev/test dependency and try to import it, but fall back to code similar to what we currently have if it's not available (see the sketch after the issue text).
What do you think @adferrand? Are you interested in working on this?
--- END ISSUE ---
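For option 2 above, the import fallback the issue alludes to would look roughly like the sketch below. It assumes `Protocol` is the symbol that needs backporting on Python 3.6/3.7; this is an illustration, not certbot's actual code.

```python
# Hypothetical sketch of a typing-extensions fallback for Python 3.6/3.7.
import sys

if sys.version_info >= (3, 8):
    from typing import Protocol
else:
    try:
        # Backport package; only guaranteed to be present in dev/test installs.
        from typing_extensions import Protocol
    except ImportError:
        # Minimal stand-in so plain runtime installs keep working without the extra.
        class Protocol:  # type: ignore[no-redef]
            pass
```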
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `certbot/setup.py`
Content:
```
1 import codecs
2 from distutils.version import LooseVersion
3 import os
4 import re
5 import sys
6
7 from setuptools import __version__ as setuptools_version
8 from setuptools import find_packages
9 from setuptools import setup
10
11 min_setuptools_version='39.0.1'
12 # This conditional isn't necessary, but it provides better error messages to
13 # people who try to install this package with older versions of setuptools.
14 if LooseVersion(setuptools_version) < LooseVersion(min_setuptools_version):
15 raise RuntimeError(f'setuptools {min_setuptools_version}+ is required')
16
17 # Workaround for https://bugs.python.org/issue8876, see
18 # https://bugs.python.org/issue8876#msg208792
19 # This can be removed when using Python 2.7.9 or later:
20 # https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS
21 if os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant':
22 del os.link
23
24
25 def read_file(filename, encoding='utf8'):
26 """Read unicode from given file."""
27 with codecs.open(filename, encoding=encoding) as fd:
28 return fd.read()
29
30
31 here = os.path.abspath(os.path.dirname(__file__))
32
33 # read version number (and other metadata) from package init
34 init_fn = os.path.join(here, 'certbot', '__init__.py')
35 meta = dict(re.findall(r"""__([a-z]+)__ = '([^']+)""", read_file(init_fn)))
36
37 readme = read_file(os.path.join(here, 'README.rst'))
38 version = meta['version']
39
40 # This package relies on PyOpenSSL and requests, however, it isn't specified
41 # here to avoid masking the more specific request requirements in acme. See
42 # https://github.com/pypa/pip/issues/988 for more info.
43 install_requires = [
44 'acme>=1.8.0',
45 # We technically need ConfigArgParse 0.10.0 for Python 2.6 support, but
46 # saying so here causes a runtime error against our temporary fork of 0.9.3
47 # in which we added 2.6 support (see #2243), so we relax the requirement.
48 'ConfigArgParse>=0.9.3',
49 'configobj>=5.0.6',
50 'cryptography>=2.1.4',
51 'distro>=1.0.1',
52 # 1.1.0+ is required to avoid the warnings described at
53 # https://github.com/certbot/josepy/issues/13.
54 'josepy>=1.1.0',
55 'parsedatetime>=2.4',
56 'pyrfc3339',
57 'pytz',
58 # This dependency needs to be added using environment markers to avoid its
59 # installation on Linux.
60 'pywin32>=300 ; sys_platform == "win32"',
61 f'setuptools>={min_setuptools_version}',
62 'zope.component',
63 'zope.interface',
64 ]
65
66 dev_extras = [
67 'astroid',
68 'azure-devops',
69 'coverage',
70 'ipdb',
71 'mypy',
72 'PyGithub',
73 # 1.1.0+ is required for poetry to use the poetry-core library for the
74 # build system declared in tools/pinning/pyproject.toml.
75 'poetry>=1.1.0',
76 'pylint',
77 'pytest',
78 'pytest-cov',
79 'pytest-xdist',
80 'tox',
81 'twine',
82 'wheel',
83 ]
84
85 docs_extras = [
86 # If you have Sphinx<1.5.1, you need docutils<0.13.1
87 # https://github.com/sphinx-doc/sphinx/issues/3212
88 'repoze.sphinx.autointerface',
89 'Sphinx>=1.2', # Annotation support
90 'sphinx_rtd_theme',
91 ]
92
93 setup(
94 name='certbot',
95 version=version,
96 description="ACME client",
97 long_description=readme,
98 url='https://github.com/letsencrypt/letsencrypt',
99 author="Certbot Project",
100 author_email='[email protected]',
101 license='Apache License 2.0',
102 python_requires='>=3.6',
103 classifiers=[
104 'Development Status :: 5 - Production/Stable',
105 'Environment :: Console',
106 'Environment :: Console :: Curses',
107 'Intended Audience :: System Administrators',
108 'License :: OSI Approved :: Apache Software License',
109 'Operating System :: POSIX :: Linux',
110 'Programming Language :: Python',
111 'Programming Language :: Python :: 3',
112 'Programming Language :: Python :: 3.6',
113 'Programming Language :: Python :: 3.7',
114 'Programming Language :: Python :: 3.8',
115 'Programming Language :: Python :: 3.9',
116 'Topic :: Internet :: WWW/HTTP',
117 'Topic :: Security',
118 'Topic :: System :: Installation/Setup',
119 'Topic :: System :: Networking',
120 'Topic :: System :: Systems Administration',
121 'Topic :: Utilities',
122 ],
123
124 packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']),
125 include_package_data=True,
126
127 install_requires=install_requires,
128 extras_require={
129 'dev': dev_extras,
130 'docs': docs_extras,
131 },
132
133 entry_points={
134 'console_scripts': [
135 'certbot = certbot.main:main',
136 ],
137 'certbot.plugins': [
138 'manual = certbot._internal.plugins.manual:Authenticator',
139 'null = certbot._internal.plugins.null:Installer',
140 'standalone = certbot._internal.plugins.standalone:Authenticator',
141 'webroot = certbot._internal.plugins.webroot:Authenticator',
142 ],
143 },
144 )
145
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/certbot/setup.py b/certbot/setup.py
--- a/certbot/setup.py
+++ b/certbot/setup.py
@@ -77,6 +77,9 @@
'pytest',
'pytest-cov',
'pytest-xdist',
+ # typing-extensions is required to import typing.Protocol and make the mypy checks
+ # pass (along with pylint about non-existent objects) on Python 3.6 & 3.7
+ 'typing-extensions',
'tox',
'twine',
'wheel',
| {"golden_diff": "diff --git a/certbot/setup.py b/certbot/setup.py\n--- a/certbot/setup.py\n+++ b/certbot/setup.py\n@@ -77,6 +77,9 @@\n 'pytest',\n 'pytest-cov',\n 'pytest-xdist',\n+ # typing-extensions is required to import typing.Protocol and make the mypy checks\n+ # pass (along with pylint about non-existent objects) on Python 3.6 & 3.7\n+ 'typing-extensions',\n 'tox',\n 'twine',\n 'wheel',\n", "issue": "Fix lint and mypy with Python < 3.8\nIn https://github.com/certbot/certbot/pull/8748, we made a change that causes our lint and mypy tests to need to be run on Python 3.8+ to pass. See https://github.com/certbot/certbot/pull/8748#issuecomment-808790093 for the discussion of the problem here.\r\n\r\nI don't think we should do this. Certbot supports Python 3.6+ and I think it could cause a particularly bad experience for new devs that don't happen to know they need Python 3.8+. This change also broke our development Dockerfile as can be seen at https://dev.azure.com/certbot/certbot/_build/results?buildId=3742&view=logs&j=bea2d267-f41e-5b33-7b51-a88065a8cbb0&t=0dc90756-6888-5ee6-5a6a-5855e6b9ae76&l=1873. Instead, I think we should change our approach here so the tests work on all versions of Python we support. I'm open to other ideas, but the two ideas I had for this are:\r\n\r\n1. Just declare a runtime dependency on `typing-extensions`.\r\n2. Add `typing-extensions` as a dev/test dependency and try to import it, but use similar fallback code to what we current have if it's not available.\r\n\r\nWhat do you think @adferrand? Are you interested in working on this?\n", "before_files": [{"content": "import codecs\nfrom distutils.version import LooseVersion\nimport os\nimport re\nimport sys\n\nfrom setuptools import __version__ as setuptools_version\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nmin_setuptools_version='39.0.1'\n# This conditional isn't necessary, but it provides better error messages to\n# people who try to install this package with older versions of setuptools.\nif LooseVersion(setuptools_version) < LooseVersion(min_setuptools_version):\n raise RuntimeError(f'setuptools {min_setuptools_version}+ is required')\n\n# Workaround for https://bugs.python.org/issue8876, see\n# https://bugs.python.org/issue8876#msg208792\n# This can be removed when using Python 2.7.9 or later:\n# https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS\nif os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant':\n del os.link\n\n\ndef read_file(filename, encoding='utf8'):\n \"\"\"Read unicode from given file.\"\"\"\n with codecs.open(filename, encoding=encoding) as fd:\n return fd.read()\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# read version number (and other metadata) from package init\ninit_fn = os.path.join(here, 'certbot', '__init__.py')\nmeta = dict(re.findall(r\"\"\"__([a-z]+)__ = '([^']+)\"\"\", read_file(init_fn)))\n\nreadme = read_file(os.path.join(here, 'README.rst'))\nversion = meta['version']\n\n# This package relies on PyOpenSSL and requests, however, it isn't specified\n# here to avoid masking the more specific request requirements in acme. 
See\n# https://github.com/pypa/pip/issues/988 for more info.\ninstall_requires = [\n 'acme>=1.8.0',\n # We technically need ConfigArgParse 0.10.0 for Python 2.6 support, but\n # saying so here causes a runtime error against our temporary fork of 0.9.3\n # in which we added 2.6 support (see #2243), so we relax the requirement.\n 'ConfigArgParse>=0.9.3',\n 'configobj>=5.0.6',\n 'cryptography>=2.1.4',\n 'distro>=1.0.1',\n # 1.1.0+ is required to avoid the warnings described at\n # https://github.com/certbot/josepy/issues/13.\n 'josepy>=1.1.0',\n 'parsedatetime>=2.4',\n 'pyrfc3339',\n 'pytz',\n # This dependency needs to be added using environment markers to avoid its\n # installation on Linux.\n 'pywin32>=300 ; sys_platform == \"win32\"',\n f'setuptools>={min_setuptools_version}',\n 'zope.component',\n 'zope.interface',\n]\n\ndev_extras = [\n 'astroid',\n 'azure-devops',\n 'coverage',\n 'ipdb',\n 'mypy',\n 'PyGithub',\n # 1.1.0+ is required for poetry to use the poetry-core library for the\n # build system declared in tools/pinning/pyproject.toml.\n 'poetry>=1.1.0',\n 'pylint',\n 'pytest',\n 'pytest-cov',\n 'pytest-xdist',\n 'tox',\n 'twine',\n 'wheel',\n]\n\ndocs_extras = [\n # If you have Sphinx<1.5.1, you need docutils<0.13.1\n # https://github.com/sphinx-doc/sphinx/issues/3212\n 'repoze.sphinx.autointerface',\n 'Sphinx>=1.2', # Annotation support\n 'sphinx_rtd_theme',\n]\n\nsetup(\n name='certbot',\n version=version,\n description=\"ACME client\",\n long_description=readme,\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Certbot Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n python_requires='>=3.6',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Console :: Curses',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n 'Topic :: System :: Installation/Setup',\n 'Topic :: System :: Networking',\n 'Topic :: System :: Systems Administration',\n 'Topic :: Utilities',\n ],\n\n packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']),\n include_package_data=True,\n\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n },\n\n entry_points={\n 'console_scripts': [\n 'certbot = certbot.main:main',\n ],\n 'certbot.plugins': [\n 'manual = certbot._internal.plugins.manual:Authenticator',\n 'null = certbot._internal.plugins.null:Installer',\n 'standalone = certbot._internal.plugins.standalone:Authenticator',\n 'webroot = certbot._internal.plugins.webroot:Authenticator',\n ],\n },\n)\n", "path": "certbot/setup.py"}], "after_files": [{"content": "import codecs\nfrom distutils.version import LooseVersion\nimport os\nimport re\nimport sys\n\nfrom setuptools import __version__ as setuptools_version\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nmin_setuptools_version='39.0.1'\n# This conditional isn't necessary, but it provides better error messages to\n# people who try to install this package with older versions of setuptools.\nif LooseVersion(setuptools_version) < LooseVersion(min_setuptools_version):\n raise 
RuntimeError(f'setuptools {min_setuptools_version}+ is required')\n\n# Workaround for https://bugs.python.org/issue8876, see\n# https://bugs.python.org/issue8876#msg208792\n# This can be removed when using Python 2.7.9 or later:\n# https://hg.python.org/cpython/raw-file/v2.7.9/Misc/NEWS\nif os.path.abspath(__file__).split(os.path.sep)[1] == 'vagrant':\n del os.link\n\n\ndef read_file(filename, encoding='utf8'):\n \"\"\"Read unicode from given file.\"\"\"\n with codecs.open(filename, encoding=encoding) as fd:\n return fd.read()\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# read version number (and other metadata) from package init\ninit_fn = os.path.join(here, 'certbot', '__init__.py')\nmeta = dict(re.findall(r\"\"\"__([a-z]+)__ = '([^']+)\"\"\", read_file(init_fn)))\n\nreadme = read_file(os.path.join(here, 'README.rst'))\nversion = meta['version']\n\n# This package relies on PyOpenSSL and requests, however, it isn't specified\n# here to avoid masking the more specific request requirements in acme. See\n# https://github.com/pypa/pip/issues/988 for more info.\ninstall_requires = [\n 'acme>=1.8.0',\n # We technically need ConfigArgParse 0.10.0 for Python 2.6 support, but\n # saying so here causes a runtime error against our temporary fork of 0.9.3\n # in which we added 2.6 support (see #2243), so we relax the requirement.\n 'ConfigArgParse>=0.9.3',\n 'configobj>=5.0.6',\n 'cryptography>=2.1.4',\n 'distro>=1.0.1',\n # 1.1.0+ is required to avoid the warnings described at\n # https://github.com/certbot/josepy/issues/13.\n 'josepy>=1.1.0',\n 'parsedatetime>=2.4',\n 'pyrfc3339',\n 'pytz',\n # This dependency needs to be added using environment markers to avoid its\n # installation on Linux.\n 'pywin32>=300 ; sys_platform == \"win32\"',\n f'setuptools>={min_setuptools_version}',\n 'zope.component',\n 'zope.interface',\n]\n\ndev_extras = [\n 'astroid',\n 'azure-devops',\n 'coverage',\n 'ipdb',\n 'mypy',\n 'PyGithub',\n # 1.1.0+ is required for poetry to use the poetry-core library for the\n # build system declared in tools/pinning/pyproject.toml.\n 'poetry>=1.1.0',\n 'pylint',\n 'pytest',\n 'pytest-cov',\n 'pytest-xdist',\n # typing-extensions is required to import typing.Protocol and make the mypy checks\n # pass (along with pylint about non-existent objects) on Python 3.6 & 3.7\n 'typing-extensions',\n 'tox',\n 'twine',\n 'wheel',\n]\n\ndocs_extras = [\n # If you have Sphinx<1.5.1, you need docutils<0.13.1\n # https://github.com/sphinx-doc/sphinx/issues/3212\n 'repoze.sphinx.autointerface',\n 'Sphinx>=1.2', # Annotation support\n 'sphinx_rtd_theme',\n]\n\nsetup(\n name='certbot',\n version=version,\n description=\"ACME client\",\n long_description=readme,\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Certbot Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n python_requires='>=3.6',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Console :: Curses',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n 'Topic :: System :: Installation/Setup',\n 'Topic :: System :: Networking',\n 
'Topic :: System :: Systems Administration',\n 'Topic :: Utilities',\n ],\n\n packages=find_packages(exclude=['docs', 'examples', 'tests', 'venv']),\n include_package_data=True,\n\n install_requires=install_requires,\n extras_require={\n 'dev': dev_extras,\n 'docs': docs_extras,\n },\n\n entry_points={\n 'console_scripts': [\n 'certbot = certbot.main:main',\n ],\n 'certbot.plugins': [\n 'manual = certbot._internal.plugins.manual:Authenticator',\n 'null = certbot._internal.plugins.null:Installer',\n 'standalone = certbot._internal.plugins.standalone:Authenticator',\n 'webroot = certbot._internal.plugins.webroot:Authenticator',\n ],\n },\n)\n", "path": "certbot/setup.py"}]} | 2,224 | 125 |
gh_patches_debug_32163 | rasdani/github-patches | git_diff | opsdroid__opsdroid-1183 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Google Style Docstrings
We should implement Google Style Docstrings for every function, method, and class in opsdroid. This style will support existing documentation and will help in the future by generating documentation automatically.
This requires a fair bit of effort, so this issue can be worked on by more than one contributor; just make sure everyone knows what you are working on so that other contributors don't spend time on the same thing.
If you are unfamiliar with the Google Style Docstrings I'd recommend that you check these resources:
 - [Sphinx 1.8.0+ - Google Style Docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html)
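For quick reference, a minimal Google-style docstring looks roughly like this (illustrative sketch only; the function and its arguments are made up and not taken from the opsdroid codebase):

```python
def connect(host, port, retries=3):
    """Open a connection to a chat backend.

    Args:
        host (str): Hostname or IP address of the backend.
        port (int): TCP port to connect to.
        retries (int, optional): Number of connection attempts. Defaults to 3.

    Returns:
        Connection: An open connection object.

    Raises:
        ConnectionError: If the backend cannot be reached after ``retries`` attempts.

    """
```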
Docstrings that need to be updated:
- main.py
- [x] configure_lang
- [ ] configure_log
- [ ] get_logging_level
- [ ] check_dependencies
- [ ] print_version
- [ ] print_example_config
- [ ] edit_files
- [x] welcome_message
- ~~helper.py~~
- [x] get_opsdroid
- [x] del_rw
- [x] move_config_to_appdir
- memory.py
- [x] Memory
- [x] get
- [x] put
- [x] _get_from_database
- [x] _put_to_database
- message.py
- [x] Message
- [x] __init__
- [x] _thinking_delay
- [x] _typing delay
- [x] respond
- [x] react
- web.py
- [ ] Web
- [x] get_port
- [x] get_host
- [x] get_ssl_context
- [ ] start
- [ ] build_response
- [ ] web_index_handler
- [ ] web_stats_handler
- matchers.py
- [ ] match_regex
- [ ] match_apiai_action
- [ ] match_apiai_intent
- [ ] match_dialogflow_action
- [ ] match_dialogflow_intent
- [ ] match_luisai_intent
- [ ] match_rasanlu
- [ ] match_recastai
- [ ] match_witai
- [ ] match_crontab
- [ ] match_webhook
- [ ] match_always
- core.py
- [ ] OpsDroid
- [ ] default_connector
- [ ] exit
- [ ] critical
- [ ] call_stop
- [ ] disconnect
- [ ] stop
- [ ] load
- [ ] start_loop
- [x] setup_skills
- [ ] train_parsers
- [ ] start_connector_tasks
- [ ] start_database
- [ ] run_skill
- [ ] get_ranked_skills
- [ ] parse
- loader.py
- [ ] Loader
- [x] import_module_from_spec
- [x] import_module
- [x] check_cache
- [x] build_module_import_path
- [x] build_module_install_path
- [x] git_clone
- [x] git_pull
- [x] pip_install_deps
- [x] create_default_config
- [x] load_config_file
- [ ] envvar_constructor
- [ ] include_constructor
- [x] setup_modules_directory
- [x] load_modules_from_config
- [x] _load_modules
- [x] _install_module
- [x] _update_module
- [ ] _install_git_module
- [x] _install_local_module
---- ORIGINAL POST ----
I've been wondering about this for a while now and I would like to know if we should replace/update all the docstrings in opsdroid with Google-style docstrings.
I think this could help new and old contributors to contribute and commit to opsdroid, since Google-style docstrings give more information about every method/function and specify clearly what sort of input the function/method expects, what it will return, and what will be raised (if applicable).
The downside of this style is that the length of every .py file will increase due to the docstrings, but since most IDEs allow you to collapse those sections it shouldn't be too bad.
Here is a good example of Google-style docstrings: [Sphinx 1.8.0+ - Google Style Docstrings](http://www.sphinx-doc.org/en/master/ext/example_google.html)
I would like to know what you all think about this idea and if it's worth spending time on it.
--- END ISSUE ---
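Since the stated goal of the issue above is to generate documentation automatically from these docstrings, it is worth noting how Google-style sections are usually consumed: Sphinx parses them via the Napoleon extension. A minimal configuration sketch (assumed settings, not taken from opsdroid's actual docs setup):

```python
# docs/conf.py (Sphinx) -- minimal settings for Google-style docstrings
extensions = [
    "sphinx.ext.autodoc",    # pull docstrings out of the source code
    "sphinx.ext.napoleon",   # parse Google-style Args/Returns/Raises sections
]
napoleon_google_docstring = True
napoleon_numpy_docstring = False
```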
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opsdroid/cli/utils.py`
Content:
```
1 """Utilities for the opsdroid CLI commands."""
2
3 import click
4 import gettext
5 import os
6 import logging
7 import subprocess
8 import sys
9 import time
10 import warnings
11
12 from opsdroid.const import (
13 DEFAULT_LOG_FILENAME,
14 LOCALE_DIR,
15 DEFAULT_LANGUAGE,
16 DEFAULT_CONFIG_PATH,
17 )
18
19 _LOGGER = logging.getLogger("opsdroid")
20
21
22 def edit_files(ctx, param, value):
23 """Open config/log file with favourite editor."""
24 if value == "config":
25 file = DEFAULT_CONFIG_PATH
26 if ctx.command.name == "cli":
27 warn_deprecated_cli_option(
28 "The flag -e/--edit-files has been deprecated. "
29 "Please run `opsdroid config edit` instead."
30 )
31 elif value == "log":
32 file = DEFAULT_LOG_FILENAME
33 if ctx.command.name == "cli":
34 warn_deprecated_cli_option(
35 "The flag -l/--view-log has been deprecated. "
36 "Please run `opsdroid logs` instead."
37 )
38 else:
39 return
40
41 editor = os.environ.get("EDITOR", "vi")
42 if editor == "vi":
43 click.echo(
44 "You are about to edit a file in vim. \n"
45 "Read the tutorial on vim at: https://bit.ly/2HRvvrB"
46 )
47 time.sleep(3)
48
49 subprocess.run([editor, file])
50 ctx.exit(0)
51
52
53 def warn_deprecated_cli_option(text):
54 """Warn users that the cli option they have used is deprecated."""
55 print(f"Warning: {text}")
56 warnings.warn(text, DeprecationWarning)
57
58
59 def configure_lang(config):
60 """Configure app language based on user config.
61
62 Args:
63 config: Language Configuration and it uses ISO 639-1 code.
64 for more info https://en.m.wikipedia.org/wiki/List_of_ISO_639-1_codes
65
66
67 """
68 lang_code = config.get("lang", DEFAULT_LANGUAGE)
69 if lang_code != DEFAULT_LANGUAGE:
70 lang = gettext.translation("opsdroid", LOCALE_DIR, (lang_code,), fallback=True)
71 lang.install()
72
73
74 def check_dependencies():
75 """Check for system dependencies required by opsdroid."""
76 if sys.version_info.major < 3 or sys.version_info.minor < 6:
77 logging.critical(_("Whoops! opsdroid requires python 3.6 or above."))
78 sys.exit(1)
79
80
81 def welcome_message(config):
82 """Add welcome message if set to true in configuration.
83
84 Args:
85 config: config loaded by Loader
86
87 Raises:
88 KeyError: If 'welcome-message' key is not found in configuration file
89
90 """
91 try:
92 if config["welcome-message"]:
93 _LOGGER.info("=" * 40)
94 _LOGGER.info(
95 _(
96 "You can customise your opsdroid by modifying "
97 "your configuration.yaml"
98 )
99 )
100 _LOGGER.info(
101 _("Read more at: " "http://opsdroid.readthedocs.io/#configuration")
102 )
103 _LOGGER.info(_("Watch the Get Started Videos at: " "http://bit.ly/2fnC0Fh"))
104 _LOGGER.info(
105 _(
106 "Install Opsdroid Desktop at: \n"
107 "https://github.com/opsdroid/opsdroid-desktop/"
108 "releases"
109 )
110 )
111 _LOGGER.info("=" * 40)
112 except KeyError:
113 _LOGGER.warning(
114 _("'welcome-message: true/false' is missing in configuration.yaml")
115 )
116
```
Path: `opsdroid/cli/config.py`
Content:
```
1 """The config subcommand for opsdroid cli."""
2
3 import click
4
5 from opsdroid.cli.utils import edit_files, warn_deprecated_cli_option
6 from opsdroid.const import EXAMPLE_CONFIG_FILE
7
8
9 def print_example_config(ctx, param, value):
10 """[Deprecated] Print out the example config."""
11 if not value or ctx.resilient_parsing:
12 return
13 if ctx.command.name == "cli":
14 warn_deprecated_cli_option(
15 "The flag --gen-config has been deprecated. "
16 "Please run `opsdroid config gen` instead."
17 )
18 with open(EXAMPLE_CONFIG_FILE, "r") as conf:
19 click.echo(conf.read())
20 ctx.exit(0)
21
22
23 @click.group()
24 def config():
25 """Subcommands related to opsdroid configuration."""
26
27
28 @config.command()
29 @click.pass_context
30 def gen(ctx):
31 """Print out the example config."""
32 print_example_config(ctx, None, True)
33
34
35 @config.command()
36 @click.pass_context
37 def edit(ctx):
38 """Print out the example config."""
39 edit_files(ctx, None, "config")
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/opsdroid/cli/config.py b/opsdroid/cli/config.py
--- a/opsdroid/cli/config.py
+++ b/opsdroid/cli/config.py
@@ -7,7 +7,19 @@
def print_example_config(ctx, param, value):
- """[Deprecated] Print out the example config."""
+ """[Deprecated] Print out the example config.
+
+ Args:
+ ctx (:obj:`click.Context`): The current click cli context.
+ param (dict): a dictionary of all parameters pass to the click
+ context when invoking this function as a callback.
+ value (bool): the value of this parameter after invocation.
+ Defaults to False, set to True when this flag is called.
+
+ Returns:
+ int: the exit code. Always returns 0 in this case.
+
+ """
if not value or ctx.resilient_parsing:
return
if ctx.command.name == "cli":
diff --git a/opsdroid/cli/utils.py b/opsdroid/cli/utils.py
--- a/opsdroid/cli/utils.py
+++ b/opsdroid/cli/utils.py
@@ -20,7 +20,21 @@
def edit_files(ctx, param, value):
- """Open config/log file with favourite editor."""
+ """Open config/log file with favourite editor.
+
+ Args:
+ ctx (:obj:`click.Context`): The current click cli context.
+ param (dict): a dictionary of all parameters pass to the click
+ context when invoking this function as a callback.
+ value (string): the value of this parameter after invocation.
+ It is either "config" or "log" depending on the program
+ calling this function.
+
+ Returns:
+ int: the exit code. Always returns 0 in this case.
+
+ """
+
if value == "config":
file = DEFAULT_CONFIG_PATH
if ctx.command.name == "cli":
@@ -72,7 +86,13 @@
def check_dependencies():
- """Check for system dependencies required by opsdroid."""
+ """Check for system dependencies required by opsdroid.
+
+ Returns:
+ int: the exit code. Returns 1 if the Python version installed is
+ below 3.6.
+
+ """
if sys.version_info.major < 3 or sys.version_info.minor < 6:
logging.critical(_("Whoops! opsdroid requires python 3.6 or above."))
sys.exit(1)
| {"golden_diff": "diff --git a/opsdroid/cli/config.py b/opsdroid/cli/config.py\n--- a/opsdroid/cli/config.py\n+++ b/opsdroid/cli/config.py\n@@ -7,7 +7,19 @@\n \n \n def print_example_config(ctx, param, value):\n- \"\"\"[Deprecated] Print out the example config.\"\"\"\n+ \"\"\"[Deprecated] Print out the example config.\n+\n+ Args:\n+ ctx (:obj:`click.Context`): The current click cli context.\n+ param (dict): a dictionary of all parameters pass to the click\n+ context when invoking this function as a callback.\n+ value (bool): the value of this parameter after invocation.\n+ Defaults to False, set to True when this flag is called.\n+\n+ Returns:\n+ int: the exit code. Always returns 0 in this case.\n+\n+ \"\"\"\n if not value or ctx.resilient_parsing:\n return\n if ctx.command.name == \"cli\":\ndiff --git a/opsdroid/cli/utils.py b/opsdroid/cli/utils.py\n--- a/opsdroid/cli/utils.py\n+++ b/opsdroid/cli/utils.py\n@@ -20,7 +20,21 @@\n \n \n def edit_files(ctx, param, value):\n- \"\"\"Open config/log file with favourite editor.\"\"\"\n+ \"\"\"Open config/log file with favourite editor.\n+\n+ Args:\n+ ctx (:obj:`click.Context`): The current click cli context.\n+ param (dict): a dictionary of all parameters pass to the click\n+ context when invoking this function as a callback.\n+ value (string): the value of this parameter after invocation.\n+ It is either \"config\" or \"log\" depending on the program\n+ calling this function.\n+\n+ Returns:\n+ int: the exit code. Always returns 0 in this case.\n+\n+ \"\"\"\n+\n if value == \"config\":\n file = DEFAULT_CONFIG_PATH\n if ctx.command.name == \"cli\":\n@@ -72,7 +86,13 @@\n \n \n def check_dependencies():\n- \"\"\"Check for system dependencies required by opsdroid.\"\"\"\n+ \"\"\"Check for system dependencies required by opsdroid.\n+\n+ Returns:\n+ int: the exit code. Returns 1 if the Python version installed is\n+ below 3.6.\n+\n+ \"\"\"\n if sys.version_info.major < 3 or sys.version_info.minor < 6:\n logging.critical(_(\"Whoops! opsdroid requires python 3.6 or above.\"))\n sys.exit(1)\n", "issue": "Add Google Style Docstrings\nWe should implement Google Style Docstrings to every function, method, class in opsdroid. 
This style will support existing documentation and will help in the future by generating documentation automatically.\r\n\r\nThis consists in a bit of effort so this issue can be worked by more than one contributor, just make sure that everyone knows what you are working on in order to avoid other contributors spending time on something that you are working on.\r\n\r\nIf you are unfamiliar with the Google Style Docstrings I'd recommend that you check these resources:\r\n\r\n - [Sphix 1.8.0+ - Google Style Docstrings](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html)\r\n\r\n\r\n\r\nDocstrings that need to be updated:\r\n\r\n- main.py\r\n - [x] configure_lang\r\n - [ ] configure_log\r\n - [ ] get_logging_level\r\n - [ ] check_dependencies\r\n - [ ] print_version\r\n - [ ] print_example_config\r\n - [ ] edit_files\r\n - [x] welcome_message\r\n- ~~helper.py~~\r\n - [x] get_opsdroid\r\n - [x] del_rw\r\n - [x] move_config_to_appdir\r\n- memory.py\r\n - [x] Memory\r\n - [x] get\r\n - [x] put\r\n - [x] _get_from_database\r\n - [x] _put_to_database\r\n- message.py\r\n - [x] Message\r\n - [x] __init__\r\n - [x] _thinking_delay\r\n - [x] _typing delay\r\n - [x] respond\r\n - [x] react\r\n- web.py\r\n - [ ] Web\r\n - [x] get_port\r\n - [x] get_host\r\n - [x] get_ssl_context\r\n - [ ] start\r\n - [ ] build_response\r\n - [ ] web_index_handler\r\n - [ ] web_stats_handler\r\n- matchers.py\r\n - [ ] match_regex\r\n - [ ] match_apiai_action\r\n - [ ] match_apiai_intent\r\n - [ ] match_dialogflow_action\r\n - [ ] match_dialogflow_intent\r\n - [ ] match_luisai_intent\r\n - [ ] match_rasanlu\r\n - [ ] match_recastai\r\n - [ ] match_witai\r\n - [ ] match_crontab\r\n - [ ] match_webhook\r\n - [ ] match_always\r\n- core.py\r\n - [ ] OpsDroid\r\n - [ ] default_connector\r\n - [ ] exit\r\n - [ ] critical\r\n - [ ] call_stop\r\n - [ ] disconnect\r\n - [ ] stop\r\n - [ ] load\r\n - [ ] start_loop\r\n - [x] setup_skills\r\n - [ ] train_parsers\r\n - [ ] start_connector_tasks\r\n - [ ] start_database\r\n - [ ] run_skill\r\n - [ ] get_ranked_skills\r\n - [ ] parse\r\n- loader.py\r\n - [ ] Loader\r\n - [x] import_module_from_spec\r\n - [x] import_module\r\n - [x] check_cache\r\n - [x] build_module_import_path\r\n - [x] build_module_install_path\r\n - [x] git_clone\r\n - [x] git_pull\r\n - [x] pip_install_deps\r\n - [x] create_default_config\r\n - [x] load_config_file\r\n - [ ] envvar_constructor\r\n - [ ] include_constructor\r\n - [x] setup_modules_directory\r\n - [x] load_modules_from_config\r\n - [x] _load_modules\r\n - [x] _install_module\r\n - [x] _update_module\r\n - [ ] _install_git_module\r\n - [x] _install_local_module\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n---- ORIGINAL POST ---- \r\nI've been wondering about this for a while now and I would like to know if we should replace/update all the docstrings in opsdroid with the Google Style doc strings. 
\r\n\r\nI think this could help new and old contributors to contribute and commit to opsdroid since the Google Style docstrings give more information about every method/function and specifies clearly what sort of input the function/method expects, what will it return and what will be raised (if applicable).\r\n\r\nThe downsize of this style is that the length of every .py file will increase due to the doc strings, but since most IDE's allow you to hide those fields it shouldn't be too bad.\r\n\r\nHere is a good example of Google Style Doc strings: [Sphix 1.8.0+ - Google Style Docstrings](http://www.sphinx-doc.org/en/master/ext/example_google.html)\r\n\r\nI would like to know what you all think about this idea and if its worth spending time on it.\n", "before_files": [{"content": "\"\"\"Utilities for the opsdroid CLI commands.\"\"\"\n\nimport click\nimport gettext\nimport os\nimport logging\nimport subprocess\nimport sys\nimport time\nimport warnings\n\nfrom opsdroid.const import (\n DEFAULT_LOG_FILENAME,\n LOCALE_DIR,\n DEFAULT_LANGUAGE,\n DEFAULT_CONFIG_PATH,\n)\n\n_LOGGER = logging.getLogger(\"opsdroid\")\n\n\ndef edit_files(ctx, param, value):\n \"\"\"Open config/log file with favourite editor.\"\"\"\n if value == \"config\":\n file = DEFAULT_CONFIG_PATH\n if ctx.command.name == \"cli\":\n warn_deprecated_cli_option(\n \"The flag -e/--edit-files has been deprecated. \"\n \"Please run `opsdroid config edit` instead.\"\n )\n elif value == \"log\":\n file = DEFAULT_LOG_FILENAME\n if ctx.command.name == \"cli\":\n warn_deprecated_cli_option(\n \"The flag -l/--view-log has been deprecated. \"\n \"Please run `opsdroid logs` instead.\"\n )\n else:\n return\n\n editor = os.environ.get(\"EDITOR\", \"vi\")\n if editor == \"vi\":\n click.echo(\n \"You are about to edit a file in vim. \\n\"\n \"Read the tutorial on vim at: https://bit.ly/2HRvvrB\"\n )\n time.sleep(3)\n\n subprocess.run([editor, file])\n ctx.exit(0)\n\n\ndef warn_deprecated_cli_option(text):\n \"\"\"Warn users that the cli option they have used is deprecated.\"\"\"\n print(f\"Warning: {text}\")\n warnings.warn(text, DeprecationWarning)\n\n\ndef configure_lang(config):\n \"\"\"Configure app language based on user config.\n\n Args:\n config: Language Configuration and it uses ISO 639-1 code.\n for more info https://en.m.wikipedia.org/wiki/List_of_ISO_639-1_codes\n\n\n \"\"\"\n lang_code = config.get(\"lang\", DEFAULT_LANGUAGE)\n if lang_code != DEFAULT_LANGUAGE:\n lang = gettext.translation(\"opsdroid\", LOCALE_DIR, (lang_code,), fallback=True)\n lang.install()\n\n\ndef check_dependencies():\n \"\"\"Check for system dependencies required by opsdroid.\"\"\"\n if sys.version_info.major < 3 or sys.version_info.minor < 6:\n logging.critical(_(\"Whoops! 
opsdroid requires python 3.6 or above.\"))\n sys.exit(1)\n\n\ndef welcome_message(config):\n \"\"\"Add welcome message if set to true in configuration.\n\n Args:\n config: config loaded by Loader\n\n Raises:\n KeyError: If 'welcome-message' key is not found in configuration file\n\n \"\"\"\n try:\n if config[\"welcome-message\"]:\n _LOGGER.info(\"=\" * 40)\n _LOGGER.info(\n _(\n \"You can customise your opsdroid by modifying \"\n \"your configuration.yaml\"\n )\n )\n _LOGGER.info(\n _(\"Read more at: \" \"http://opsdroid.readthedocs.io/#configuration\")\n )\n _LOGGER.info(_(\"Watch the Get Started Videos at: \" \"http://bit.ly/2fnC0Fh\"))\n _LOGGER.info(\n _(\n \"Install Opsdroid Desktop at: \\n\"\n \"https://github.com/opsdroid/opsdroid-desktop/\"\n \"releases\"\n )\n )\n _LOGGER.info(\"=\" * 40)\n except KeyError:\n _LOGGER.warning(\n _(\"'welcome-message: true/false' is missing in configuration.yaml\")\n )\n", "path": "opsdroid/cli/utils.py"}, {"content": "\"\"\"The config subcommand for opsdroid cli.\"\"\"\n\nimport click\n\nfrom opsdroid.cli.utils import edit_files, warn_deprecated_cli_option\nfrom opsdroid.const import EXAMPLE_CONFIG_FILE\n\n\ndef print_example_config(ctx, param, value):\n \"\"\"[Deprecated] Print out the example config.\"\"\"\n if not value or ctx.resilient_parsing:\n return\n if ctx.command.name == \"cli\":\n warn_deprecated_cli_option(\n \"The flag --gen-config has been deprecated. \"\n \"Please run `opsdroid config gen` instead.\"\n )\n with open(EXAMPLE_CONFIG_FILE, \"r\") as conf:\n click.echo(conf.read())\n ctx.exit(0)\n\n\[email protected]()\ndef config():\n \"\"\"Subcommands related to opsdroid configuration.\"\"\"\n\n\[email protected]()\[email protected]_context\ndef gen(ctx):\n \"\"\"Print out the example config.\"\"\"\n print_example_config(ctx, None, True)\n\n\[email protected]()\[email protected]_context\ndef edit(ctx):\n \"\"\"Print out the example config.\"\"\"\n edit_files(ctx, None, \"config\")\n", "path": "opsdroid/cli/config.py"}], "after_files": [{"content": "\"\"\"Utilities for the opsdroid CLI commands.\"\"\"\n\nimport click\nimport gettext\nimport os\nimport logging\nimport subprocess\nimport sys\nimport time\nimport warnings\n\nfrom opsdroid.const import (\n DEFAULT_LOG_FILENAME,\n LOCALE_DIR,\n DEFAULT_LANGUAGE,\n DEFAULT_CONFIG_PATH,\n)\n\n_LOGGER = logging.getLogger(\"opsdroid\")\n\n\ndef edit_files(ctx, param, value):\n \"\"\"Open config/log file with favourite editor.\n\n Args:\n ctx (:obj:`click.Context`): The current click cli context.\n param (dict): a dictionary of all parameters pass to the click\n context when invoking this function as a callback.\n value (string): the value of this parameter after invocation.\n It is either \"config\" or \"log\" depending on the program\n calling this function.\n\n Returns:\n int: the exit code. Always returns 0 in this case.\n\n \"\"\"\n\n if value == \"config\":\n file = DEFAULT_CONFIG_PATH\n if ctx.command.name == \"cli\":\n warn_deprecated_cli_option(\n \"The flag -e/--edit-files has been deprecated. \"\n \"Please run `opsdroid config edit` instead.\"\n )\n elif value == \"log\":\n file = DEFAULT_LOG_FILENAME\n if ctx.command.name == \"cli\":\n warn_deprecated_cli_option(\n \"The flag -l/--view-log has been deprecated. \"\n \"Please run `opsdroid logs` instead.\"\n )\n else:\n return\n\n editor = os.environ.get(\"EDITOR\", \"vi\")\n if editor == \"vi\":\n click.echo(\n \"You are about to edit a file in vim. 
\\n\"\n \"Read the tutorial on vim at: https://bit.ly/2HRvvrB\"\n )\n time.sleep(3)\n\n subprocess.run([editor, file])\n ctx.exit(0)\n\n\ndef warn_deprecated_cli_option(text):\n \"\"\"Warn users that the cli option they have used is deprecated.\"\"\"\n print(f\"Warning: {text}\")\n warnings.warn(text, DeprecationWarning)\n\n\ndef configure_lang(config):\n \"\"\"Configure app language based on user config.\n\n Args:\n config: Language Configuration and it uses ISO 639-1 code.\n for more info https://en.m.wikipedia.org/wiki/List_of_ISO_639-1_codes\n\n\n \"\"\"\n lang_code = config.get(\"lang\", DEFAULT_LANGUAGE)\n if lang_code != DEFAULT_LANGUAGE:\n lang = gettext.translation(\"opsdroid\", LOCALE_DIR, (lang_code,), fallback=True)\n lang.install()\n\n\ndef check_dependencies():\n \"\"\"Check for system dependencies required by opsdroid.\n\n Returns:\n int: the exit code. Returns 1 if the Python version installed is\n below 3.6.\n\n \"\"\"\n if sys.version_info.major < 3 or sys.version_info.minor < 6:\n logging.critical(_(\"Whoops! opsdroid requires python 3.6 or above.\"))\n sys.exit(1)\n\n\ndef welcome_message(config):\n \"\"\"Add welcome message if set to true in configuration.\n\n Args:\n config: config loaded by Loader\n\n Raises:\n KeyError: If 'welcome-message' key is not found in configuration file\n\n \"\"\"\n try:\n if config[\"welcome-message\"]:\n _LOGGER.info(\"=\" * 40)\n _LOGGER.info(\n _(\n \"You can customise your opsdroid by modifying \"\n \"your configuration.yaml\"\n )\n )\n _LOGGER.info(\n _(\"Read more at: \" \"http://opsdroid.readthedocs.io/#configuration\")\n )\n _LOGGER.info(_(\"Watch the Get Started Videos at: \" \"http://bit.ly/2fnC0Fh\"))\n _LOGGER.info(\n _(\n \"Install Opsdroid Desktop at: \\n\"\n \"https://github.com/opsdroid/opsdroid-desktop/\"\n \"releases\"\n )\n )\n _LOGGER.info(\"=\" * 40)\n except KeyError:\n _LOGGER.warning(\n _(\"'welcome-message: true/false' is missing in configuration.yaml\")\n )\n", "path": "opsdroid/cli/utils.py"}, {"content": "\"\"\"The config subcommand for opsdroid cli.\"\"\"\n\nimport click\n\nfrom opsdroid.cli.utils import edit_files, warn_deprecated_cli_option\nfrom opsdroid.const import EXAMPLE_CONFIG_FILE\n\n\ndef print_example_config(ctx, param, value):\n \"\"\"[Deprecated] Print out the example config.\n\n Args:\n ctx (:obj:`click.Context`): The current click cli context.\n param (dict): a dictionary of all parameters pass to the click\n context when invoking this function as a callback.\n value (bool): the value of this parameter after invocation.\n Defaults to False, set to True when this flag is called.\n\n Returns:\n int: the exit code. Always returns 0 in this case.\n\n \"\"\"\n if not value or ctx.resilient_parsing:\n return\n if ctx.command.name == \"cli\":\n warn_deprecated_cli_option(\n \"The flag --gen-config has been deprecated. \"\n \"Please run `opsdroid config gen` instead.\"\n )\n with open(EXAMPLE_CONFIG_FILE, \"r\") as conf:\n click.echo(conf.read())\n ctx.exit(0)\n\n\[email protected]()\ndef config():\n \"\"\"Subcommands related to opsdroid configuration.\"\"\"\n\n\[email protected]()\[email protected]_context\ndef gen(ctx):\n \"\"\"Print out the example config.\"\"\"\n print_example_config(ctx, None, True)\n\n\[email protected]()\[email protected]_context\ndef edit(ctx):\n \"\"\"Print out the example config.\"\"\"\n edit_files(ctx, None, \"config\")\n", "path": "opsdroid/cli/config.py"}]} | 2,614 | 556 |
gh_patches_debug_19840 | rasdani/github-patches | git_diff | hydroshare__hydroshare-5100 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
hsapi /scimeta/elements endpoint errors with no authentication
**Description of the bug**
/scimeta/elements hsapi endpoint responds with a system error when no authentication credentials are provided.
Steps to reproduce the bug:
1. Go to https://www.hydroshare.org/hsapi/resource/16b27156f2b1483099f09d3b688e43fb/scimeta/elements/
2. See error
**Expected behavior**
API endpoints should handle requests that do not provide authentication. Unauthenticated requests for publicly accessible resources should return a normal response. Unauthenticated requests for privately accessible resources should respond with a 403.
**Additional information**
Environment:
Request Method: GET
Request URL: http://localhost:8000/hsapi/resource/068d77bb3f4c442194e9f08056aca1cf/scimeta/elements/
Django Version: 1.11.29
Python Version: 3.6.9
Installed Applications:
('mezzanine.boot',
'test_without_migrations',
'autocomplete_light',
'django.contrib.auth',
'oauth2_provider',
'corsheaders',
'django.contrib.contenttypes',
'django.contrib.redirects',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.sitemaps',
'django.contrib.gis',
'django.contrib.postgres',
'inplaceeditform',
'django_nose',
'django_irods',
'drf_yasg',
'theme',
'theme.blog_mods',
'heartbeat',
'mezzanine.conf',
'mezzanine.core',
'mezzanine.generic',
'mezzanine.blog',
'mezzanine.forms',
'mezzanine.pages',
'mezzanine.galleries',
'crispy_forms',
'mezzanine.accounts',
'haystack',
'rest_framework',
'robots',
'hs_core',
'hs_access_control',
'hs_labels',
'hs_metrics',
'irods_browser_app',
'hs_geo_raster_resource',
'ref_ts',
'hs_app_timeseries',
'widget_tweaks',
'hs_app_netCDF',
'hs_model_program',
'hs_modelinstance',
'hs_tools_resource',
'hs_swat_modelinstance',
'hs_geographic_feature_resource',
'hs_script_resource',
'hs_sitemap',
'hs_collection_resource',
'hs_modflow_modelinstance',
'hs_tracking',
'hs_file_types',
'hs_composite_resource',
'hs_rest_api',
'hs_dictionary',
'hs_odm2',
'security',
'markdown',
'hs_communities',
'hs_discover',
'debug_toolbar',
'filebrowser_safe',
'grappelli_safe',
'django.contrib.admin',
'django.contrib.staticfiles',
'django_comments')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'corsheaders.middleware.CorsMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'mezzanine.core.request.CurrentRequestMiddleware',
'mezzanine.core.middleware.RedirectFallbackMiddleware',
'mezzanine.core.middleware.AdminLoginInterfaceSelectorMiddleware',
'mezzanine.core.middleware.SitePermissionMiddleware',
'mezzanine.pages.middleware.PageMiddleware',
'hs_core.robots.RobotFilter',
'hs_tracking.middleware.Tracking',
'debug_toolbar.middleware.DebugToolbarMiddleware')
Traceback:
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py" in inner
41. response = get_response(request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
217. response = self.process_exception_by_middleware(e, request)
File "/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py" in _get_response
215. response = response.render()
File "/usr/local/lib/python3.6/site-packages/django/template/response.py" in render
107. self.content = self.rendered_content
File "/usr/local/lib/python3.6/site-packages/rest_framework/response.py" in rendered_content
70. ret = renderer.render(self.data, accepted_media_type, context)
File "/usr/local/lib/python3.6/site-packages/rest_framework/renderers.py" in render
724. context = self.get_context(data, accepted_media_type, renderer_context)
File "/usr/local/lib/python3.6/site-packages/rest_framework/renderers.py" in get_context
656. raw_data_put_form = self.get_raw_data_form(data, view, 'PUT', request)
File "/usr/local/lib/python3.6/site-packages/rest_framework/renderers.py" in get_raw_data_form
552. serializer = view.get_serializer(instance=instance)
File "/usr/local/lib/python3.6/site-packages/rest_framework/generics.py" in get_serializer
110. return serializer_class(*args, **kwargs)
Exception Type: TypeError at /hsapi/resource/068d77bb3f4c442194e9f08056aca1cf/scimeta/elements/
Exception Value: 'CoreMetaDataSerializer' object is not callable
--- END ISSUE ---
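The last two frames of the traceback point at the mechanism behind the error: DRF's `GenericAPIView.get_serializer()` instantiates whatever `self.serializer_class` holds, so a view that stores a serializer *instance* there (rather than the class) makes DRF call an instance, which raises exactly this `'CoreMetaDataSerializer' object is not callable` TypeError when the browsable API renderer builds its raw-data form for unauthenticated browser requests. A simplified, self-contained sketch of that mechanism (plain Python stand-ins, not the real DRF or HydroShare classes):

```python
class CoreMetaDataSerializer:            # stand-in for the DRF serializer class
    def __init__(self, instance=None):
        self.instance = instance

def get_serializer(view, *args, **kwargs):
    """Roughly what DRF's GenericAPIView.get_serializer() does."""
    serializer_class = view.serializer_class   # expected to be a *class*
    return serializer_class(*args, **kwargs)   # ...because it gets called here

class View:
    serializer_class = CoreMetaDataSerializer  # correct: the class itself

view = View()
get_serializer(view)                           # fine: builds a new serializer

view.serializer_class = CoreMetaDataSerializer()   # bug pattern: an *instance*
try:
    get_serializer(view)
except TypeError as err:
    print(err)   # 'CoreMetaDataSerializer' object is not callable
```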
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hs_core/views/resource_metadata_rest_api.py`
Content:
```
1 import logging
2
3 from django.http import QueryDict
4
5 from rest_framework.response import Response
6 from rest_framework.exceptions import ValidationError
7 from rest_framework import status
8 from rest_framework import generics
9 from rest_framework import serializers
10
11 from hs_core import hydroshare
12 from hs_core.models import Contributor, CoreMetaData, Coverage, Creator, Date, \
13 Format, FundingAgency, Identifier, Subject, Relation, GeospatialRelation
14 from hs_core.views import utils as view_utils
15 from hs_core.views.utils import ACTION_TO_AUTHORIZE
16
17 logger = logging.getLogger(__name__)
18
19
20 class Identifiers(serializers.DictField):
21 child = serializers.CharField()
22
23
24 class PartySerializer(serializers.Serializer):
25 name = serializers.CharField()
26 hydroshare_user_id = serializers.IntegerField(required=False)
27 organization = serializers.CharField(required=False)
28 email = serializers.EmailField(required=False)
29 address = serializers.CharField(required=False)
30 phone = serializers.CharField(required=False)
31 homepage = serializers.URLField(required=False)
32 identifiers = Identifiers(required=False)
33
34 class Meta:
35 model = Creator
36 fields = {'name', 'hydroshare_user_id', 'organization', 'email',
37 'address', 'phone', 'homepage', 'identifiers'}
38
39
40 class CreatorSerializer(PartySerializer):
41 order = serializers.IntegerField(required=False)
42
43 class Meta:
44 model = Contributor
45
46
47 class DateSerializer(serializers.Serializer):
48 # term = 'Date'
49 type = serializers.CharField(required=False)
50 start_date = serializers.DateTimeField(required=False)
51 end_date = serializers.DateTimeField(required=False)
52
53 class Meta:
54 model = Date
55
56
57 class CoverageSerializer(serializers.Serializer):
58 type = serializers.CharField(required=False)
59 value = serializers.SerializerMethodField(required=False)
60
61 class Meta:
62 model = Coverage
63
64 def get_value(self, obj):
65 return obj.value
66
67
68 class FormatSerializer(serializers.Serializer):
69 value = serializers.CharField(required=False)
70
71 class Meta:
72 model = Format
73
74
75 class FundingAgencySerializer(serializers.Serializer):
76 agency_name = serializers.CharField()
77 award_title = serializers.CharField(required=False)
78 award_number = serializers.CharField(required=False)
79 agency_url = serializers.URLField(required=False)
80
81 class Meta:
82 model = FundingAgency
83
84
85 class IdentifierSerializer(serializers.Serializer):
86 name = serializers.CharField(required=False)
87 url = serializers.URLField(required=False)
88
89 class Meta:
90 model = Identifier
91
92
93 class SubjectSerializer(serializers.Serializer):
94 value = serializers.CharField(required=False)
95
96 class Meta:
97 model = Subject
98
99
100 class RelationSerializer(serializers.Serializer):
101 type = serializers.CharField(required=False)
102 value = serializers.CharField(required=False)
103
104 class Meta:
105 model = Relation
106
107
108 class GeospatialRelationSerializer(RelationSerializer):
109 type = serializers.CharField(required=False)
110 value = serializers.CharField(required=False)
111 text = serializers.CharField(required=False)
112
113 class Meta:
114 model = GeospatialRelation
115
116
117 class CoreMetaDataSerializer(serializers.Serializer):
118 title = serializers.CharField(required=False)
119 creators = CreatorSerializer(required=False, many=True)
120 contributors = PartySerializer(required=False, many=True)
121 coverages = CoverageSerializer(required=False, many=True)
122 dates = DateSerializer(required=False, many=True)
123 description = serializers.CharField(required=False)
124 formats = FormatSerializer(required=False, many=True)
125 funding_agencies = FundingAgencySerializer(required=False, many=True)
126 identifiers = IdentifierSerializer(required=False, many=True)
127 language = serializers.CharField(required=False)
128 rights = serializers.CharField(required=False)
129 type = serializers.CharField(required=False)
130 publisher = serializers.CharField(required=False)
131 subjects = SubjectSerializer(required=False, many=True)
132 relations = RelationSerializer(required=False, many=True)
133 geospatialrelations = GeospatialRelationSerializer(required=False, many=True)
134
135 class Meta:
136 model = CoreMetaData
137
138
139 class MetadataElementsRetrieveUpdate(generics.RetrieveUpdateDestroyAPIView):
140 """
141 Retrieve resource science (Dublin Core) metadata
142
143 REST URL: /hsapi/resource/{pk}/scimeta/elements/
144 HTTP method: GET
145
146 :type pk: str
147 :param pk: id of the resource
148 :return: resource science metadata as JSON document
149 :rtype: str
150 :raises:
151 NotFound: return json format: {'detail': 'No resource was found for resource id:pk'}
152 PermissionDenied: return json format: {'detail': 'You do not have permission to perform
153 this action.'}
154
155 REST URL: /hsapi/resource/{pk}/scimeta/elements/
156 HTTP method: PUT
157
158 :type pk: str
159 :param pk: id of the resource
160 :type request: JSON formatted string
161 :param request: resource metadata
162 :return: updated resource science metadata as JSON document
163 :rtype: str
164 :raises:
165 NotFound: return json format: {'detail': 'No resource was found for resource id':pk}
166 PermissionDenied: return json format: {'detail': 'You do not have permission to perform
167 this action.'}
168 ValidationError: return json format: {parameter-1': ['error message-1'],
169 'parameter-2': ['error message-2'], .. }
170 """
171 ACCEPT_FORMATS = ('application/json',)
172
173 allowed_methods = ('GET', 'PUT')
174
175 # Overwritten by resource types with extended metadata
176 serializer_class = CoreMetaDataSerializer
177
178 def get(self, request, pk):
179 view_utils.authorize(request, pk, needed_permission=ACTION_TO_AUTHORIZE.VIEW_METADATA)
180 resource = hydroshare.get_resource_by_shortkey(shortkey=pk)
181 serializer = resource.metadata.serializer
182 self.serializer_class = resource.metadata.serializer
183 return Response(data=serializer.data, status=status.HTTP_200_OK)
184
185 def put(self, request, pk):
186 # Update science metadata
187 resource, _, _ = view_utils.authorize(
188 request, pk,
189 needed_permission=ACTION_TO_AUTHORIZE.EDIT_RESOURCE)
190
191 metadata = []
192 put_data = request.data.copy()
193
194 # convert the QueryDict to dict
195 if isinstance(put_data, QueryDict):
196 put_data = put_data.dict()
197 try:
198 resource.metadata.parse_for_bulk_update(put_data, metadata)
199 hydroshare.update_science_metadata(pk=pk, metadata=metadata, user=request.user)
200 except Exception as ex:
201 error_msg = {
202 'resource': "Resource metadata update failed: %s, %s"
203 % (ex.__class__, str(ex))
204 }
205 raise ValidationError(detail=error_msg)
206
207 resource = hydroshare.get_resource_by_shortkey(shortkey=pk)
208 serializer = resource.metadata.serializer
209 self.serializer_class = serializer
210 return Response(data=serializer.data, status=status.HTTP_202_ACCEPTED)
211
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hs_core/views/resource_metadata_rest_api.py b/hs_core/views/resource_metadata_rest_api.py
--- a/hs_core/views/resource_metadata_rest_api.py
+++ b/hs_core/views/resource_metadata_rest_api.py
@@ -179,7 +179,6 @@
view_utils.authorize(request, pk, needed_permission=ACTION_TO_AUTHORIZE.VIEW_METADATA)
resource = hydroshare.get_resource_by_shortkey(shortkey=pk)
serializer = resource.metadata.serializer
- self.serializer_class = resource.metadata.serializer
return Response(data=serializer.data, status=status.HTTP_200_OK)
def put(self, request, pk):
@@ -206,5 +205,4 @@
resource = hydroshare.get_resource_by_shortkey(shortkey=pk)
serializer = resource.metadata.serializer
- self.serializer_class = serializer
return Response(data=serializer.data, status=status.HTTP_202_ACCEPTED)
| {"golden_diff": "diff --git a/hs_core/views/resource_metadata_rest_api.py b/hs_core/views/resource_metadata_rest_api.py\n--- a/hs_core/views/resource_metadata_rest_api.py\n+++ b/hs_core/views/resource_metadata_rest_api.py\n@@ -179,7 +179,6 @@\n view_utils.authorize(request, pk, needed_permission=ACTION_TO_AUTHORIZE.VIEW_METADATA)\n resource = hydroshare.get_resource_by_shortkey(shortkey=pk)\n serializer = resource.metadata.serializer\n- self.serializer_class = resource.metadata.serializer\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n \n def put(self, request, pk):\n@@ -206,5 +205,4 @@\n \n resource = hydroshare.get_resource_by_shortkey(shortkey=pk)\n serializer = resource.metadata.serializer\n- self.serializer_class = serializer\n return Response(data=serializer.data, status=status.HTTP_202_ACCEPTED)\n", "issue": "hsapi /scimeta/elements endpoint errors with no authentication\n**Description of the bug**\r\n/scimeta/elements hsapi endpoint responds with a system error when no authentication credentials are provided.\r\n\r\nSteps to reproduce the bug:\r\n1. Go to https://www.hydroshare.org/hsapi/resource/16b27156f2b1483099f09d3b688e43fb/scimeta/elements/\r\n1. See error\r\n\r\n**Expected behavior**\r\nAPI endpoints should handle requests that do not provide authentication. Unauthenticated requests that make publicly accessible requests should provide a response. Unauthenticated requests that make privately accessible requests should respond with a 403.\r\n\r\n**Additional information**\r\nEnvironment:\r\n\r\n\r\nRequest Method: GET\r\nRequest URL: http://localhost:8000/hsapi/resource/068d77bb3f4c442194e9f08056aca1cf/scimeta/elements/\r\n\r\nDjango Version: 1.11.29\r\nPython Version: 3.6.9\r\nInstalled Applications:\r\n('mezzanine.boot',\r\n 'test_without_migrations',\r\n 'autocomplete_light',\r\n 'django.contrib.auth',\r\n 'oauth2_provider',\r\n 'corsheaders',\r\n 'django.contrib.contenttypes',\r\n 'django.contrib.redirects',\r\n 'django.contrib.sessions',\r\n 'django.contrib.sites',\r\n 'django.contrib.sitemaps',\r\n 'django.contrib.gis',\r\n 'django.contrib.postgres',\r\n 'inplaceeditform',\r\n 'django_nose',\r\n 'django_irods',\r\n 'drf_yasg',\r\n 'theme',\r\n 'theme.blog_mods',\r\n 'heartbeat',\r\n 'mezzanine.conf',\r\n 'mezzanine.core',\r\n 'mezzanine.generic',\r\n 'mezzanine.blog',\r\n 'mezzanine.forms',\r\n 'mezzanine.pages',\r\n 'mezzanine.galleries',\r\n 'crispy_forms',\r\n 'mezzanine.accounts',\r\n 'haystack',\r\n 'rest_framework',\r\n 'robots',\r\n 'hs_core',\r\n 'hs_access_control',\r\n 'hs_labels',\r\n 'hs_metrics',\r\n 'irods_browser_app',\r\n 'hs_geo_raster_resource',\r\n 'ref_ts',\r\n 'hs_app_timeseries',\r\n 'widget_tweaks',\r\n 'hs_app_netCDF',\r\n 'hs_model_program',\r\n 'hs_modelinstance',\r\n 'hs_tools_resource',\r\n 'hs_swat_modelinstance',\r\n 'hs_geographic_feature_resource',\r\n 'hs_script_resource',\r\n 'hs_sitemap',\r\n 'hs_collection_resource',\r\n 'hs_modflow_modelinstance',\r\n 'hs_tracking',\r\n 'hs_file_types',\r\n 'hs_composite_resource',\r\n 'hs_rest_api',\r\n 'hs_dictionary',\r\n 'hs_odm2',\r\n 'security',\r\n 'markdown',\r\n 'hs_communities',\r\n 'hs_discover',\r\n 'debug_toolbar',\r\n 'filebrowser_safe',\r\n 'grappelli_safe',\r\n 'django.contrib.admin',\r\n 'django.contrib.staticfiles',\r\n 'django_comments')\r\nInstalled Middleware:\r\n('django.contrib.sessions.middleware.SessionMiddleware',\r\n 'django.middleware.locale.LocaleMiddleware',\r\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\r\n 
'corsheaders.middleware.CorsMiddleware',\r\n 'django.middleware.common.CommonMiddleware',\r\n 'django.middleware.csrf.CsrfViewMiddleware',\r\n 'django.contrib.messages.middleware.MessageMiddleware',\r\n 'mezzanine.core.request.CurrentRequestMiddleware',\r\n 'mezzanine.core.middleware.RedirectFallbackMiddleware',\r\n 'mezzanine.core.middleware.AdminLoginInterfaceSelectorMiddleware',\r\n 'mezzanine.core.middleware.SitePermissionMiddleware',\r\n 'mezzanine.pages.middleware.PageMiddleware',\r\n 'hs_core.robots.RobotFilter',\r\n 'hs_tracking.middleware.Tracking',\r\n 'debug_toolbar.middleware.DebugToolbarMiddleware')\r\n\r\n\r\n\r\nTraceback:\r\n\r\nFile \"/usr/local/lib/python3.6/site-packages/django/core/handlers/exception.py\" in inner\r\n 41. response = get_response(request)\r\n\r\nFile \"/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py\" in _get_response\r\n 217. response = self.process_exception_by_middleware(e, request)\r\n\r\nFile \"/usr/local/lib/python3.6/site-packages/django/core/handlers/base.py\" in _get_response\r\n 215. response = response.render()\r\n\r\nFile \"/usr/local/lib/python3.6/site-packages/django/template/response.py\" in render\r\n 107. self.content = self.rendered_content\r\n\r\nFile \"/usr/local/lib/python3.6/site-packages/rest_framework/response.py\" in rendered_content\r\n 70. ret = renderer.render(self.data, accepted_media_type, context)\r\n\r\nFile \"/usr/local/lib/python3.6/site-packages/rest_framework/renderers.py\" in render\r\n 724. context = self.get_context(data, accepted_media_type, renderer_context)\r\n\r\nFile \"/usr/local/lib/python3.6/site-packages/rest_framework/renderers.py\" in get_context\r\n 656. raw_data_put_form = self.get_raw_data_form(data, view, 'PUT', request)\r\n\r\nFile \"/usr/local/lib/python3.6/site-packages/rest_framework/renderers.py\" in get_raw_data_form\r\n 552. serializer = view.get_serializer(instance=instance)\r\n\r\nFile \"/usr/local/lib/python3.6/site-packages/rest_framework/generics.py\" in get_serializer\r\n 110. 
return serializer_class(*args, **kwargs)\r\n\r\nException Type: TypeError at /hsapi/resource/068d77bb3f4c442194e9f08056aca1cf/scimeta/elements/\r\nException Value: 'CoreMetaDataSerializer' object is not callable\r\n\r\n\n", "before_files": [{"content": "import logging\n\nfrom django.http import QueryDict\n\nfrom rest_framework.response import Response\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework import status\nfrom rest_framework import generics\nfrom rest_framework import serializers\n\nfrom hs_core import hydroshare\nfrom hs_core.models import Contributor, CoreMetaData, Coverage, Creator, Date, \\\n Format, FundingAgency, Identifier, Subject, Relation, GeospatialRelation\nfrom hs_core.views import utils as view_utils\nfrom hs_core.views.utils import ACTION_TO_AUTHORIZE\n\nlogger = logging.getLogger(__name__)\n\n\nclass Identifiers(serializers.DictField):\n child = serializers.CharField()\n\n\nclass PartySerializer(serializers.Serializer):\n name = serializers.CharField()\n hydroshare_user_id = serializers.IntegerField(required=False)\n organization = serializers.CharField(required=False)\n email = serializers.EmailField(required=False)\n address = serializers.CharField(required=False)\n phone = serializers.CharField(required=False)\n homepage = serializers.URLField(required=False)\n identifiers = Identifiers(required=False)\n\n class Meta:\n model = Creator\n fields = {'name', 'hydroshare_user_id', 'organization', 'email',\n 'address', 'phone', 'homepage', 'identifiers'}\n\n\nclass CreatorSerializer(PartySerializer):\n order = serializers.IntegerField(required=False)\n\n class Meta:\n model = Contributor\n\n\nclass DateSerializer(serializers.Serializer):\n # term = 'Date'\n type = serializers.CharField(required=False)\n start_date = serializers.DateTimeField(required=False)\n end_date = serializers.DateTimeField(required=False)\n\n class Meta:\n model = Date\n\n\nclass CoverageSerializer(serializers.Serializer):\n type = serializers.CharField(required=False)\n value = serializers.SerializerMethodField(required=False)\n\n class Meta:\n model = Coverage\n\n def get_value(self, obj):\n return obj.value\n\n\nclass FormatSerializer(serializers.Serializer):\n value = serializers.CharField(required=False)\n\n class Meta:\n model = Format\n\n\nclass FundingAgencySerializer(serializers.Serializer):\n agency_name = serializers.CharField()\n award_title = serializers.CharField(required=False)\n award_number = serializers.CharField(required=False)\n agency_url = serializers.URLField(required=False)\n\n class Meta:\n model = FundingAgency\n\n\nclass IdentifierSerializer(serializers.Serializer):\n name = serializers.CharField(required=False)\n url = serializers.URLField(required=False)\n\n class Meta:\n model = Identifier\n\n\nclass SubjectSerializer(serializers.Serializer):\n value = serializers.CharField(required=False)\n\n class Meta:\n model = Subject\n\n\nclass RelationSerializer(serializers.Serializer):\n type = serializers.CharField(required=False)\n value = serializers.CharField(required=False)\n\n class Meta:\n model = Relation\n\n\nclass GeospatialRelationSerializer(RelationSerializer):\n type = serializers.CharField(required=False)\n value = serializers.CharField(required=False)\n text = serializers.CharField(required=False)\n\n class Meta:\n model = GeospatialRelation\n\n\nclass CoreMetaDataSerializer(serializers.Serializer):\n title = serializers.CharField(required=False)\n creators = CreatorSerializer(required=False, many=True)\n contributors = 
PartySerializer(required=False, many=True)\n coverages = CoverageSerializer(required=False, many=True)\n dates = DateSerializer(required=False, many=True)\n description = serializers.CharField(required=False)\n formats = FormatSerializer(required=False, many=True)\n funding_agencies = FundingAgencySerializer(required=False, many=True)\n identifiers = IdentifierSerializer(required=False, many=True)\n language = serializers.CharField(required=False)\n rights = serializers.CharField(required=False)\n type = serializers.CharField(required=False)\n publisher = serializers.CharField(required=False)\n subjects = SubjectSerializer(required=False, many=True)\n relations = RelationSerializer(required=False, many=True)\n geospatialrelations = GeospatialRelationSerializer(required=False, many=True)\n\n class Meta:\n model = CoreMetaData\n\n\nclass MetadataElementsRetrieveUpdate(generics.RetrieveUpdateDestroyAPIView):\n \"\"\"\n Retrieve resource science (Dublin Core) metadata\n\n REST URL: /hsapi/resource/{pk}/scimeta/elements/\n HTTP method: GET\n\n :type pk: str\n :param pk: id of the resource\n :return: resource science metadata as JSON document\n :rtype: str\n :raises:\n NotFound: return json format: {'detail': 'No resource was found for resource id:pk'}\n PermissionDenied: return json format: {'detail': 'You do not have permission to perform\n this action.'}\n\n REST URL: /hsapi/resource/{pk}/scimeta/elements/\n HTTP method: PUT\n\n :type pk: str\n :param pk: id of the resource\n :type request: JSON formatted string\n :param request: resource metadata\n :return: updated resource science metadata as JSON document\n :rtype: str\n :raises:\n NotFound: return json format: {'detail': 'No resource was found for resource id':pk}\n PermissionDenied: return json format: {'detail': 'You do not have permission to perform\n this action.'}\n ValidationError: return json format: {parameter-1': ['error message-1'],\n 'parameter-2': ['error message-2'], .. 
}\n \"\"\"\n ACCEPT_FORMATS = ('application/json',)\n\n allowed_methods = ('GET', 'PUT')\n\n # Overwritten by resource types with extended metadata\n serializer_class = CoreMetaDataSerializer\n\n def get(self, request, pk):\n view_utils.authorize(request, pk, needed_permission=ACTION_TO_AUTHORIZE.VIEW_METADATA)\n resource = hydroshare.get_resource_by_shortkey(shortkey=pk)\n serializer = resource.metadata.serializer\n self.serializer_class = resource.metadata.serializer\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n def put(self, request, pk):\n # Update science metadata\n resource, _, _ = view_utils.authorize(\n request, pk,\n needed_permission=ACTION_TO_AUTHORIZE.EDIT_RESOURCE)\n\n metadata = []\n put_data = request.data.copy()\n\n # convert the QueryDict to dict\n if isinstance(put_data, QueryDict):\n put_data = put_data.dict()\n try:\n resource.metadata.parse_for_bulk_update(put_data, metadata)\n hydroshare.update_science_metadata(pk=pk, metadata=metadata, user=request.user)\n except Exception as ex:\n error_msg = {\n 'resource': \"Resource metadata update failed: %s, %s\"\n % (ex.__class__, str(ex))\n }\n raise ValidationError(detail=error_msg)\n\n resource = hydroshare.get_resource_by_shortkey(shortkey=pk)\n serializer = resource.metadata.serializer\n self.serializer_class = serializer\n return Response(data=serializer.data, status=status.HTTP_202_ACCEPTED)\n", "path": "hs_core/views/resource_metadata_rest_api.py"}], "after_files": [{"content": "import logging\n\nfrom django.http import QueryDict\n\nfrom rest_framework.response import Response\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework import status\nfrom rest_framework import generics\nfrom rest_framework import serializers\n\nfrom hs_core import hydroshare\nfrom hs_core.models import Contributor, CoreMetaData, Coverage, Creator, Date, \\\n Format, FundingAgency, Identifier, Subject, Relation, GeospatialRelation\nfrom hs_core.views import utils as view_utils\nfrom hs_core.views.utils import ACTION_TO_AUTHORIZE\n\nlogger = logging.getLogger(__name__)\n\n\nclass Identifiers(serializers.DictField):\n child = serializers.CharField()\n\n\nclass PartySerializer(serializers.Serializer):\n name = serializers.CharField()\n hydroshare_user_id = serializers.IntegerField(required=False)\n organization = serializers.CharField(required=False)\n email = serializers.EmailField(required=False)\n address = serializers.CharField(required=False)\n phone = serializers.CharField(required=False)\n homepage = serializers.URLField(required=False)\n identifiers = Identifiers(required=False)\n\n class Meta:\n model = Creator\n fields = {'name', 'hydroshare_user_id', 'organization', 'email',\n 'address', 'phone', 'homepage', 'identifiers'}\n\n\nclass CreatorSerializer(PartySerializer):\n order = serializers.IntegerField(required=False)\n\n class Meta:\n model = Contributor\n\n\nclass DateSerializer(serializers.Serializer):\n # term = 'Date'\n type = serializers.CharField(required=False)\n start_date = serializers.DateTimeField(required=False)\n end_date = serializers.DateTimeField(required=False)\n\n class Meta:\n model = Date\n\n\nclass CoverageSerializer(serializers.Serializer):\n type = serializers.CharField(required=False)\n value = serializers.SerializerMethodField(required=False)\n\n class Meta:\n model = Coverage\n\n def get_value(self, obj):\n return obj.value\n\n\nclass FormatSerializer(serializers.Serializer):\n value = serializers.CharField(required=False)\n\n class Meta:\n model = 
Format\n\n\nclass FundingAgencySerializer(serializers.Serializer):\n agency_name = serializers.CharField()\n award_title = serializers.CharField(required=False)\n award_number = serializers.CharField(required=False)\n agency_url = serializers.URLField(required=False)\n\n class Meta:\n model = FundingAgency\n\n\nclass IdentifierSerializer(serializers.Serializer):\n name = serializers.CharField(required=False)\n url = serializers.URLField(required=False)\n\n class Meta:\n model = Identifier\n\n\nclass SubjectSerializer(serializers.Serializer):\n value = serializers.CharField(required=False)\n\n class Meta:\n model = Subject\n\n\nclass RelationSerializer(serializers.Serializer):\n type = serializers.CharField(required=False)\n value = serializers.CharField(required=False)\n\n class Meta:\n model = Relation\n\n\nclass GeospatialRelationSerializer(RelationSerializer):\n type = serializers.CharField(required=False)\n value = serializers.CharField(required=False)\n text = serializers.CharField(required=False)\n\n class Meta:\n model = GeospatialRelation\n\n\nclass CoreMetaDataSerializer(serializers.Serializer):\n title = serializers.CharField(required=False)\n creators = CreatorSerializer(required=False, many=True)\n contributors = PartySerializer(required=False, many=True)\n coverages = CoverageSerializer(required=False, many=True)\n dates = DateSerializer(required=False, many=True)\n description = serializers.CharField(required=False)\n formats = FormatSerializer(required=False, many=True)\n funding_agencies = FundingAgencySerializer(required=False, many=True)\n identifiers = IdentifierSerializer(required=False, many=True)\n language = serializers.CharField(required=False)\n rights = serializers.CharField(required=False)\n type = serializers.CharField(required=False)\n publisher = serializers.CharField(required=False)\n subjects = SubjectSerializer(required=False, many=True)\n relations = RelationSerializer(required=False, many=True)\n geospatialrelations = GeospatialRelationSerializer(required=False, many=True)\n\n class Meta:\n model = CoreMetaData\n\n\nclass MetadataElementsRetrieveUpdate(generics.RetrieveUpdateDestroyAPIView):\n \"\"\"\n Retrieve resource science (Dublin Core) metadata\n\n REST URL: /hsapi/resource/{pk}/scimeta/elements/\n HTTP method: GET\n\n :type pk: str\n :param pk: id of the resource\n :return: resource science metadata as JSON document\n :rtype: str\n :raises:\n NotFound: return json format: {'detail': 'No resource was found for resource id:pk'}\n PermissionDenied: return json format: {'detail': 'You do not have permission to perform\n this action.'}\n\n REST URL: /hsapi/resource/{pk}/scimeta/elements/\n HTTP method: PUT\n\n :type pk: str\n :param pk: id of the resource\n :type request: JSON formatted string\n :param request: resource metadata\n :return: updated resource science metadata as JSON document\n :rtype: str\n :raises:\n NotFound: return json format: {'detail': 'No resource was found for resource id':pk}\n PermissionDenied: return json format: {'detail': 'You do not have permission to perform\n this action.'}\n ValidationError: return json format: {parameter-1': ['error message-1'],\n 'parameter-2': ['error message-2'], .. 
}\n \"\"\"\n ACCEPT_FORMATS = ('application/json',)\n\n allowed_methods = ('GET', 'PUT')\n\n # Overwritten by resource types with extended metadata\n serializer_class = CoreMetaDataSerializer\n\n def get(self, request, pk):\n view_utils.authorize(request, pk, needed_permission=ACTION_TO_AUTHORIZE.VIEW_METADATA)\n resource = hydroshare.get_resource_by_shortkey(shortkey=pk)\n serializer = resource.metadata.serializer\n return Response(data=serializer.data, status=status.HTTP_200_OK)\n\n def put(self, request, pk):\n # Update science metadata\n resource, _, _ = view_utils.authorize(\n request, pk,\n needed_permission=ACTION_TO_AUTHORIZE.EDIT_RESOURCE)\n\n metadata = []\n put_data = request.data.copy()\n\n # convert the QueryDict to dict\n if isinstance(put_data, QueryDict):\n put_data = put_data.dict()\n try:\n resource.metadata.parse_for_bulk_update(put_data, metadata)\n hydroshare.update_science_metadata(pk=pk, metadata=metadata, user=request.user)\n except Exception as ex:\n error_msg = {\n 'resource': \"Resource metadata update failed: %s, %s\"\n % (ex.__class__, str(ex))\n }\n raise ValidationError(detail=error_msg)\n\n resource = hydroshare.get_resource_by_shortkey(shortkey=pk)\n serializer = resource.metadata.serializer\n return Response(data=serializer.data, status=status.HTTP_202_ACCEPTED)\n", "path": "hs_core/views/resource_metadata_rest_api.py"}]} | 3,349 | 202 |
gh_patches_debug_16787 | rasdani/github-patches | git_diff | tensorflow__addons-219 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
WeightNormalization fails after update
Yesterday I cut a 0.3 release on PyPI test, but as a sanity check on the weight-normalization changes in #208 I ran a quick regression check:
https://colab.research.google.com/drive/1RQlVR9X7vj8q3W3sJC6YA3Cq45XTXrau
The new layer is failing with a loss of NaN. Looking at the test coverage for this layer, I'm struggling with how best to catch this without running a full model. The losses for our toy test cases do not return NaN, so perhaps we need a true regression test such as MNIST?
EDIT -- Fix colab link
--- END ISSUE ---
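For context, a minimal sketch of the kind of regression check the report asks for might look like the following. The toy random data, layer sizes, and epoch count are illustrative assumptions (not from the original report), and it only asserts that training does not diverge to NaN:
```python
# Hypothetical smoke test: train a tiny WeightNormalization-wrapped model for a
# few epochs on random data and assert the final loss is still finite.
import numpy as np
import tensorflow as tf
from tensorflow_addons.layers import WeightNormalization

x = np.random.rand(64, 8).astype("float32")
y = np.random.rand(64, 1).astype("float32")

model = tf.keras.Sequential([
    WeightNormalization(tf.keras.layers.Dense(16, activation="relu"),
                        data_init=True, input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
history = model.fit(x, y, epochs=3, verbose=0)
assert np.isfinite(history.history["loss"][-1]), "loss diverged to NaN/inf"
```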
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_addons/layers/wrappers.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # =============================================================================
15 from __future__ import absolute_import
16 from __future__ import division
17 from __future__ import print_function
18
19 import tensorflow as tf
20 from tensorflow_addons.utils import keras_utils
21
22
23 @keras_utils.register_keras_custom_object
24 class WeightNormalization(tf.keras.layers.Wrapper):
25 """This wrapper reparameterizes a layer by decoupling the weight's
26 magnitude and direction.
27
28 This speeds up convergence by improving the
29 conditioning of the optimization problem.
30 Weight Normalization: A Simple Reparameterization to Accelerate
31 Training of Deep Neural Networks: https://arxiv.org/abs/1602.07868
32 Tim Salimans, Diederik P. Kingma (2016)
33 WeightNormalization wrapper works for keras and tf layers.
34 ```python
35 net = WeightNormalization(
36 tf.keras.layers.Conv2D(2, 2, activation='relu'),
37 input_shape=(32, 32, 3),
38 data_init=True)(x)
39 net = WeightNormalization(
40 tf.keras.layers.Conv2D(16, 5, activation='relu'),
41 data_init=True)(net)
42 net = WeightNormalization(
43 tf.keras.layers.Dense(120, activation='relu'),
44 data_init=True)(net)
45 net = WeightNormalization(
46 tf.keras.layers.Dense(n_classes),
47 data_init=True)(net)
48 ```
49 Arguments:
50 layer: a layer instance.
51 data_init: If `True` use data dependent variable initialization
52 Raises:
53 ValueError: If not initialized with a `Layer` instance.
54 ValueError: If `Layer` does not contain a `kernel` of weights
55 NotImplementedError: If `data_init` is True and running graph execution
56 """
57
58 def __init__(self, layer, data_init=True, **kwargs):
59 super(WeightNormalization, self).__init__(layer, **kwargs)
60 self.data_init = data_init
61 self._initialized = False
62 self._track_trackable(layer, name='layer')
63
64 def build(self, input_shape):
65 """Build `Layer`"""
66 input_shape = tf.TensorShape(input_shape).as_list()
67 self.input_spec = tf.keras.layers.InputSpec(shape=input_shape)
68
69 if not self.layer.built:
70 self.layer.build(input_shape)
71
72 if not hasattr(self.layer, 'kernel'):
73 raise ValueError('`WeightNormalization` must wrap a layer that'
74 ' contains a `kernel` for weights')
75
76 # The kernel's filter or unit dimension is -1
77 self.layer_depth = int(self.layer.kernel.shape[-1])
78 self.kernel_norm_axes = list(
79 range(self.layer.kernel.shape.rank - 1))
80
81 self.v = self.layer.kernel
82 self.g = self.add_variable(
83 name="g",
84 shape=(self.layer_depth,),
85 initializer=tf.keras.initializers.get('ones'),
86 dtype=self.layer.kernel.dtype,
87 trainable=True)
88
89 super(WeightNormalization, self).build()
90
91 def call(self, inputs):
92 """Call `Layer`"""
93 if not self._initialized:
94 self._initialize_weights(inputs)
95
96 self._compute_weights() # Recompute weights for each forward pass
97 output = self.layer(inputs)
98 return output
99
100 def compute_output_shape(self, input_shape):
101 return tf.TensorShape(
102 self.layer.compute_output_shape(input_shape).as_list())
103
104 def _compute_weights(self):
105 """Generate normalized weights.
106
107 This method will update the value of self.layer.kernel with the
108 normalized value, so that the layer is ready for call().
109 """
110 with tf.name_scope('compute_weights'):
111 self.layer.kernel = tf.nn.l2_normalize(
112 self.v, axis=self.kernel_norm_axes) * self.g
113
114 def _initialize_weights(self, inputs):
115 """Initialize weight g.
116
117 The initial value of g could either from the initial value in v,
118 or by the input value if self.data_init is True.
119 """
120 if self.data_init:
121 self._data_dep_init(inputs)
122 else:
123 self._init_norm()
124 self._initialized = True
125
126 def _init_norm(self):
127 """Set the weight g with the norm of the weight vector."""
128 with tf.name_scope('init_norm'):
129 flat = tf.reshape(self.v, [-1, self.layer_depth])
130 self.g.assign(
131 tf.reshape(tf.linalg.norm(flat, axis=0), (self.layer_depth,)))
132
133 def _data_dep_init(self, inputs):
134 """Data dependent initialization."""
135
136 with tf.name_scope('data_dep_init'):
137 # Generate data dependent init values
138 existing_activation = self.layer.activation
139 self.layer.activation = None
140 x_init = self.layer(inputs)
141 data_norm_axes = list(range(x_init.shape.rank - 1))
142 m_init, v_init = tf.nn.moments(x_init, data_norm_axes)
143 scale_init = 1. / tf.math.sqrt(v_init + 1e-10)
144
145 # Assign data dependent init values
146 self.g.assign(self.g * scale_init)
147 if hasattr(self.layer, 'bias'):
148 self.layer.bias.assign(-m_init * scale_init)
149 self.layer.activation = existing_activation
150
151 def get_config(self):
152 config = {'data_init': self.data_init}
153 base_config = super(WeightNormalization, self).get_config()
154 return dict(list(base_config.items()) + list(config.items()))
155
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tensorflow_addons/layers/wrappers.py b/tensorflow_addons/layers/wrappers.py
--- a/tensorflow_addons/layers/wrappers.py
+++ b/tensorflow_addons/layers/wrappers.py
@@ -88,6 +88,7 @@
super(WeightNormalization, self).build()
+ @tf.function
def call(self, inputs):
"""Call `Layer`"""
if not self._initialized:
@@ -143,9 +144,9 @@
scale_init = 1. / tf.math.sqrt(v_init + 1e-10)
# Assign data dependent init values
- self.g.assign(self.g * scale_init)
+ self.g = self.g * scale_init
if hasattr(self.layer, 'bias'):
- self.layer.bias.assign(-m_init * scale_init)
+ self.layer.bias = -m_init * scale_init
self.layer.activation = existing_activation
def get_config(self):
| {"golden_diff": "diff --git a/tensorflow_addons/layers/wrappers.py b/tensorflow_addons/layers/wrappers.py\n--- a/tensorflow_addons/layers/wrappers.py\n+++ b/tensorflow_addons/layers/wrappers.py\n@@ -88,6 +88,7 @@\n \n super(WeightNormalization, self).build()\n \n+ @tf.function\n def call(self, inputs):\n \"\"\"Call `Layer`\"\"\"\n if not self._initialized:\n@@ -143,9 +144,9 @@\n scale_init = 1. / tf.math.sqrt(v_init + 1e-10)\n \n # Assign data dependent init values\n- self.g.assign(self.g * scale_init)\n+ self.g = self.g * scale_init\n if hasattr(self.layer, 'bias'):\n- self.layer.bias.assign(-m_init * scale_init)\n+ self.layer.bias = -m_init * scale_init\n self.layer.activation = existing_activation\n \n def get_config(self):\n", "issue": "WeightNormalization fails after update\nYesterday I cut a 0.3 release on pypi test, but as a sanity check on the weightnorm changes #208 I ran a quick regression check:\r\n\r\nhttps://colab.research.google.com/drive/1RQlVR9X7vj8q3W3sJC6YA3Cq45XTXrau\r\n\r\nThe new layer is failing with loss of nan. Looking at the test coverage for this layer I'm struggling how to best catch this without running a full model. The losses for our toy test cases do not return nan so perhaps we need a true regression test like mnist?\r\n\r\nEDIT -- Fix colab link\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# =============================================================================\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\nfrom tensorflow_addons.utils import keras_utils\n\n\n@keras_utils.register_keras_custom_object\nclass WeightNormalization(tf.keras.layers.Wrapper):\n \"\"\"This wrapper reparameterizes a layer by decoupling the weight's\n magnitude and direction.\n\n This speeds up convergence by improving the\n conditioning of the optimization problem.\n Weight Normalization: A Simple Reparameterization to Accelerate\n Training of Deep Neural Networks: https://arxiv.org/abs/1602.07868\n Tim Salimans, Diederik P. 
Kingma (2016)\n WeightNormalization wrapper works for keras and tf layers.\n ```python\n net = WeightNormalization(\n tf.keras.layers.Conv2D(2, 2, activation='relu'),\n input_shape=(32, 32, 3),\n data_init=True)(x)\n net = WeightNormalization(\n tf.keras.layers.Conv2D(16, 5, activation='relu'),\n data_init=True)(net)\n net = WeightNormalization(\n tf.keras.layers.Dense(120, activation='relu'),\n data_init=True)(net)\n net = WeightNormalization(\n tf.keras.layers.Dense(n_classes),\n data_init=True)(net)\n ```\n Arguments:\n layer: a layer instance.\n data_init: If `True` use data dependent variable initialization\n Raises:\n ValueError: If not initialized with a `Layer` instance.\n ValueError: If `Layer` does not contain a `kernel` of weights\n NotImplementedError: If `data_init` is True and running graph execution\n \"\"\"\n\n def __init__(self, layer, data_init=True, **kwargs):\n super(WeightNormalization, self).__init__(layer, **kwargs)\n self.data_init = data_init\n self._initialized = False\n self._track_trackable(layer, name='layer')\n\n def build(self, input_shape):\n \"\"\"Build `Layer`\"\"\"\n input_shape = tf.TensorShape(input_shape).as_list()\n self.input_spec = tf.keras.layers.InputSpec(shape=input_shape)\n\n if not self.layer.built:\n self.layer.build(input_shape)\n\n if not hasattr(self.layer, 'kernel'):\n raise ValueError('`WeightNormalization` must wrap a layer that'\n ' contains a `kernel` for weights')\n\n # The kernel's filter or unit dimension is -1\n self.layer_depth = int(self.layer.kernel.shape[-1])\n self.kernel_norm_axes = list(\n range(self.layer.kernel.shape.rank - 1))\n\n self.v = self.layer.kernel\n self.g = self.add_variable(\n name=\"g\",\n shape=(self.layer_depth,),\n initializer=tf.keras.initializers.get('ones'),\n dtype=self.layer.kernel.dtype,\n trainable=True)\n\n super(WeightNormalization, self).build()\n\n def call(self, inputs):\n \"\"\"Call `Layer`\"\"\"\n if not self._initialized:\n self._initialize_weights(inputs)\n\n self._compute_weights() # Recompute weights for each forward pass\n output = self.layer(inputs)\n return output\n\n def compute_output_shape(self, input_shape):\n return tf.TensorShape(\n self.layer.compute_output_shape(input_shape).as_list())\n\n def _compute_weights(self):\n \"\"\"Generate normalized weights.\n\n This method will update the value of self.layer.kernel with the\n normalized value, so that the layer is ready for call().\n \"\"\"\n with tf.name_scope('compute_weights'):\n self.layer.kernel = tf.nn.l2_normalize(\n self.v, axis=self.kernel_norm_axes) * self.g\n\n def _initialize_weights(self, inputs):\n \"\"\"Initialize weight g.\n\n The initial value of g could either from the initial value in v,\n or by the input value if self.data_init is True.\n \"\"\"\n if self.data_init:\n self._data_dep_init(inputs)\n else:\n self._init_norm()\n self._initialized = True\n\n def _init_norm(self):\n \"\"\"Set the weight g with the norm of the weight vector.\"\"\"\n with tf.name_scope('init_norm'):\n flat = tf.reshape(self.v, [-1, self.layer_depth])\n self.g.assign(\n tf.reshape(tf.linalg.norm(flat, axis=0), (self.layer_depth,)))\n\n def _data_dep_init(self, inputs):\n \"\"\"Data dependent initialization.\"\"\"\n\n with tf.name_scope('data_dep_init'):\n # Generate data dependent init values\n existing_activation = self.layer.activation\n self.layer.activation = None\n x_init = self.layer(inputs)\n data_norm_axes = list(range(x_init.shape.rank - 1))\n m_init, v_init = tf.nn.moments(x_init, data_norm_axes)\n scale_init = 1. 
/ tf.math.sqrt(v_init + 1e-10)\n\n # Assign data dependent init values\n self.g.assign(self.g * scale_init)\n if hasattr(self.layer, 'bias'):\n self.layer.bias.assign(-m_init * scale_init)\n self.layer.activation = existing_activation\n\n def get_config(self):\n config = {'data_init': self.data_init}\n base_config = super(WeightNormalization, self).get_config()\n return dict(list(base_config.items()) + list(config.items()))\n", "path": "tensorflow_addons/layers/wrappers.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# =============================================================================\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\nfrom tensorflow_addons.utils import keras_utils\n\n\n@keras_utils.register_keras_custom_object\nclass WeightNormalization(tf.keras.layers.Wrapper):\n \"\"\"This wrapper reparameterizes a layer by decoupling the weight's\n magnitude and direction.\n\n This speeds up convergence by improving the\n conditioning of the optimization problem.\n Weight Normalization: A Simple Reparameterization to Accelerate\n Training of Deep Neural Networks: https://arxiv.org/abs/1602.07868\n Tim Salimans, Diederik P. 
Kingma (2016)\n WeightNormalization wrapper works for keras and tf layers.\n ```python\n net = WeightNormalization(\n tf.keras.layers.Conv2D(2, 2, activation='relu'),\n input_shape=(32, 32, 3),\n data_init=True)(x)\n net = WeightNormalization(\n tf.keras.layers.Conv2D(16, 5, activation='relu'),\n data_init=True)(net)\n net = WeightNormalization(\n tf.keras.layers.Dense(120, activation='relu'),\n data_init=True)(net)\n net = WeightNormalization(\n tf.keras.layers.Dense(n_classes),\n data_init=True)(net)\n ```\n Arguments:\n layer: a layer instance.\n data_init: If `True` use data dependent variable initialization\n Raises:\n ValueError: If not initialized with a `Layer` instance.\n ValueError: If `Layer` does not contain a `kernel` of weights\n NotImplementedError: If `data_init` is True and running graph execution\n \"\"\"\n\n def __init__(self, layer, data_init=True, **kwargs):\n super(WeightNormalization, self).__init__(layer, **kwargs)\n self.data_init = data_init\n self._initialized = False\n self._track_trackable(layer, name='layer')\n\n def build(self, input_shape):\n \"\"\"Build `Layer`\"\"\"\n input_shape = tf.TensorShape(input_shape).as_list()\n self.input_spec = tf.keras.layers.InputSpec(shape=input_shape)\n\n if not self.layer.built:\n self.layer.build(input_shape)\n\n if not hasattr(self.layer, 'kernel'):\n raise ValueError('`WeightNormalization` must wrap a layer that'\n ' contains a `kernel` for weights')\n\n # The kernel's filter or unit dimension is -1\n self.layer_depth = int(self.layer.kernel.shape[-1])\n self.kernel_norm_axes = list(\n range(self.layer.kernel.shape.rank - 1))\n\n self.v = self.layer.kernel\n self.g = self.add_variable(\n name=\"g\",\n shape=(self.layer_depth,),\n initializer=tf.keras.initializers.get('ones'),\n dtype=self.layer.kernel.dtype,\n trainable=True)\n\n super(WeightNormalization, self).build()\n\n @tf.function\n def call(self, inputs):\n \"\"\"Call `Layer`\"\"\"\n if not self._initialized:\n self._initialize_weights(inputs)\n\n self._compute_weights() # Recompute weights for each forward pass\n output = self.layer(inputs)\n return output\n\n def compute_output_shape(self, input_shape):\n return tf.TensorShape(\n self.layer.compute_output_shape(input_shape).as_list())\n\n def _compute_weights(self):\n \"\"\"Generate normalized weights.\n\n This method will update the value of self.layer.kernel with the\n normalized value, so that the layer is ready for call().\n \"\"\"\n with tf.name_scope('compute_weights'):\n self.layer.kernel = tf.nn.l2_normalize(\n self.v, axis=self.kernel_norm_axes) * self.g\n\n def _initialize_weights(self, inputs):\n \"\"\"Initialize weight g.\n\n The initial value of g could either from the initial value in v,\n or by the input value if self.data_init is True.\n \"\"\"\n if self.data_init:\n self._data_dep_init(inputs)\n else:\n self._init_norm()\n self._initialized = True\n\n def _init_norm(self):\n \"\"\"Set the weight g with the norm of the weight vector.\"\"\"\n with tf.name_scope('init_norm'):\n flat = tf.reshape(self.v, [-1, self.layer_depth])\n self.g.assign(\n tf.reshape(tf.linalg.norm(flat, axis=0), (self.layer_depth,)))\n\n def _data_dep_init(self, inputs):\n \"\"\"Data dependent initialization.\"\"\"\n\n with tf.name_scope('data_dep_init'):\n # Generate data dependent init values\n existing_activation = self.layer.activation\n self.layer.activation = None\n x_init = self.layer(inputs)\n data_norm_axes = list(range(x_init.shape.rank - 1))\n m_init, v_init = tf.nn.moments(x_init, data_norm_axes)\n scale_init = 
1. / tf.math.sqrt(v_init + 1e-10)\n\n # Assign data dependent init values\n self.g = self.g * scale_init\n if hasattr(self.layer, 'bias'):\n self.layer.bias = -m_init * scale_init\n self.layer.activation = existing_activation\n\n def get_config(self):\n config = {'data_init': self.data_init}\n base_config = super(WeightNormalization, self).get_config()\n return dict(list(base_config.items()) + list(config.items()))\n", "path": "tensorflow_addons/layers/wrappers.py"}]} | 2,050 | 222 |
gh_patches_debug_12300 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-729 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Model saving is calling on `git` which causes an issue when the `subprocess` library is from the system
**Describe the bug**
When the `subprocess` library is called from the system installation of Python, the `git` invocation does not work, since there is no git repo for the system Python installation.
**To Reproduce**
Steps to reproduce the behavior:
1. Start GaNDLF training
2. See the following error:
```bash
$ > cat ${error_log}
[SNIP!]
File "/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/compute/training_loop.py", line 290, in training_loop
save_model(
File "/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/utils/modelio.py", line 159, in save_model
subprocess.check_output(["git", "rev-parse", "HEAD"])
File "/N/soft/sles15/deeplearning/Python-3.10.10/Lib/subprocess.py", line 421, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/N/soft/sles15/deeplearning/Python-3.10.10/Lib/subprocess.py", line 503, in run
with Popen(*popenargs, **kwargs) as process:
File "/N/soft/sles15/deeplearning/Python-3.10.10/Lib/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/N/soft/sles15/deeplearning/Python-3.10.10/Lib/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
```
**Expected behavior**
The model saving should work.
**Screenshots**
N.A.
**GaNDLF Version**
<!-- Put the output of the following command:
python -c 'import GANDLF as g;print(g.__version__)'
-->
0.0.17-dev
**Desktop (please complete the following information):**
N.A.
**Additional context**
This is seen for the IU BigRed cluster.
--- END ISSUE ---
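A minimal, self-contained sketch of the defensive pattern the traceback points toward is shown below. It is illustrative only (the helper name is hypothetical and this is not necessarily the project's eventual fix): the git hash lookup falls back to a harmless default instead of crashing model saving.
```python
# Illustrative sketch (not the repository's code): retrieve the current git
# commit hash, falling back to None when git is unavailable or the current
# working directory is not a git checkout.
import subprocess


def get_git_hash():
    try:
        return (
            subprocess.check_output(["git", "rev-parse", "HEAD"])
            .decode("ascii")
            .strip()
        )
    except (subprocess.CalledProcessError, FileNotFoundError):
        # CalledProcessError: `git` ran but failed (e.g. not a git repository);
        # FileNotFoundError: the `git` executable itself is not on PATH.
        return None
```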
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `GANDLF/utils/modelio.py`
Content:
```
1 import hashlib
2 import os
3 import subprocess
4 from typing import Any, Dict
5
6 import torch
7
8 from ..version import __version__
9 from .generic import get_unique_timestamp
10
11 # these are the base keys for the model dictionary to save
12 model_dict_full = {
13 "epoch": 0,
14 "model_state_dict": None,
15 "optimizer_state_dict": None,
16 "loss": None,
17 "timestamp": None,
18 "timestamp_hash": None,
19 "git_hash": None,
20 "version": None,
21 }
22
23 model_dict_required = {
24 "model_state_dict": None,
25 "optimizer_state_dict": None,
26 }
27
28 best_model_path_end = "_best.pth.tar"
29 latest_model_path_end = "_latest.pth.tar"
30 initial_model_path_end = "_initial.pth.tar"
31
32
33 def optimize_and_save_model(model, params, path, onnx_export=True):
34 """
35 Perform post-training optimization and save it to a file.
36
37 Args:
38 model (torch.nn.Module): Trained torch model.
39 params (dict): The parameter dictionary.
40 path (str): The path to save the model dictionary to.
41 onnx_export (bool): Whether to export to ONNX and OpenVINO.
42 """
43 # Check if ONNX export is enabled in the parameter dictionary
44 onnx_export = params["model"].get("onnx_export", onnx_export)
45
46 # Check for incompatible topologies and disable ONNX export
47 # Customized imagenet_vgg no longer supported for ONNX export
48 if onnx_export:
49 architecture = params["model"]["architecture"]
50 if architecture in ["sdnet", "brain_age"] or "imagenet_vgg" in architecture:
51 onnx_export = False
52
53 if not onnx_export:
54 # Print a warning if ONNX export is disabled and not already warned
55 if "onnx_print" not in params:
56 print("WARNING: Current model is not supported by ONNX/OpenVINO!")
57 params["onnx_print"] = True
58 return
59 else:
60 try:
61 print("Optimizing the best model.")
62 num_channel = params["model"]["num_channels"]
63 model_dimension = params["model"]["dimension"]
64 input_shape = params["patch_size"]
65 onnx_path = path
66 if not onnx_path.endswith(".onnx"):
67 onnx_path = onnx_path.replace("pth.tar", "onnx")
68
69 if model_dimension == 2:
70 dummy_input = torch.randn(
71 (1, num_channel, input_shape[0], input_shape[1])
72 )
73 else:
74 dummy_input = torch.randn(
75 (1, num_channel, input_shape[0], input_shape[1], input_shape[2])
76 )
77
78 # Export the model to ONNX format
79 with torch.no_grad():
80 torch.onnx.export(
81 model.to("cpu"),
82 dummy_input.to("cpu"),
83 onnx_path,
84 opset_version=11,
85 export_params=True,
86 verbose=True,
87 input_names=["input"],
88 output_names=["output"],
89 )
90 except RuntimeWarning:
91 print("WARNING: Cannot export to ONNX model.")
92 return
93
94 # Check if OpenVINO is present and try to convert the ONNX model
95 openvino_present = False
96 try:
97 import openvino as ov
98 from openvino.tools.mo import convert_model
99 from openvino.runtime import get_version
100
101 openvino_present = False
102 # check for the correct openvino version to prevent inadvertent api breaks
103 if "2023.0.1" in get_version():
104 openvino_present = True
105 except ImportError:
106 print("WARNING: OpenVINO is not present.")
107
108 if openvino_present:
109 xml_path = onnx_path.replace("onnx", "xml")
110 bin_path = onnx_path.replace("onnx", "bin")
111 try:
112 if model_dimension == 2:
113 ov_model = convert_model(
114 onnx_path,
115 input_shape=(1, num_channel, input_shape[0], input_shape[1]),
116 )
117 else:
118 ov_model = convert_model(
119 onnx_path,
120 input_shape=(
121 1,
122 num_channel,
123 input_shape[0],
124 input_shape[1],
125 input_shape[2],
126 ),
127 )
128 ov.runtime.serialize(ov_model, xml_path=xml_path, bin_path=bin_path)
129 except Exception as e:
130 print("WARNING: OpenVINO Model Optimizer IR conversion failed: " + e)
131
132
133 def save_model(
134 model_dict: Dict[str, Any],
135 model: torch.nn.Module,
136 params: Dict[str, Any],
137 path: str,
138 onnx_export: bool = True,
139 ):
140 """
141 Save the model dictionary to a file.
142
143 Args:
144 model_dict (dict): Model dictionary to save.
145 model (torch.nn.Module): Trained torch model.
146 params (dict): The parameter dictionary.
147 path (str): The path to save the model dictionary to.
148 onnx_export (bool): Whether to export to ONNX and OpenVINO.
149 """
150 model_dict["timestamp"] = get_unique_timestamp()
151 model_dict["timestamp_hash"] = hashlib.sha256(
152 str(model_dict["timestamp"]).encode("utf-8")
153 ).hexdigest()
154 model_dict["version"] = __version__
155 model_dict["parameters"] = params
156
157 try:
158 model_dict["git_hash"] = (
159 subprocess.check_output(["git", "rev-parse", "HEAD"])
160 .decode("ascii")
161 .strip()
162 )
163 except subprocess.CalledProcessError:
164 model_dict["git_hash"] = None
165
166 torch.save(model_dict, path)
167
168 # post-training optimization
169 optimize_and_save_model(model, params, path, onnx_export=onnx_export)
170
171
172 def load_model(
173 path: str, device: torch.device, full_sanity_check: bool = True
174 ) -> Dict[str, Any]:
175 """
176 Load a model dictionary from a file.
177
178 Args:
179 path (str): The path to save the model dictionary to.
180 device (torch.device): The device to run the model on.
181 full_sanity_check (bool): Whether to run full sanity checking on the model.
182
183 Returns:
184 dict: Model dictionary containing model parameters and metadata.
185 """
186 model_dict = torch.load(path, map_location=device)
187
188 # check if the model dictionary is complete
189 if full_sanity_check:
190 incomplete_keys = [
191 key for key in model_dict_full.keys() if key not in model_dict.keys()
192 ]
193 if len(incomplete_keys) > 0:
194 raise RuntimeWarning(
195 "Model dictionary is incomplete; the following keys are missing:",
196 incomplete_keys,
197 )
198
199 # check if required keys are absent, and if so raise an error
200 incomplete_required_keys = [
201 key for key in model_dict_required.keys() if key not in model_dict.keys()
202 ]
203 if len(incomplete_required_keys) > 0:
204 raise KeyError(
205 "Model dictionary is incomplete; the following keys are missing:",
206 incomplete_required_keys,
207 )
208
209 return model_dict
210
211
212 def load_ov_model(path: str, device: str = "CPU"):
213 """
214 Load an OpenVINO IR model from an .xml file.
215
216 Args:
217 path (str): The path to the OpenVINO .xml file.
218 device (str): The device to run inference, can be "CPU", "GPU" or "MULTI:CPU,GPU". Default to be "CPU".
219
220 Returns:
221 exec_net (OpenVINO executable net): executable OpenVINO model.
222 input_blob (str): Input name.
223 output_blob (str): Output name.
224 """
225
226 try:
227 from openvino import runtime as ov
228 except ImportError:
229 raise ImportError("OpenVINO inference engine is not configured correctly.")
230
231 core = ov.Core()
232 if device.lower() == "cuda":
233 device = "GPU"
234
235 if device == "GPU":
236 core.set_property({"CACHE_DIR": os.path.dirname(os.path.abspath(path))})
237
238 model = core.read_model(model=path, weights=path.replace("xml", "bin"))
239 compiled_model = core.compile_model(model=model, device_name=device.upper())
240 input_layer = compiled_model.inputs
241 output_layer = compiled_model.outputs
242
243 return compiled_model, input_layer, output_layer
244
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/GANDLF/utils/modelio.py b/GANDLF/utils/modelio.py
--- a/GANDLF/utils/modelio.py
+++ b/GANDLF/utils/modelio.py
@@ -155,13 +155,14 @@
model_dict["parameters"] = params
try:
+ # this will try to encode the git hash of the current GaNDLF codebase, and reverts to "None" if not found
model_dict["git_hash"] = (
- subprocess.check_output(["git", "rev-parse", "HEAD"])
+ subprocess.check_output(["git", "rev-parse", "HEAD"], cwd=os.getcwd())
.decode("ascii")
.strip()
)
- except subprocess.CalledProcessError:
- model_dict["git_hash"] = None
+ except (subprocess.CalledProcessError, FileNotFoundError):
+ model_dict["git_hash"] = "None"
torch.save(model_dict, path)
| {"golden_diff": "diff --git a/GANDLF/utils/modelio.py b/GANDLF/utils/modelio.py\n--- a/GANDLF/utils/modelio.py\n+++ b/GANDLF/utils/modelio.py\n@@ -155,13 +155,14 @@\n model_dict[\"parameters\"] = params\n \n try:\n+ # this will try to encode the git hash of the current GaNDLF codebase, and reverts to \"None\" if not found\n model_dict[\"git_hash\"] = (\n- subprocess.check_output([\"git\", \"rev-parse\", \"HEAD\"])\n+ subprocess.check_output([\"git\", \"rev-parse\", \"HEAD\"], cwd=os.getcwd())\n .decode(\"ascii\")\n .strip()\n )\n- except subprocess.CalledProcessError:\n- model_dict[\"git_hash\"] = None\n+ except (subprocess.CalledProcessError, FileNotFoundError):\n+ model_dict[\"git_hash\"] = \"None\"\n \n torch.save(model_dict, path)\n", "issue": "Model saving is calling on `git` which causes an issue when the `subprocess` library is from the system\n**Describe the bug**\r\nWhen the `subprocess` library is called from the system installation of python, `git` invocation does not work, since there is no git repo for the system python installation. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Start GaNDLF training\r\n2. See the following error:\r\n```bash\r\n$ > cat ${error_log}\r\n[SNIP!]\r\n File \"/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/compute/training_loop.py\", line 290, in training_loop\r\n save_model(\r\n File \"/geode2/home/u070/patis/BigRed200/projects/gandlf_mine/GANDLF/utils/modelio.py\", line 159, in save_model\r\n subprocess.check_output([\"git\", \"rev-parse\", \"HEAD\"])\r\n File \"/N/soft/sles15/deeplearning/Python-3.10.10/Lib/subprocess.py\", line 421, in check_output\r\n return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,\r\n File \"/N/soft/sles15/deeplearning/Python-3.10.10/Lib/subprocess.py\", line 503, in run\r\n with Popen(*popenargs, **kwargs) as process:\r\n File \"/N/soft/sles15/deeplearning/Python-3.10.10/Lib/subprocess.py\", line 971, in __init__\r\n self._execute_child(args, executable, preexec_fn, close_fds,\r\n File \"/N/soft/sles15/deeplearning/Python-3.10.10/Lib/subprocess.py\", line 1847, in _execute_child\r\n raise child_exception_type(errno_num, err_msg, err_filename)\r\n```\r\n\r\n**Expected behavior**\r\nThe model saving should work.\r\n\r\n**Screenshots**\r\nN.A.\r\n\r\n**GaNDLF Version**\r\n<!-- Put the output of the following command:\r\npython -c 'import GANDLF as g;print(g.__version__)'\r\n-->\r\n0.0.17-dev\r\n\r\n**Desktop (please complete the following information):**\r\nN.A.\r\n\r\n**Additional context**\r\nThis is seen for the IU BigRed cluster. 
\n", "before_files": [{"content": "import hashlib\nimport os\nimport subprocess\nfrom typing import Any, Dict\n\nimport torch\n\nfrom ..version import __version__\nfrom .generic import get_unique_timestamp\n\n# these are the base keys for the model dictionary to save\nmodel_dict_full = {\n \"epoch\": 0,\n \"model_state_dict\": None,\n \"optimizer_state_dict\": None,\n \"loss\": None,\n \"timestamp\": None,\n \"timestamp_hash\": None,\n \"git_hash\": None,\n \"version\": None,\n}\n\nmodel_dict_required = {\n \"model_state_dict\": None,\n \"optimizer_state_dict\": None,\n}\n\nbest_model_path_end = \"_best.pth.tar\"\nlatest_model_path_end = \"_latest.pth.tar\"\ninitial_model_path_end = \"_initial.pth.tar\"\n\n\ndef optimize_and_save_model(model, params, path, onnx_export=True):\n \"\"\"\n Perform post-training optimization and save it to a file.\n\n Args:\n model (torch.nn.Module): Trained torch model.\n params (dict): The parameter dictionary.\n path (str): The path to save the model dictionary to.\n onnx_export (bool): Whether to export to ONNX and OpenVINO.\n \"\"\"\n # Check if ONNX export is enabled in the parameter dictionary\n onnx_export = params[\"model\"].get(\"onnx_export\", onnx_export)\n\n # Check for incompatible topologies and disable ONNX export\n # Customized imagenet_vgg no longer supported for ONNX export\n if onnx_export:\n architecture = params[\"model\"][\"architecture\"]\n if architecture in [\"sdnet\", \"brain_age\"] or \"imagenet_vgg\" in architecture:\n onnx_export = False\n\n if not onnx_export:\n # Print a warning if ONNX export is disabled and not already warned\n if \"onnx_print\" not in params:\n print(\"WARNING: Current model is not supported by ONNX/OpenVINO!\")\n params[\"onnx_print\"] = True\n return\n else:\n try:\n print(\"Optimizing the best model.\")\n num_channel = params[\"model\"][\"num_channels\"]\n model_dimension = params[\"model\"][\"dimension\"]\n input_shape = params[\"patch_size\"]\n onnx_path = path\n if not onnx_path.endswith(\".onnx\"):\n onnx_path = onnx_path.replace(\"pth.tar\", \"onnx\")\n\n if model_dimension == 2:\n dummy_input = torch.randn(\n (1, num_channel, input_shape[0], input_shape[1])\n )\n else:\n dummy_input = torch.randn(\n (1, num_channel, input_shape[0], input_shape[1], input_shape[2])\n )\n\n # Export the model to ONNX format\n with torch.no_grad():\n torch.onnx.export(\n model.to(\"cpu\"),\n dummy_input.to(\"cpu\"),\n onnx_path,\n opset_version=11,\n export_params=True,\n verbose=True,\n input_names=[\"input\"],\n output_names=[\"output\"],\n )\n except RuntimeWarning:\n print(\"WARNING: Cannot export to ONNX model.\")\n return\n\n # Check if OpenVINO is present and try to convert the ONNX model\n openvino_present = False\n try:\n import openvino as ov\n from openvino.tools.mo import convert_model\n from openvino.runtime import get_version\n\n openvino_present = False\n # check for the correct openvino version to prevent inadvertent api breaks\n if \"2023.0.1\" in get_version():\n openvino_present = True\n except ImportError:\n print(\"WARNING: OpenVINO is not present.\")\n\n if openvino_present:\n xml_path = onnx_path.replace(\"onnx\", \"xml\")\n bin_path = onnx_path.replace(\"onnx\", \"bin\")\n try:\n if model_dimension == 2:\n ov_model = convert_model(\n onnx_path,\n input_shape=(1, num_channel, input_shape[0], input_shape[1]),\n )\n else:\n ov_model = convert_model(\n onnx_path,\n input_shape=(\n 1,\n num_channel,\n input_shape[0],\n input_shape[1],\n input_shape[2],\n ),\n )\n ov.runtime.serialize(ov_model, 
xml_path=xml_path, bin_path=bin_path)\n except Exception as e:\n print(\"WARNING: OpenVINO Model Optimizer IR conversion failed: \" + e)\n\n\ndef save_model(\n model_dict: Dict[str, Any],\n model: torch.nn.Module,\n params: Dict[str, Any],\n path: str,\n onnx_export: bool = True,\n):\n \"\"\"\n Save the model dictionary to a file.\n\n Args:\n model_dict (dict): Model dictionary to save.\n model (torch.nn.Module): Trained torch model.\n params (dict): The parameter dictionary.\n path (str): The path to save the model dictionary to.\n onnx_export (bool): Whether to export to ONNX and OpenVINO.\n \"\"\"\n model_dict[\"timestamp\"] = get_unique_timestamp()\n model_dict[\"timestamp_hash\"] = hashlib.sha256(\n str(model_dict[\"timestamp\"]).encode(\"utf-8\")\n ).hexdigest()\n model_dict[\"version\"] = __version__\n model_dict[\"parameters\"] = params\n\n try:\n model_dict[\"git_hash\"] = (\n subprocess.check_output([\"git\", \"rev-parse\", \"HEAD\"])\n .decode(\"ascii\")\n .strip()\n )\n except subprocess.CalledProcessError:\n model_dict[\"git_hash\"] = None\n\n torch.save(model_dict, path)\n\n # post-training optimization\n optimize_and_save_model(model, params, path, onnx_export=onnx_export)\n\n\ndef load_model(\n path: str, device: torch.device, full_sanity_check: bool = True\n) -> Dict[str, Any]:\n \"\"\"\n Load a model dictionary from a file.\n\n Args:\n path (str): The path to save the model dictionary to.\n device (torch.device): The device to run the model on.\n full_sanity_check (bool): Whether to run full sanity checking on the model.\n\n Returns:\n dict: Model dictionary containing model parameters and metadata.\n \"\"\"\n model_dict = torch.load(path, map_location=device)\n\n # check if the model dictionary is complete\n if full_sanity_check:\n incomplete_keys = [\n key for key in model_dict_full.keys() if key not in model_dict.keys()\n ]\n if len(incomplete_keys) > 0:\n raise RuntimeWarning(\n \"Model dictionary is incomplete; the following keys are missing:\",\n incomplete_keys,\n )\n\n # check if required keys are absent, and if so raise an error\n incomplete_required_keys = [\n key for key in model_dict_required.keys() if key not in model_dict.keys()\n ]\n if len(incomplete_required_keys) > 0:\n raise KeyError(\n \"Model dictionary is incomplete; the following keys are missing:\",\n incomplete_required_keys,\n )\n\n return model_dict\n\n\ndef load_ov_model(path: str, device: str = \"CPU\"):\n \"\"\"\n Load an OpenVINO IR model from an .xml file.\n\n Args:\n path (str): The path to the OpenVINO .xml file.\n device (str): The device to run inference, can be \"CPU\", \"GPU\" or \"MULTI:CPU,GPU\". 
Default to be \"CPU\".\n\n Returns:\n exec_net (OpenVINO executable net): executable OpenVINO model.\n input_blob (str): Input name.\n output_blob (str): Output name.\n \"\"\"\n\n try:\n from openvino import runtime as ov\n except ImportError:\n raise ImportError(\"OpenVINO inference engine is not configured correctly.\")\n\n core = ov.Core()\n if device.lower() == \"cuda\":\n device = \"GPU\"\n\n if device == \"GPU\":\n core.set_property({\"CACHE_DIR\": os.path.dirname(os.path.abspath(path))})\n\n model = core.read_model(model=path, weights=path.replace(\"xml\", \"bin\"))\n compiled_model = core.compile_model(model=model, device_name=device.upper())\n input_layer = compiled_model.inputs\n output_layer = compiled_model.outputs\n\n return compiled_model, input_layer, output_layer\n", "path": "GANDLF/utils/modelio.py"}], "after_files": [{"content": "import hashlib\nimport os\nimport subprocess\nfrom typing import Any, Dict\n\nimport torch\n\nfrom ..version import __version__\nfrom .generic import get_unique_timestamp\n\n# these are the base keys for the model dictionary to save\nmodel_dict_full = {\n \"epoch\": 0,\n \"model_state_dict\": None,\n \"optimizer_state_dict\": None,\n \"loss\": None,\n \"timestamp\": None,\n \"timestamp_hash\": None,\n \"git_hash\": None,\n \"version\": None,\n}\n\nmodel_dict_required = {\n \"model_state_dict\": None,\n \"optimizer_state_dict\": None,\n}\n\nbest_model_path_end = \"_best.pth.tar\"\nlatest_model_path_end = \"_latest.pth.tar\"\ninitial_model_path_end = \"_initial.pth.tar\"\n\n\ndef optimize_and_save_model(model, params, path, onnx_export=True):\n \"\"\"\n Perform post-training optimization and save it to a file.\n\n Args:\n model (torch.nn.Module): Trained torch model.\n params (dict): The parameter dictionary.\n path (str): The path to save the model dictionary to.\n onnx_export (bool): Whether to export to ONNX and OpenVINO.\n \"\"\"\n # Check if ONNX export is enabled in the parameter dictionary\n onnx_export = params[\"model\"].get(\"onnx_export\", onnx_export)\n\n # Check for incompatible topologies and disable ONNX export\n # Customized imagenet_vgg no longer supported for ONNX export\n if onnx_export:\n architecture = params[\"model\"][\"architecture\"]\n if architecture in [\"sdnet\", \"brain_age\"] or \"imagenet_vgg\" in architecture:\n onnx_export = False\n\n if not onnx_export:\n # Print a warning if ONNX export is disabled and not already warned\n if \"onnx_print\" not in params:\n print(\"WARNING: Current model is not supported by ONNX/OpenVINO!\")\n params[\"onnx_print\"] = True\n return\n else:\n try:\n print(\"Optimizing the best model.\")\n num_channel = params[\"model\"][\"num_channels\"]\n model_dimension = params[\"model\"][\"dimension\"]\n input_shape = params[\"patch_size\"]\n onnx_path = path\n if not onnx_path.endswith(\".onnx\"):\n onnx_path = onnx_path.replace(\"pth.tar\", \"onnx\")\n\n if model_dimension == 2:\n dummy_input = torch.randn(\n (1, num_channel, input_shape[0], input_shape[1])\n )\n else:\n dummy_input = torch.randn(\n (1, num_channel, input_shape[0], input_shape[1], input_shape[2])\n )\n\n # Export the model to ONNX format\n with torch.no_grad():\n torch.onnx.export(\n model.to(\"cpu\"),\n dummy_input.to(\"cpu\"),\n onnx_path,\n opset_version=11,\n export_params=True,\n verbose=True,\n input_names=[\"input\"],\n output_names=[\"output\"],\n )\n except RuntimeWarning:\n print(\"WARNING: Cannot export to ONNX model.\")\n return\n\n # Check if OpenVINO is present and try to convert the ONNX model\n 
openvino_present = False\n try:\n import openvino as ov\n from openvino.tools.mo import convert_model\n from openvino.runtime import get_version\n\n openvino_present = False\n # check for the correct openvino version to prevent inadvertent api breaks\n if \"2023.0.1\" in get_version():\n openvino_present = True\n except ImportError:\n print(\"WARNING: OpenVINO is not present.\")\n\n if openvino_present:\n xml_path = onnx_path.replace(\"onnx\", \"xml\")\n bin_path = onnx_path.replace(\"onnx\", \"bin\")\n try:\n if model_dimension == 2:\n ov_model = convert_model(\n onnx_path,\n input_shape=(1, num_channel, input_shape[0], input_shape[1]),\n )\n else:\n ov_model = convert_model(\n onnx_path,\n input_shape=(\n 1,\n num_channel,\n input_shape[0],\n input_shape[1],\n input_shape[2],\n ),\n )\n ov.runtime.serialize(ov_model, xml_path=xml_path, bin_path=bin_path)\n except Exception as e:\n print(\"WARNING: OpenVINO Model Optimizer IR conversion failed: \" + e)\n\n\ndef save_model(\n model_dict: Dict[str, Any],\n model: torch.nn.Module,\n params: Dict[str, Any],\n path: str,\n onnx_export: bool = True,\n):\n \"\"\"\n Save the model dictionary to a file.\n\n Args:\n model_dict (dict): Model dictionary to save.\n model (torch.nn.Module): Trained torch model.\n params (dict): The parameter dictionary.\n path (str): The path to save the model dictionary to.\n onnx_export (bool): Whether to export to ONNX and OpenVINO.\n \"\"\"\n model_dict[\"timestamp\"] = get_unique_timestamp()\n model_dict[\"timestamp_hash\"] = hashlib.sha256(\n str(model_dict[\"timestamp\"]).encode(\"utf-8\")\n ).hexdigest()\n model_dict[\"version\"] = __version__\n model_dict[\"parameters\"] = params\n\n try:\n # this will try to encode the git hash of the current GaNDLF codebase, and reverts to \"None\" if not found\n model_dict[\"git_hash\"] = (\n subprocess.check_output([\"git\", \"rev-parse\", \"HEAD\"], cwd=os.getcwd())\n .decode(\"ascii\")\n .strip()\n )\n except (subprocess.CalledProcessError, FileNotFoundError):\n model_dict[\"git_hash\"] = \"None\"\n\n torch.save(model_dict, path)\n\n # post-training optimization\n optimize_and_save_model(model, params, path, onnx_export=onnx_export)\n\n\ndef load_model(\n path: str, device: torch.device, full_sanity_check: bool = True\n) -> Dict[str, Any]:\n \"\"\"\n Load a model dictionary from a file.\n\n Args:\n path (str): The path to save the model dictionary to.\n device (torch.device): The device to run the model on.\n full_sanity_check (bool): Whether to run full sanity checking on the model.\n\n Returns:\n dict: Model dictionary containing model parameters and metadata.\n \"\"\"\n model_dict = torch.load(path, map_location=device)\n\n # check if the model dictionary is complete\n if full_sanity_check:\n incomplete_keys = [\n key for key in model_dict_full.keys() if key not in model_dict.keys()\n ]\n if len(incomplete_keys) > 0:\n raise RuntimeWarning(\n \"Model dictionary is incomplete; the following keys are missing:\",\n incomplete_keys,\n )\n\n # check if required keys are absent, and if so raise an error\n incomplete_required_keys = [\n key for key in model_dict_required.keys() if key not in model_dict.keys()\n ]\n if len(incomplete_required_keys) > 0:\n raise KeyError(\n \"Model dictionary is incomplete; the following keys are missing:\",\n incomplete_required_keys,\n )\n\n return model_dict\n\n\ndef load_ov_model(path: str, device: str = \"CPU\"):\n \"\"\"\n Load an OpenVINO IR model from an .xml file.\n\n Args:\n path (str): The path to the OpenVINO .xml file.\n device 
(str): The device to run inference, can be \"CPU\", \"GPU\" or \"MULTI:CPU,GPU\". Default to be \"CPU\".\n\n Returns:\n exec_net (OpenVINO executable net): executable OpenVINO model.\n input_blob (str): Input name.\n output_blob (str): Output name.\n \"\"\"\n\n try:\n from openvino import runtime as ov\n except ImportError:\n raise ImportError(\"OpenVINO inference engine is not configured correctly.\")\n\n core = ov.Core()\n if device.lower() == \"cuda\":\n device = \"GPU\"\n\n if device == \"GPU\":\n core.set_property({\"CACHE_DIR\": os.path.dirname(os.path.abspath(path))})\n\n model = core.read_model(model=path, weights=path.replace(\"xml\", \"bin\"))\n compiled_model = core.compile_model(model=model, device_name=device.upper())\n input_layer = compiled_model.inputs\n output_layer = compiled_model.outputs\n\n return compiled_model, input_layer, output_layer\n", "path": "GANDLF/utils/modelio.py"}]} | 3,214 | 208 |
gh_patches_debug_22600 | rasdani/github-patches | git_diff | pyca__cryptography-1349 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
_truncate_digest_for_ecdsa in the OpenSSL backend should be commented
Right now there are no comments explaining why it exists, or what it's attempting to do.
--- END ISSUE ---
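For readers unfamiliar with the underlying rule: ECDSA uses only the leftmost bits of the message digest, up to the bit length of the curve's group order, so long digests (e.g. SHA-512) must be truncated before signing with a shorter curve. A standalone Python 3 sketch of the same truncation logic as the function listed below (illustrative only, independent of the library code):
```python
# Truncate a hash digest so it is no longer than the curve order's bit length,
# mirroring the behaviour of _truncate_digest_for_ecdsa.
import hashlib


def truncate_digest(digest: bytes, order_bits: int) -> bytes:
    digest_len = len(digest)
    if 8 * digest_len > order_bits:
        # Keep only the leftmost bytes that fit within the order's bit length.
        digest_len = (order_bits + 7) // 8
        digest = digest[:digest_len]
    if 8 * digest_len > order_bits:
        # For orders that are not a whole number of bytes, zero the extra
        # low-order bits of the final byte.
        rshift = 8 - (order_bits & 0x7)
        mask = 0xFF >> rshift << rshift
        digest = digest[:-1] + bytes([digest[-1] & mask])
    return digest


d = hashlib.sha512(b"message").digest()  # 64 bytes
print(len(truncate_digest(d, 256)))      # 32 bytes for a 256-bit order (P-256)
```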
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cryptography/hazmat/backends/openssl/ec.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 import six
17
18 from cryptography import utils
19 from cryptography.exceptions import (
20 InvalidSignature, UnsupportedAlgorithm, _Reasons
21 )
22 from cryptography.hazmat.primitives import hashes, interfaces
23 from cryptography.hazmat.primitives.asymmetric import ec
24
25
26 def _truncate_digest_for_ecdsa(ec_key_cdata, digest, backend):
27 _lib = backend._lib
28 _ffi = backend._ffi
29
30 digest_len = len(digest)
31
32 group = _lib.EC_KEY_get0_group(ec_key_cdata)
33
34 bn_ctx = _lib.BN_CTX_new()
35 assert bn_ctx != _ffi.NULL
36 bn_ctx = _ffi.gc(bn_ctx, _lib.BN_CTX_free)
37
38 order = _lib.BN_CTX_get(bn_ctx)
39 assert order != _ffi.NULL
40
41 res = _lib.EC_GROUP_get_order(group, order, bn_ctx)
42 assert res == 1
43
44 order_bits = _lib.BN_num_bits(order)
45
46 if 8 * digest_len > order_bits:
47 digest_len = (order_bits + 7) // 8
48 digest = digest[:digest_len]
49
50 if 8 * digest_len > order_bits:
51 rshift = 8 - (order_bits & 0x7)
52 assert rshift > 0 and rshift < 8
53
54 mask = 0xFF >> rshift << rshift
55
56 # Set the bottom rshift bits to 0
57 digest = digest[:-1] + six.int2byte(six.indexbytes(digest, -1) & mask)
58
59 return digest
60
61
62 @utils.register_interface(interfaces.AsymmetricSignatureContext)
63 class _ECDSASignatureContext(object):
64 def __init__(self, backend, private_key, algorithm):
65 self._backend = backend
66 self._private_key = private_key
67 self._digest = hashes.Hash(algorithm, backend)
68
69 def update(self, data):
70 self._digest.update(data)
71
72 def finalize(self):
73 ec_key = self._private_key._ec_key
74
75 digest = self._digest.finalize()
76
77 digest = _truncate_digest_for_ecdsa(ec_key, digest, self._backend)
78
79 max_size = self._backend._lib.ECDSA_size(ec_key)
80 assert max_size > 0
81
82 sigbuf = self._backend._ffi.new("char[]", max_size)
83 siglen_ptr = self._backend._ffi.new("unsigned int[]", 1)
84 res = self._backend._lib.ECDSA_sign(
85 0,
86 digest,
87 len(digest),
88 sigbuf,
89 siglen_ptr,
90 ec_key
91 )
92 assert res == 1
93 return self._backend._ffi.buffer(sigbuf)[:siglen_ptr[0]]
94
95
96 @utils.register_interface(interfaces.AsymmetricVerificationContext)
97 class _ECDSAVerificationContext(object):
98 def __init__(self, backend, public_key, signature, algorithm):
99 self._backend = backend
100 self._public_key = public_key
101 self._signature = signature
102 self._digest = hashes.Hash(algorithm, backend)
103
104 def update(self, data):
105 self._digest.update(data)
106
107 def verify(self):
108 ec_key = self._public_key._ec_key
109
110 digest = self._digest.finalize()
111
112 digest = _truncate_digest_for_ecdsa(ec_key, digest, self._backend)
113
114 res = self._backend._lib.ECDSA_verify(
115 0,
116 digest,
117 len(digest),
118 self._signature,
119 len(self._signature),
120 ec_key
121 )
122 if res != 1:
123 self._backend._consume_errors()
124 raise InvalidSignature
125 return True
126
127
128 @utils.register_interface(interfaces.EllipticCurvePrivateKey)
129 class _EllipticCurvePrivateKey(object):
130 def __init__(self, backend, ec_key_cdata, curve):
131 self._backend = backend
132 self._ec_key = ec_key_cdata
133 self._curve = curve
134
135 @property
136 def curve(self):
137 return self._curve
138
139 def signer(self, signature_algorithm):
140 if isinstance(signature_algorithm, ec.ECDSA):
141 return _ECDSASignatureContext(
142 self._backend, self, signature_algorithm.algorithm
143 )
144 else:
145 raise UnsupportedAlgorithm(
146 "Unsupported elliptic curve signature algorithm.",
147 _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM)
148
149 def public_key(self):
150 group = self._backend._lib.EC_KEY_get0_group(self._ec_key)
151 assert group != self._backend._ffi.NULL
152
153 curve_nid = self._backend._lib.EC_GROUP_get_curve_name(group)
154
155 public_ec_key = self._backend._lib.EC_KEY_new_by_curve_name(curve_nid)
156 assert public_ec_key != self._backend._ffi.NULL
157 public_ec_key = self._backend._ffi.gc(
158 public_ec_key, self._backend._lib.EC_KEY_free
159 )
160
161 point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key)
162 assert point != self._backend._ffi.NULL
163
164 res = self._backend._lib.EC_KEY_set_public_key(public_ec_key, point)
165 assert res == 1
166
167 return _EllipticCurvePublicKey(
168 self._backend, public_ec_key, self._curve
169 )
170
171
172 @utils.register_interface(interfaces.EllipticCurvePublicKey)
173 class _EllipticCurvePublicKey(object):
174 def __init__(self, backend, ec_key_cdata, curve):
175 self._backend = backend
176 self._ec_key = ec_key_cdata
177 self._curve = curve
178
179 @property
180 def curve(self):
181 return self._curve
182
183 def verifier(self, signature, signature_algorithm):
184 if isinstance(signature_algorithm, ec.ECDSA):
185 return _ECDSAVerificationContext(
186 self._backend, self, signature, signature_algorithm.algorithm
187 )
188 else:
189 raise UnsupportedAlgorithm(
190 "Unsupported elliptic curve signature algorithm.",
191 _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM)
192
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cryptography/hazmat/backends/openssl/ec.py b/cryptography/hazmat/backends/openssl/ec.py
--- a/cryptography/hazmat/backends/openssl/ec.py
+++ b/cryptography/hazmat/backends/openssl/ec.py
@@ -24,6 +24,13 @@
def _truncate_digest_for_ecdsa(ec_key_cdata, digest, backend):
+ """
+ This function truncates digests that are longer than a given elliptic
+ curve key's length so they can be signed. Since elliptic curve keys are
+ much shorter than RSA keys many digests (e.g. SHA-512) may require
+ truncation.
+ """
+
_lib = backend._lib
_ffi = backend._ffi
@@ -31,17 +38,14 @@
group = _lib.EC_KEY_get0_group(ec_key_cdata)
- bn_ctx = _lib.BN_CTX_new()
- assert bn_ctx != _ffi.NULL
- bn_ctx = _ffi.gc(bn_ctx, _lib.BN_CTX_free)
-
- order = _lib.BN_CTX_get(bn_ctx)
- assert order != _ffi.NULL
+ with backend._bn_ctx_manager() as bn_ctx:
+ order = _lib.BN_CTX_get(bn_ctx)
+ assert order != _ffi.NULL
- res = _lib.EC_GROUP_get_order(group, order, bn_ctx)
- assert res == 1
+ res = _lib.EC_GROUP_get_order(group, order, bn_ctx)
+ assert res == 1
- order_bits = _lib.BN_num_bits(order)
+ order_bits = _lib.BN_num_bits(order)
if 8 * digest_len > order_bits:
digest_len = (order_bits + 7) // 8
| {"golden_diff": "diff --git a/cryptography/hazmat/backends/openssl/ec.py b/cryptography/hazmat/backends/openssl/ec.py\n--- a/cryptography/hazmat/backends/openssl/ec.py\n+++ b/cryptography/hazmat/backends/openssl/ec.py\n@@ -24,6 +24,13 @@\n \n \n def _truncate_digest_for_ecdsa(ec_key_cdata, digest, backend):\n+ \"\"\"\n+ This function truncates digests that are longer than a given elliptic\n+ curve key's length so they can be signed. Since elliptic curve keys are\n+ much shorter than RSA keys many digests (e.g. SHA-512) may require\n+ truncation.\n+ \"\"\"\n+\n _lib = backend._lib\n _ffi = backend._ffi\n \n@@ -31,17 +38,14 @@\n \n group = _lib.EC_KEY_get0_group(ec_key_cdata)\n \n- bn_ctx = _lib.BN_CTX_new()\n- assert bn_ctx != _ffi.NULL\n- bn_ctx = _ffi.gc(bn_ctx, _lib.BN_CTX_free)\n-\n- order = _lib.BN_CTX_get(bn_ctx)\n- assert order != _ffi.NULL\n+ with backend._bn_ctx_manager() as bn_ctx:\n+ order = _lib.BN_CTX_get(bn_ctx)\n+ assert order != _ffi.NULL\n \n- res = _lib.EC_GROUP_get_order(group, order, bn_ctx)\n- assert res == 1\n+ res = _lib.EC_GROUP_get_order(group, order, bn_ctx)\n+ assert res == 1\n \n- order_bits = _lib.BN_num_bits(order)\n+ order_bits = _lib.BN_num_bits(order)\n \n if 8 * digest_len > order_bits:\n digest_len = (order_bits + 7) // 8\n", "issue": "_truncate_digest_for_ecdsa in the OpenSSL document should be commented\nRight now there are no comments explaining why it exists, or what it's attempting to do.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import (\n InvalidSignature, UnsupportedAlgorithm, _Reasons\n)\nfrom cryptography.hazmat.primitives import hashes, interfaces\nfrom cryptography.hazmat.primitives.asymmetric import ec\n\n\ndef _truncate_digest_for_ecdsa(ec_key_cdata, digest, backend):\n _lib = backend._lib\n _ffi = backend._ffi\n\n digest_len = len(digest)\n\n group = _lib.EC_KEY_get0_group(ec_key_cdata)\n\n bn_ctx = _lib.BN_CTX_new()\n assert bn_ctx != _ffi.NULL\n bn_ctx = _ffi.gc(bn_ctx, _lib.BN_CTX_free)\n\n order = _lib.BN_CTX_get(bn_ctx)\n assert order != _ffi.NULL\n\n res = _lib.EC_GROUP_get_order(group, order, bn_ctx)\n assert res == 1\n\n order_bits = _lib.BN_num_bits(order)\n\n if 8 * digest_len > order_bits:\n digest_len = (order_bits + 7) // 8\n digest = digest[:digest_len]\n\n if 8 * digest_len > order_bits:\n rshift = 8 - (order_bits & 0x7)\n assert rshift > 0 and rshift < 8\n\n mask = 0xFF >> rshift << rshift\n\n # Set the bottom rshift bits to 0\n digest = digest[:-1] + six.int2byte(six.indexbytes(digest, -1) & mask)\n\n return digest\n\n\[email protected]_interface(interfaces.AsymmetricSignatureContext)\nclass _ECDSASignatureContext(object):\n def __init__(self, backend, private_key, algorithm):\n self._backend = backend\n self._private_key = private_key\n self._digest = hashes.Hash(algorithm, backend)\n\n def update(self, data):\n 
self._digest.update(data)\n\n def finalize(self):\n ec_key = self._private_key._ec_key\n\n digest = self._digest.finalize()\n\n digest = _truncate_digest_for_ecdsa(ec_key, digest, self._backend)\n\n max_size = self._backend._lib.ECDSA_size(ec_key)\n assert max_size > 0\n\n sigbuf = self._backend._ffi.new(\"char[]\", max_size)\n siglen_ptr = self._backend._ffi.new(\"unsigned int[]\", 1)\n res = self._backend._lib.ECDSA_sign(\n 0,\n digest,\n len(digest),\n sigbuf,\n siglen_ptr,\n ec_key\n )\n assert res == 1\n return self._backend._ffi.buffer(sigbuf)[:siglen_ptr[0]]\n\n\[email protected]_interface(interfaces.AsymmetricVerificationContext)\nclass _ECDSAVerificationContext(object):\n def __init__(self, backend, public_key, signature, algorithm):\n self._backend = backend\n self._public_key = public_key\n self._signature = signature\n self._digest = hashes.Hash(algorithm, backend)\n\n def update(self, data):\n self._digest.update(data)\n\n def verify(self):\n ec_key = self._public_key._ec_key\n\n digest = self._digest.finalize()\n\n digest = _truncate_digest_for_ecdsa(ec_key, digest, self._backend)\n\n res = self._backend._lib.ECDSA_verify(\n 0,\n digest,\n len(digest),\n self._signature,\n len(self._signature),\n ec_key\n )\n if res != 1:\n self._backend._consume_errors()\n raise InvalidSignature\n return True\n\n\[email protected]_interface(interfaces.EllipticCurvePrivateKey)\nclass _EllipticCurvePrivateKey(object):\n def __init__(self, backend, ec_key_cdata, curve):\n self._backend = backend\n self._ec_key = ec_key_cdata\n self._curve = curve\n\n @property\n def curve(self):\n return self._curve\n\n def signer(self, signature_algorithm):\n if isinstance(signature_algorithm, ec.ECDSA):\n return _ECDSASignatureContext(\n self._backend, self, signature_algorithm.algorithm\n )\n else:\n raise UnsupportedAlgorithm(\n \"Unsupported elliptic curve signature algorithm.\",\n _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM)\n\n def public_key(self):\n group = self._backend._lib.EC_KEY_get0_group(self._ec_key)\n assert group != self._backend._ffi.NULL\n\n curve_nid = self._backend._lib.EC_GROUP_get_curve_name(group)\n\n public_ec_key = self._backend._lib.EC_KEY_new_by_curve_name(curve_nid)\n assert public_ec_key != self._backend._ffi.NULL\n public_ec_key = self._backend._ffi.gc(\n public_ec_key, self._backend._lib.EC_KEY_free\n )\n\n point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key)\n assert point != self._backend._ffi.NULL\n\n res = self._backend._lib.EC_KEY_set_public_key(public_ec_key, point)\n assert res == 1\n\n return _EllipticCurvePublicKey(\n self._backend, public_ec_key, self._curve\n )\n\n\[email protected]_interface(interfaces.EllipticCurvePublicKey)\nclass _EllipticCurvePublicKey(object):\n def __init__(self, backend, ec_key_cdata, curve):\n self._backend = backend\n self._ec_key = ec_key_cdata\n self._curve = curve\n\n @property\n def curve(self):\n return self._curve\n\n def verifier(self, signature, signature_algorithm):\n if isinstance(signature_algorithm, ec.ECDSA):\n return _ECDSAVerificationContext(\n self._backend, self, signature, signature_algorithm.algorithm\n )\n else:\n raise UnsupportedAlgorithm(\n \"Unsupported elliptic curve signature algorithm.\",\n _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM)\n", "path": "cryptography/hazmat/backends/openssl/ec.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# 
http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport six\n\nfrom cryptography import utils\nfrom cryptography.exceptions import (\n InvalidSignature, UnsupportedAlgorithm, _Reasons\n)\nfrom cryptography.hazmat.primitives import hashes, interfaces\nfrom cryptography.hazmat.primitives.asymmetric import ec\n\n\ndef _truncate_digest_for_ecdsa(ec_key_cdata, digest, backend):\n \"\"\"\n This function truncates digests that are longer than a given elliptic\n curve key's length so they can be signed. Since elliptic curve keys are\n much shorter than RSA keys many digests (e.g. SHA-512) may require\n truncation.\n \"\"\"\n\n _lib = backend._lib\n _ffi = backend._ffi\n\n digest_len = len(digest)\n\n group = _lib.EC_KEY_get0_group(ec_key_cdata)\n\n with backend._bn_ctx_manager() as bn_ctx:\n order = _lib.BN_CTX_get(bn_ctx)\n assert order != _ffi.NULL\n\n res = _lib.EC_GROUP_get_order(group, order, bn_ctx)\n assert res == 1\n\n order_bits = _lib.BN_num_bits(order)\n\n if 8 * digest_len > order_bits:\n digest_len = (order_bits + 7) // 8\n digest = digest[:digest_len]\n\n if 8 * digest_len > order_bits:\n rshift = 8 - (order_bits & 0x7)\n assert rshift > 0 and rshift < 8\n\n mask = 0xFF >> rshift << rshift\n\n # Set the bottom rshift bits to 0\n digest = digest[:-1] + six.int2byte(six.indexbytes(digest, -1) & mask)\n\n return digest\n\n\[email protected]_interface(interfaces.AsymmetricSignatureContext)\nclass _ECDSASignatureContext(object):\n def __init__(self, backend, private_key, algorithm):\n self._backend = backend\n self._private_key = private_key\n self._digest = hashes.Hash(algorithm, backend)\n\n def update(self, data):\n self._digest.update(data)\n\n def finalize(self):\n ec_key = self._private_key._ec_key\n\n digest = self._digest.finalize()\n\n digest = _truncate_digest_for_ecdsa(ec_key, digest, self._backend)\n\n max_size = self._backend._lib.ECDSA_size(ec_key)\n assert max_size > 0\n\n sigbuf = self._backend._ffi.new(\"char[]\", max_size)\n siglen_ptr = self._backend._ffi.new(\"unsigned int[]\", 1)\n res = self._backend._lib.ECDSA_sign(\n 0,\n digest,\n len(digest),\n sigbuf,\n siglen_ptr,\n ec_key\n )\n assert res == 1\n return self._backend._ffi.buffer(sigbuf)[:siglen_ptr[0]]\n\n\[email protected]_interface(interfaces.AsymmetricVerificationContext)\nclass _ECDSAVerificationContext(object):\n def __init__(self, backend, public_key, signature, algorithm):\n self._backend = backend\n self._public_key = public_key\n self._signature = signature\n self._digest = hashes.Hash(algorithm, backend)\n\n def update(self, data):\n self._digest.update(data)\n\n def verify(self):\n ec_key = self._public_key._ec_key\n\n digest = self._digest.finalize()\n\n digest = _truncate_digest_for_ecdsa(ec_key, digest, self._backend)\n\n res = self._backend._lib.ECDSA_verify(\n 0,\n digest,\n len(digest),\n self._signature,\n len(self._signature),\n ec_key\n )\n if res != 1:\n self._backend._consume_errors()\n raise InvalidSignature\n return True\n\n\[email protected]_interface(interfaces.EllipticCurvePrivateKey)\nclass _EllipticCurvePrivateKey(object):\n def __init__(self, backend, ec_key_cdata, curve):\n self._backend = 
backend\n self._ec_key = ec_key_cdata\n self._curve = curve\n\n @property\n def curve(self):\n return self._curve\n\n def signer(self, signature_algorithm):\n if isinstance(signature_algorithm, ec.ECDSA):\n return _ECDSASignatureContext(\n self._backend, self, signature_algorithm.algorithm\n )\n else:\n raise UnsupportedAlgorithm(\n \"Unsupported elliptic curve signature algorithm.\",\n _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM)\n\n def public_key(self):\n group = self._backend._lib.EC_KEY_get0_group(self._ec_key)\n assert group != self._backend._ffi.NULL\n\n curve_nid = self._backend._lib.EC_GROUP_get_curve_name(group)\n\n public_ec_key = self._backend._lib.EC_KEY_new_by_curve_name(curve_nid)\n assert public_ec_key != self._backend._ffi.NULL\n public_ec_key = self._backend._ffi.gc(\n public_ec_key, self._backend._lib.EC_KEY_free\n )\n\n point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key)\n assert point != self._backend._ffi.NULL\n\n res = self._backend._lib.EC_KEY_set_public_key(public_ec_key, point)\n assert res == 1\n\n return _EllipticCurvePublicKey(\n self._backend, public_ec_key, self._curve\n )\n\n\[email protected]_interface(interfaces.EllipticCurvePublicKey)\nclass _EllipticCurvePublicKey(object):\n def __init__(self, backend, ec_key_cdata, curve):\n self._backend = backend\n self._ec_key = ec_key_cdata\n self._curve = curve\n\n @property\n def curve(self):\n return self._curve\n\n def verifier(self, signature, signature_algorithm):\n if isinstance(signature_algorithm, ec.ECDSA):\n return _ECDSAVerificationContext(\n self._backend, self, signature, signature_algorithm.algorithm\n )\n else:\n raise UnsupportedAlgorithm(\n \"Unsupported elliptic curve signature algorithm.\",\n _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM)\n", "path": "cryptography/hazmat/backends/openssl/ec.py"}]} | 2,217 | 411 |
gh_patches_debug_26665 | rasdani/github-patches | git_diff | netbox-community__netbox-11076 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Beta: CachedValue search values may (and do) exceed maximum PostgreSQL B-Tree index row size
### NetBox version
v3.4-beta1
### Python version
3.10
### Steps to Reproduce
1. Have a NetBox instance with a lot of real-life data
2. Try to migrate data from NetBox 3.3.8 to 3.4-beta1
### Expected Behavior
Migrations succeed
### Observed Behavior
Migration `extras.0083_search` fails.
Log:
```bash
$ NETBOX_DELETE_LEGACY_DATA=1 netbox/manage.py migrate
netbox.denormalized: DEBUG Registering denormalized field <class 'dcim.models.cables.CableTermination'>._device
netbox.denormalized: DEBUG Registering denormalized field <class 'dcim.models.cables.CableTermination'>._rack
netbox.denormalized: DEBUG Registering denormalized field <class 'dcim.models.cables.CableTermination'>._location
Operations to perform:
Apply all migrations: admin, auth, circuits, contenttypes, dcim, django_rq, extras, ipam, sessions, social_django, taggit, tenancy, users, virtualization, wireless
Running migrations:
Applying extras.0083_search...Reindexing 63 models.
Clearing cached values... 0 entries deleted.
Indexing models
circuits.circuit... 153 entries cached.
circuits.circuittermination... 234 entries cached.
circuits.circuittype... 20 entries cached.
circuits.provider... 17 entries cached.
circuits.providernetwork... 2 entries cached.
dcim.cable... 13284 entries cached.
dcim.consoleport... 6478 entries cached.
dcim.consoleserverport... 1378 entries cached.
dcim.device... 39290 entries cached.
dcim.devicebay... 422 entries cached.
dcim.devicerole... 68 entries cached.
dcim.devicetype... 1022 entries cached.
dcim.frontport... 21296 entries cached.
dcim.interface... 805260 entries cached.
dcim.inventoryitem... None found.
dcim.location... 469 entries cached.
dcim.manufacturer... 144 entries cached.
dcim.module... 40 entries cached.
dcim.modulebay... 136 entries cached.
dcim.moduletype... 24 entries cached.
dcim.platform... 212 entries cached.
dcim.powerfeed... 48 entries cached.
dcim.poweroutlet... 464 entries cached.
dcim.powerpanel... 3 entries cached.
dcim.powerport... 33112 entries cached.
dcim.rack... 7741 entries cached.
dcim.rackreservation... 223 entries cached.
dcim.rackrole... 33 entries cached.
dcim.rearport... 17705 entries cached.
dcim.region... 39 entries cached.
dcim.site... 270 entries cached.
dcim.sitegroup... 18 entries cached.
dcim.virtualchassis... 347 entries cached.
dcim.virtualdevicecontext... None found.
ipam.aggregate... 647 entries cached.
ipam.asn... 18 entries cached.
ipam.fhrpgroup... None found.
ipam.ipaddress... 159974 entries cached.
ipam.iprange... None found.
ipam.l2vpn... None found.
ipam.prefix... 43320 entries cached.
ipam.rir... 12 entries cached.
ipam.role... 9 entries cached.
ipam.routetarget... 129 entries cached.
ipam.service... None found.
ipam.servicetemplate... None found.
ipam.vlan... 30219 entries cached.
ipam.vlangroup... 298 entries cached.
ipam.vrf... 7301 entries cached.
extras.journalentry... 23594 entries cached.
tenancy.contact... 24 entries cached.
tenancy.contactgroup... 2 entries cached.
tenancy.contactrole... 6 entries cached.
tenancy.tenant... 14755 entries cached.
tenancy.tenantgroup... 9 entries cached.
virtualization.cluster... 257 entries cached.
virtualization.clustergroup... 161 entries cached.
virtualization.clustertype... 14 entries cached.
virtualization.virtualmachine... 25878 entries cached.
virtualization.vminterface... 13559 entries cached.
wireless.wirelesslan... None found.
wireless.wirelesslangroup... None found.
wireless.wirelesslink... None found.
Completed. Total entries: 1270138
Traceback (most recent call last):
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.ProgramLimitExceeded: FEHLER: Indexzeile benötigt 122744 Bytes, Maximalgröße ist 8191
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/var/dev/netbox/netbox/manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/core/management/base.py", line 402, in run_from_argv
self.execute(*args, **cmd_options)
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/core/management/base.py", line 448, in execute
output = self.handle(*args, **options)
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/core/management/base.py", line 96, in wrapped
res = handle_func(*args, **kwargs)
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/core/management/commands/migrate.py", line 349, in handle
post_migrate_state = executor.migrate(
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/migrations/executor.py", line 135, in migrate
state = self._migrate_all_forwards(
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/migrations/executor.py", line 167, in _migrate_all_forwards
state = self.apply_migration(
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/migrations/executor.py", line 249, in apply_migration
with self.connection.schema_editor(
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/base/schema.py", line 164, in __exit__
self.execute(sql)
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/base/schema.py", line 199, in execute
cursor.execute(sql, params)
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/utils.py", line 103, in execute
return super().execute(sql, params)
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
django.db.utils.OperationalError: FEHLER: Indexzeile benötigt 122744 Bytes, Maximalgröße ist 8191
```
which translates to: `ERROR: index row requires 122744 bytes, maximum size is 8191`
This seems to be the issue from the perspective of the migration:
* https://github.com/netbox-community/netbox/blob/v3.4-beta1/netbox/extras/migrations/0083_search.py#L12
* causes run of: https://github.com/netbox-community/netbox/blob/v3.4-beta1/netbox/extras/management/commands/reindex.py
* causes run of: https://github.com/netbox-community/netbox/blob/v3.4-beta1/netbox/netbox/search/backends.py#L138
As I am not a database engineer, I'm somewhat guessing my way through here, so in case I've missed an obvious configuration value, please keep that in mind.
Django seems to use B-Tree indexes in PostgreSQL by default: https://docs.djangoproject.com/en/4.1/ref/models/indexes/#index-options
B-Tree indexes are described in PostgreSQL-docs with a size limit:
> ["The only limitation is that an index entry cannot exceed approximately one-third of a page (after TOAST compression, if applicable)."](https://www.postgresql.org/docs/current/btree-intro.html)
From my understanding of the matter, the limit is exceeded because a [CachedValue.value](https://github.com/netbox-community/netbox/blob/v3.4-beta1/netbox/extras/models/search.py#L39) entry may be too large for a B-Tree index row.
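As a rough, standalone illustration (the model below is made up for the example and is not NetBox code), this is the pattern that runs into the limit, and the commented-out variant is the kind of change that avoids it:
```python
# Illustrative sketch only: a minimal Django model on a PostgreSQL backend.
from django.db import models


class CachedValueSketch(models.Model):
    # db_index=True makes Django emit CREATE INDEX ... USING btree ("value"),
    # so inserting a value longer than roughly one third of an 8 kB page fails
    # with "index row requires N bytes, maximum size is 8191".
    value = models.TextField(db_index=True)

    # A plain, un-indexed TextField (or an index over a bounded expression such
    # as a hash or prefix of the value) accepts arbitrarily long entries:
    # value = models.TextField()
```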
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/extras/migrations/0083_search.py`
Content:
```
1 import sys
2 import uuid
3
4 import django.db.models.deletion
5 from django.core import management
6 from django.db import migrations, models
7
8
9 def reindex(apps, schema_editor):
10 # Build the search index (except during tests)
11 if 'test' not in sys.argv:
12 management.call_command('reindex')
13
14
15 class Migration(migrations.Migration):
16
17 dependencies = [
18 ('circuits', '0041_standardize_description_comments'),
19 ('contenttypes', '0002_remove_content_type_name'),
20 ('dcim', '0166_virtualdevicecontext'),
21 ('extras', '0082_savedfilter'),
22 ('ipam', '0063_standardize_description_comments'),
23 ('tenancy', '0009_standardize_description_comments'),
24 ('virtualization', '0034_standardize_description_comments'),
25 ('wireless', '0008_wirelesslan_status'),
26 ]
27
28 operations = [
29 migrations.AddField(
30 model_name='customfield',
31 name='search_weight',
32 field=models.PositiveSmallIntegerField(default=1000),
33 ),
34 migrations.CreateModel(
35 name='CachedValue',
36 fields=[
37 ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),
38 ('timestamp', models.DateTimeField(auto_now_add=True)),
39 ('object_id', models.PositiveBigIntegerField()),
40 ('field', models.CharField(max_length=200)),
41 ('type', models.CharField(max_length=30)),
42 ('value', models.TextField(db_index=True)),
43 ('weight', models.PositiveSmallIntegerField(default=1000)),
44 ('object_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='contenttypes.contenttype')),
45 ],
46 options={
47 'ordering': ('weight', 'object_type', 'object_id'),
48 },
49 ),
50 migrations.RunPython(
51 code=reindex,
52 reverse_code=migrations.RunPython.noop
53 ),
54 ]
55
```
Path: `netbox/extras/models/search.py`
Content:
```
1 import uuid
2
3 from django.contrib.contenttypes.models import ContentType
4 from django.db import models
5
6 from utilities.fields import RestrictedGenericForeignKey
7
8 __all__ = (
9 'CachedValue',
10 )
11
12
13 class CachedValue(models.Model):
14 id = models.UUIDField(
15 primary_key=True,
16 default=uuid.uuid4,
17 editable=False
18 )
19 timestamp = models.DateTimeField(
20 auto_now_add=True,
21 editable=False
22 )
23 object_type = models.ForeignKey(
24 to=ContentType,
25 on_delete=models.CASCADE,
26 related_name='+'
27 )
28 object_id = models.PositiveBigIntegerField()
29 object = RestrictedGenericForeignKey(
30 ct_field='object_type',
31 fk_field='object_id'
32 )
33 field = models.CharField(
34 max_length=200
35 )
36 type = models.CharField(
37 max_length=30
38 )
39 value = models.TextField(
40 db_index=True
41 )
42 weight = models.PositiveSmallIntegerField(
43 default=1000
44 )
45
46 class Meta:
47 ordering = ('weight', 'object_type', 'object_id')
48
49 def __str__(self):
50 return f'{self.object_type} {self.object_id}: {self.field}={self.value}'
51
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/netbox/extras/migrations/0083_search.py b/netbox/extras/migrations/0083_search.py
--- a/netbox/extras/migrations/0083_search.py
+++ b/netbox/extras/migrations/0083_search.py
@@ -2,6 +2,7 @@
import uuid
import django.db.models.deletion
+import django.db.models.lookups
from django.core import management
from django.db import migrations, models
@@ -39,7 +40,7 @@
('object_id', models.PositiveBigIntegerField()),
('field', models.CharField(max_length=200)),
('type', models.CharField(max_length=30)),
- ('value', models.TextField(db_index=True)),
+ ('value', models.TextField()),
('weight', models.PositiveSmallIntegerField(default=1000)),
('object_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='contenttypes.contenttype')),
],
diff --git a/netbox/extras/models/search.py b/netbox/extras/models/search.py
--- a/netbox/extras/models/search.py
+++ b/netbox/extras/models/search.py
@@ -36,9 +36,7 @@
type = models.CharField(
max_length=30
)
- value = models.TextField(
- db_index=True
- )
+ value = models.TextField()
weight = models.PositiveSmallIntegerField(
default=1000
)
| {"golden_diff": "diff --git a/netbox/extras/migrations/0083_search.py b/netbox/extras/migrations/0083_search.py\n--- a/netbox/extras/migrations/0083_search.py\n+++ b/netbox/extras/migrations/0083_search.py\n@@ -2,6 +2,7 @@\n import uuid\n \n import django.db.models.deletion\n+import django.db.models.lookups\n from django.core import management\n from django.db import migrations, models\n \n@@ -39,7 +40,7 @@\n ('object_id', models.PositiveBigIntegerField()),\n ('field', models.CharField(max_length=200)),\n ('type', models.CharField(max_length=30)),\n- ('value', models.TextField(db_index=True)),\n+ ('value', models.TextField()),\n ('weight', models.PositiveSmallIntegerField(default=1000)),\n ('object_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='contenttypes.contenttype')),\n ],\ndiff --git a/netbox/extras/models/search.py b/netbox/extras/models/search.py\n--- a/netbox/extras/models/search.py\n+++ b/netbox/extras/models/search.py\n@@ -36,9 +36,7 @@\n type = models.CharField(\n max_length=30\n )\n- value = models.TextField(\n- db_index=True\n- )\n+ value = models.TextField()\n weight = models.PositiveSmallIntegerField(\n default=1000\n )\n", "issue": "Beta: CachedValue search values may (and do) exceed maximum PostgreSQL B-Tree index row size\n### NetBox version\n\nv3.4-beta1\n\n### Python version\n\n3.10\n\n### Steps to Reproduce\n\n1. Have a NetBox instance with a lot of real-life data\r\n2. Try to migrate data from NetBox 3.3.8 to 3.4-beta1\n\n### Expected Behavior\n\nMigrations succeed\n\n### Observed Behavior\n\nMigration `extras.0083_search` fails.\r\n\r\nLog:\r\n```bash\r\n$ NETBOX_DELETE_LEGACY_DATA=1 netbox/manage.py migrate\r\nnetbox.denormalized: DEBUG Registering denormalized field <class 'dcim.models.cables.CableTermination'>._device\r\nnetbox.denormalized: DEBUG Registering denormalized field <class 'dcim.models.cables.CableTermination'>._rack\r\nnetbox.denormalized: DEBUG Registering denormalized field <class 'dcim.models.cables.CableTermination'>._location\r\nOperations to perform:\r\n Apply all migrations: admin, auth, circuits, contenttypes, dcim, django_rq, extras, ipam, sessions, social_django, taggit, tenancy, users, virtualization, wireless\r\nRunning migrations:\r\n Applying extras.0083_search...Reindexing 63 models.\r\nClearing cached values... 0 entries deleted.\r\nIndexing models\r\n circuits.circuit... 153 entries cached.\r\n circuits.circuittermination... 234 entries cached.\r\n circuits.circuittype... 20 entries cached.\r\n circuits.provider... 17 entries cached.\r\n circuits.providernetwork... 2 entries cached.\r\n dcim.cable... 13284 entries cached.\r\n dcim.consoleport... 6478 entries cached.\r\n dcim.consoleserverport... 1378 entries cached.\r\n dcim.device... 39290 entries cached.\r\n dcim.devicebay... 422 entries cached.\r\n dcim.devicerole... 68 entries cached.\r\n dcim.devicetype... 1022 entries cached.\r\n dcim.frontport... 21296 entries cached.\r\n dcim.interface... 805260 entries cached.\r\n dcim.inventoryitem... None found.\r\n dcim.location... 469 entries cached.\r\n dcim.manufacturer... 144 entries cached.\r\n dcim.module... 40 entries cached.\r\n dcim.modulebay... 136 entries cached.\r\n dcim.moduletype... 24 entries cached.\r\n dcim.platform... 212 entries cached.\r\n dcim.powerfeed... 48 entries cached.\r\n dcim.poweroutlet... 464 entries cached.\r\n dcim.powerpanel... 3 entries cached.\r\n dcim.powerport... 33112 entries cached.\r\n dcim.rack... 7741 entries cached.\r\n dcim.rackreservation... 
223 entries cached.\r\n dcim.rackrole... 33 entries cached.\r\n dcim.rearport... 17705 entries cached.\r\n dcim.region... 39 entries cached.\r\n dcim.site... 270 entries cached.\r\n dcim.sitegroup... 18 entries cached.\r\n dcim.virtualchassis... 347 entries cached.\r\n dcim.virtualdevicecontext... None found.\r\n ipam.aggregate... 647 entries cached.\r\n ipam.asn... 18 entries cached.\r\n ipam.fhrpgroup... None found.\r\n ipam.ipaddress... 159974 entries cached.\r\n ipam.iprange... None found.\r\n ipam.l2vpn... None found.\r\n ipam.prefix... 43320 entries cached.\r\n ipam.rir... 12 entries cached.\r\n ipam.role... 9 entries cached.\r\n ipam.routetarget... 129 entries cached.\r\n ipam.service... None found.\r\n ipam.servicetemplate... None found.\r\n ipam.vlan... 30219 entries cached.\r\n ipam.vlangroup... 298 entries cached.\r\n ipam.vrf... 7301 entries cached.\r\n extras.journalentry... 23594 entries cached.\r\n tenancy.contact... 24 entries cached.\r\n tenancy.contactgroup... 2 entries cached.\r\n tenancy.contactrole... 6 entries cached.\r\n tenancy.tenant... 14755 entries cached.\r\n tenancy.tenantgroup... 9 entries cached.\r\n virtualization.cluster... 257 entries cached.\r\n virtualization.clustergroup... 161 entries cached.\r\n virtualization.clustertype... 14 entries cached.\r\n virtualization.virtualmachine... 25878 entries cached.\r\n virtualization.vminterface... 13559 entries cached.\r\n wireless.wirelesslan... None found.\r\n wireless.wirelesslangroup... None found.\r\n wireless.wirelesslink... None found.\r\nCompleted. Total entries: 1270138\r\nTraceback (most recent call last):\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\npsycopg2.errors.ProgramLimitExceeded: FEHLER: Indexzeile ben\u00f6tigt 122744 Bytes, Maximalgr\u00f6\u00dfe ist 8191\r\n\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/var/dev/netbox/netbox/manage.py\", line 10, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/core/management/__init__.py\", line 446, in execute_from_command_line\r\n utility.execute()\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/core/management/__init__.py\", line 440, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/core/management/base.py\", line 402, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/core/management/base.py\", line 448, in execute\r\n output = self.handle(*args, **options)\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/core/management/base.py\", line 96, in wrapped\r\n res = handle_func(*args, **kwargs)\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/core/management/commands/migrate.py\", line 349, in handle\r\n post_migrate_state = executor.migrate(\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/migrations/executor.py\", line 135, in migrate\r\n state = self._migrate_all_forwards(\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/migrations/executor.py\", line 167, in _migrate_all_forwards\r\n state = self.apply_migration(\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/migrations/executor.py\", 
line 249, in apply_migration\r\n with self.connection.schema_editor(\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/base/schema.py\", line 164, in __exit__\r\n self.execute(sql)\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/base/schema.py\", line 199, in execute\r\n cursor.execute(sql, params)\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/utils.py\", line 103, in execute\r\n return super().execute(sql, params)\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/utils.py\", line 67, in execute\r\n return self._execute_with_wrappers(\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/utils.py\", line 80, in _execute_with_wrappers\r\n return executor(sql, params, many, context)\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/utils.py\", line 84, in _execute\r\n with self.db.wrap_database_errors:\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/utils.py\", line 91, in __exit__\r\n raise dj_exc_value.with_traceback(traceback) from exc_value\r\n File \"/var/dev/netbox/venv/lib64/python3.10/site-packages/django/db/backends/utils.py\", line 89, in _execute\r\n return self.cursor.execute(sql, params)\r\ndjango.db.utils.OperationalError: FEHLER: Indexzeile ben\u00f6tigt 122744 Bytes, Maximalgr\u00f6\u00dfe ist 8191\r\n```\r\nwhich translates to: `ERROR: index row requires 122744 bytes, maximum size is 8191`\r\n\r\nThis seems to be the issue from the perspective of the migration:\r\n* https://github.com/netbox-community/netbox/blob/v3.4-beta1/netbox/extras/migrations/0083_search.py#L12\r\n* causes run of: https://github.com/netbox-community/netbox/blob/v3.4-beta1/netbox/extras/management/commands/reindex.py\r\n* causes run of: https://github.com/netbox-community/netbox/blob/v3.4-beta1/netbox/netbox/search/backends.py#L138\r\n\r\nAs I am not a database engineer, I'm somewhat guessing my way through here, so in the case, I've missed an obvious configuration value, please have that in mind.\r\n\r\nDjango seems to use B-Tree indexes in PostgreSQL by default: https://docs.djangoproject.com/en/4.1/ref/models/indexes/#index-options\r\n\r\nB-Tree indexes are described in PostgreSQL-docs with a size limit:\r\n> [\"The only limitation is that an index entry cannot exceed approximately one-third of a page (after TOAST compression, if applicable).\"](https://www.postgresql.org/docs/current/btree-intro.html)\r\n\r\nFrom my understanding of the matter, the size is exceeded as a [CachedValue.value](https://github.com/netbox-community/netbox/blob/v3.4-beta1/netbox/extras/models/search.py#L39) may be too large for a B-Tree index row.\n", "before_files": [{"content": "import sys\nimport uuid\n\nimport django.db.models.deletion\nfrom django.core import management\nfrom django.db import migrations, models\n\n\ndef reindex(apps, schema_editor):\n # Build the search index (except during tests)\n if 'test' not in sys.argv:\n management.call_command('reindex')\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('circuits', '0041_standardize_description_comments'),\n ('contenttypes', '0002_remove_content_type_name'),\n ('dcim', '0166_virtualdevicecontext'),\n ('extras', '0082_savedfilter'),\n ('ipam', '0063_standardize_description_comments'),\n ('tenancy', '0009_standardize_description_comments'),\n ('virtualization', '0034_standardize_description_comments'),\n ('wireless', 
'0008_wirelesslan_status'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='customfield',\n name='search_weight',\n field=models.PositiveSmallIntegerField(default=1000),\n ),\n migrations.CreateModel(\n name='CachedValue',\n fields=[\n ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),\n ('timestamp', models.DateTimeField(auto_now_add=True)),\n ('object_id', models.PositiveBigIntegerField()),\n ('field', models.CharField(max_length=200)),\n ('type', models.CharField(max_length=30)),\n ('value', models.TextField(db_index=True)),\n ('weight', models.PositiveSmallIntegerField(default=1000)),\n ('object_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='contenttypes.contenttype')),\n ],\n options={\n 'ordering': ('weight', 'object_type', 'object_id'),\n },\n ),\n migrations.RunPython(\n code=reindex,\n reverse_code=migrations.RunPython.noop\n ),\n ]\n", "path": "netbox/extras/migrations/0083_search.py"}, {"content": "import uuid\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.db import models\n\nfrom utilities.fields import RestrictedGenericForeignKey\n\n__all__ = (\n 'CachedValue',\n)\n\n\nclass CachedValue(models.Model):\n id = models.UUIDField(\n primary_key=True,\n default=uuid.uuid4,\n editable=False\n )\n timestamp = models.DateTimeField(\n auto_now_add=True,\n editable=False\n )\n object_type = models.ForeignKey(\n to=ContentType,\n on_delete=models.CASCADE,\n related_name='+'\n )\n object_id = models.PositiveBigIntegerField()\n object = RestrictedGenericForeignKey(\n ct_field='object_type',\n fk_field='object_id'\n )\n field = models.CharField(\n max_length=200\n )\n type = models.CharField(\n max_length=30\n )\n value = models.TextField(\n db_index=True\n )\n weight = models.PositiveSmallIntegerField(\n default=1000\n )\n\n class Meta:\n ordering = ('weight', 'object_type', 'object_id')\n\n def __str__(self):\n return f'{self.object_type} {self.object_id}: {self.field}={self.value}'\n", "path": "netbox/extras/models/search.py"}], "after_files": [{"content": "import sys\nimport uuid\n\nimport django.db.models.deletion\nimport django.db.models.lookups\nfrom django.core import management\nfrom django.db import migrations, models\n\n\ndef reindex(apps, schema_editor):\n # Build the search index (except during tests)\n if 'test' not in sys.argv:\n management.call_command('reindex')\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('circuits', '0041_standardize_description_comments'),\n ('contenttypes', '0002_remove_content_type_name'),\n ('dcim', '0166_virtualdevicecontext'),\n ('extras', '0082_savedfilter'),\n ('ipam', '0063_standardize_description_comments'),\n ('tenancy', '0009_standardize_description_comments'),\n ('virtualization', '0034_standardize_description_comments'),\n ('wireless', '0008_wirelesslan_status'),\n ]\n\n operations = [\n migrations.AddField(\n model_name='customfield',\n name='search_weight',\n field=models.PositiveSmallIntegerField(default=1000),\n ),\n migrations.CreateModel(\n name='CachedValue',\n fields=[\n ('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False)),\n ('timestamp', models.DateTimeField(auto_now_add=True)),\n ('object_id', models.PositiveBigIntegerField()),\n ('field', models.CharField(max_length=200)),\n ('type', models.CharField(max_length=30)),\n ('value', models.TextField()),\n ('weight', models.PositiveSmallIntegerField(default=1000)),\n ('object_type', 
models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='+', to='contenttypes.contenttype')),\n ],\n options={\n 'ordering': ('weight', 'object_type', 'object_id'),\n },\n ),\n migrations.RunPython(\n code=reindex,\n reverse_code=migrations.RunPython.noop\n ),\n ]\n", "path": "netbox/extras/migrations/0083_search.py"}, {"content": "import uuid\n\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.db import models\n\nfrom utilities.fields import RestrictedGenericForeignKey\n\n__all__ = (\n 'CachedValue',\n)\n\n\nclass CachedValue(models.Model):\n id = models.UUIDField(\n primary_key=True,\n default=uuid.uuid4,\n editable=False\n )\n timestamp = models.DateTimeField(\n auto_now_add=True,\n editable=False\n )\n object_type = models.ForeignKey(\n to=ContentType,\n on_delete=models.CASCADE,\n related_name='+'\n )\n object_id = models.PositiveBigIntegerField()\n object = RestrictedGenericForeignKey(\n ct_field='object_type',\n fk_field='object_id'\n )\n field = models.CharField(\n max_length=200\n )\n type = models.CharField(\n max_length=30\n )\n value = models.TextField()\n weight = models.PositiveSmallIntegerField(\n default=1000\n )\n\n class Meta:\n ordering = ('weight', 'object_type', 'object_id')\n\n def __str__(self):\n return f'{self.object_type} {self.object_id}: {self.field}={self.value}'\n", "path": "netbox/extras/models/search.py"}]} | 3,591 | 325 |
gh_patches_debug_15180 | rasdani/github-patches | git_diff | pre-commit__pre-commit-38 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consider using --no-checkout for cloning
I'd assume it is faster...
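A minimal sketch of what that could look like with plumbum (the library the repository code below already uses); `clone_at_sha` is just an illustrative stand-in for the existing `Repository.create()` logic:
```python
# Sketch only: assumes plumbum is installed and `git` is on the PATH.
from plumbum import local


def clone_at_sha(repo_url, sha):
    # --no-checkout clones the object database without writing the default
    # branch's working tree, so the only checkout performed is the one for
    # the sha that is actually needed.
    local['git']['clone', '--no-checkout', repo_url, sha]()
    with local.cwd(sha):
        local['git']['checkout', sha]()
```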
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/repository.py`
Content:
```
1
2 import contextlib
3 from plumbum import local
4
5 import pre_commit.constants as C
6 from pre_commit.clientlib.validate_manifest import validate_manifest
7 from pre_commit.hooks_workspace import in_hooks_workspace
8 from pre_commit.languages.all import languages
9 from pre_commit.util import cached_property
10
11
12 class Repository(object):
13 def __init__(self, repo_config):
14 self.repo_config = repo_config
15
16 @cached_property
17 def repo_url(self):
18 return self.repo_config['repo']
19
20 @cached_property
21 def sha(self):
22 return self.repo_config['sha']
23
24 @cached_property
25 def languages(self):
26 return set(filter(None, (
27 hook.get('language') for hook in self.hooks.values()
28 )))
29
30 @cached_property
31 def hooks(self):
32 return dict(
33 (hook['id'], dict(hook, **self.manifest[hook['id']]))
34 for hook in self.repo_config['hooks']
35 )
36
37 @cached_property
38 def manifest(self):
39 with self.in_checkout():
40 return dict(
41 (hook['id'], hook)
42 for hook in validate_manifest(C.MANIFEST_FILE)
43 )
44
45 @contextlib.contextmanager
46 def in_checkout(self):
47 with in_hooks_workspace():
48 # SMELL:
49 self.create()
50 with local.cwd(self.sha):
51 yield
52
53 def create(self):
54 with in_hooks_workspace():
55 if local.path(self.sha).exists():
56 # Project already exists, no reason to re-create it
57 return
58
59 local['git']['clone', self.repo_url, self.sha]()
60 with self.in_checkout():
61 local['git']['checkout', self.sha]()
62
63 def install(self):
64 with self.in_checkout():
65 for language in C.SUPPORTED_LANGUAGES:
66 if language in self.languages:
67 languages[language].install_environment()
68
69 def run_hook(self, hook_id, file_args):
70 with self.in_checkout():
71 hook = self.hooks[hook_id]
72 return languages[hook['language']].run_hook(hook, file_args)
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/repository.py b/pre_commit/repository.py
--- a/pre_commit/repository.py
+++ b/pre_commit/repository.py
@@ -56,7 +56,7 @@
# Project already exists, no reason to re-create it
return
- local['git']['clone', self.repo_url, self.sha]()
+ local['git']['clone', '--no-checkout', self.repo_url, self.sha]()
with self.in_checkout():
local['git']['checkout', self.sha]()
@@ -69,4 +69,4 @@
def run_hook(self, hook_id, file_args):
with self.in_checkout():
hook = self.hooks[hook_id]
- return languages[hook['language']].run_hook(hook, file_args)
\ No newline at end of file
+ return languages[hook['language']].run_hook(hook, file_args)
| {"golden_diff": "diff --git a/pre_commit/repository.py b/pre_commit/repository.py\n--- a/pre_commit/repository.py\n+++ b/pre_commit/repository.py\n@@ -56,7 +56,7 @@\n # Project already exists, no reason to re-create it\n return\n \n- local['git']['clone', self.repo_url, self.sha]()\n+ local['git']['clone', '--no-checkout', self.repo_url, self.sha]()\n with self.in_checkout():\n local['git']['checkout', self.sha]()\n \n@@ -69,4 +69,4 @@\n def run_hook(self, hook_id, file_args):\n with self.in_checkout():\n hook = self.hooks[hook_id]\n- return languages[hook['language']].run_hook(hook, file_args)\n\\ No newline at end of file\n+ return languages[hook['language']].run_hook(hook, file_args)\n", "issue": "Consider using --no-checkout for cloning\nI'd assume it is faster...\n\n", "before_files": [{"content": "\nimport contextlib\nfrom plumbum import local\n\nimport pre_commit.constants as C\nfrom pre_commit.clientlib.validate_manifest import validate_manifest\nfrom pre_commit.hooks_workspace import in_hooks_workspace\nfrom pre_commit.languages.all import languages\nfrom pre_commit.util import cached_property\n\n\nclass Repository(object):\n def __init__(self, repo_config):\n self.repo_config = repo_config\n\n @cached_property\n def repo_url(self):\n return self.repo_config['repo']\n\n @cached_property\n def sha(self):\n return self.repo_config['sha']\n\n @cached_property\n def languages(self):\n return set(filter(None, (\n hook.get('language') for hook in self.hooks.values()\n )))\n\n @cached_property\n def hooks(self):\n return dict(\n (hook['id'], dict(hook, **self.manifest[hook['id']]))\n for hook in self.repo_config['hooks']\n )\n\n @cached_property\n def manifest(self):\n with self.in_checkout():\n return dict(\n (hook['id'], hook)\n for hook in validate_manifest(C.MANIFEST_FILE)\n )\n\n @contextlib.contextmanager\n def in_checkout(self):\n with in_hooks_workspace():\n # SMELL:\n self.create()\n with local.cwd(self.sha):\n yield\n\n def create(self):\n with in_hooks_workspace():\n if local.path(self.sha).exists():\n # Project already exists, no reason to re-create it\n return\n\n local['git']['clone', self.repo_url, self.sha]()\n with self.in_checkout():\n local['git']['checkout', self.sha]()\n\n def install(self):\n with self.in_checkout():\n for language in C.SUPPORTED_LANGUAGES:\n if language in self.languages:\n languages[language].install_environment()\n\n def run_hook(self, hook_id, file_args):\n with self.in_checkout():\n hook = self.hooks[hook_id]\n return languages[hook['language']].run_hook(hook, file_args)", "path": "pre_commit/repository.py"}], "after_files": [{"content": "\nimport contextlib\nfrom plumbum import local\n\nimport pre_commit.constants as C\nfrom pre_commit.clientlib.validate_manifest import validate_manifest\nfrom pre_commit.hooks_workspace import in_hooks_workspace\nfrom pre_commit.languages.all import languages\nfrom pre_commit.util import cached_property\n\n\nclass Repository(object):\n def __init__(self, repo_config):\n self.repo_config = repo_config\n\n @cached_property\n def repo_url(self):\n return self.repo_config['repo']\n\n @cached_property\n def sha(self):\n return self.repo_config['sha']\n\n @cached_property\n def languages(self):\n return set(filter(None, (\n hook.get('language') for hook in self.hooks.values()\n )))\n\n @cached_property\n def hooks(self):\n return dict(\n (hook['id'], dict(hook, **self.manifest[hook['id']]))\n for hook in self.repo_config['hooks']\n )\n\n @cached_property\n def manifest(self):\n with self.in_checkout():\n return 
dict(\n (hook['id'], hook)\n for hook in validate_manifest(C.MANIFEST_FILE)\n )\n\n @contextlib.contextmanager\n def in_checkout(self):\n with in_hooks_workspace():\n # SMELL:\n self.create()\n with local.cwd(self.sha):\n yield\n\n def create(self):\n with in_hooks_workspace():\n if local.path(self.sha).exists():\n # Project already exists, no reason to re-create it\n return\n\n local['git']['clone', '--no-checkout', self.repo_url, self.sha]()\n with self.in_checkout():\n local['git']['checkout', self.sha]()\n\n def install(self):\n with self.in_checkout():\n for language in C.SUPPORTED_LANGUAGES:\n if language in self.languages:\n languages[language].install_environment()\n\n def run_hook(self, hook_id, file_args):\n with self.in_checkout():\n hook = self.hooks[hook_id]\n return languages[hook['language']].run_hook(hook, file_args)\n", "path": "pre_commit/repository.py"}]} | 844 | 190 |
gh_patches_debug_12470 | rasdani/github-patches | git_diff | joke2k__faker-759 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Generating invalid CPF (Brazilian SSN)
Faker is generating invalid checksum digits for CPF (Brazilian SSN).
### Steps to reproduce
1. Create fake instance using localization "pt_BR"
1. Call fake.cpf()
### Expected behavior
It should generate a valid CPF.
### Actual behavior
It is generating a CPF with invalid checksum digits, in some cases.
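A small self-contained check against the standard CPF modulo-11 rule (weights 10..2 for the first check digit, 11..2 for the second; remainders 0 and 1 map to a check digit of 0) makes the problem visible; it assumes Faker is installed with the pt_BR locale available:
```python
from faker import Faker


def check_digit(digits):
    # Weighted sum mod 11; per the CPF rule, remainders 0 and 1 map to 0.
    weights = range(len(digits) + 1, 1, -1)
    remainder = sum(d * w for d, w in zip(digits, weights)) % 11
    return 0 if remainder < 2 else 11 - remainder


def is_valid_cpf(cpf):
    numbers = [int(c) for c in cpf if c.isdigit()]
    return (len(numbers) == 11
            and check_digit(numbers[:9]) == numbers[9]
            and check_digit(numbers[:10]) == numbers[10])


fake = Faker('pt_BR')
samples = [fake.cpf() for _ in range(1000)]
print('{} of {} fail the check'.format(
    sum(not is_valid_cpf(c) for c in samples), len(samples)))
```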
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `faker/providers/ssn/pt_BR/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from __future__ import unicode_literals
4 from .. import Provider as SsnProvider
5
6
7 def checksum(digits):
8 s = 0
9 p = len(digits) + 1
10 for i in range(0, len(digits)):
11 s += digits[i] * p
12 p -= 1
13
14 reminder = s % 11
15 if reminder == 0 or reminder == 1:
16 return 1
17 else:
18 return 11 - reminder
19
20
21 class Provider(SsnProvider):
22 """
23 Provider for Brazilian SSN also known in Brazil as CPF.
24 There are two methods Provider.ssn and Provider.cpf
25 The snn returns a valid number with numbers only
26 The cpf return a valid number formatted with brazilian mask. eg nnn.nnn.nnn-nn
27 """
28
29 def ssn(self):
30 digits = self.generator.random.sample(range(10), 9)
31
32 dv = checksum(digits)
33 digits.append(dv)
34 digits.append(checksum(digits))
35
36 return ''.join(map(str, digits))
37
38 def cpf(self):
39 c = self.ssn()
40 return c[:3] + '.' + c[3:6] + '.' + c[6:9] + '-' + c[9:]
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/faker/providers/ssn/pt_BR/__init__.py b/faker/providers/ssn/pt_BR/__init__.py
--- a/faker/providers/ssn/pt_BR/__init__.py
+++ b/faker/providers/ssn/pt_BR/__init__.py
@@ -5,6 +5,12 @@
def checksum(digits):
+ """
+ Returns the checksum of CPF digits.
+ References to the algorithm:
+ https://pt.wikipedia.org/wiki/Cadastro_de_pessoas_f%C3%ADsicas#Algoritmo
+ https://metacpan.org/source/MAMAWE/Algorithm-CheckDigits-v1.3.0/lib/Algorithm/CheckDigits/M11_004.pm
+ """
s = 0
p = len(digits) + 1
for i in range(0, len(digits)):
@@ -13,7 +19,7 @@
reminder = s % 11
if reminder == 0 or reminder == 1:
- return 1
+ return 0
else:
return 11 - reminder
| {"golden_diff": "diff --git a/faker/providers/ssn/pt_BR/__init__.py b/faker/providers/ssn/pt_BR/__init__.py\n--- a/faker/providers/ssn/pt_BR/__init__.py\n+++ b/faker/providers/ssn/pt_BR/__init__.py\n@@ -5,6 +5,12 @@\n \n \n def checksum(digits):\n+ \"\"\"\n+ Returns the checksum of CPF digits.\n+ References to the algorithm:\n+ https://pt.wikipedia.org/wiki/Cadastro_de_pessoas_f%C3%ADsicas#Algoritmo\n+ https://metacpan.org/source/MAMAWE/Algorithm-CheckDigits-v1.3.0/lib/Algorithm/CheckDigits/M11_004.pm\n+ \"\"\"\n s = 0\n p = len(digits) + 1\n for i in range(0, len(digits)):\n@@ -13,7 +19,7 @@\n \n reminder = s % 11\n if reminder == 0 or reminder == 1:\n- return 1\n+ return 0\n else:\n return 11 - reminder\n", "issue": "Generating invalid cpf (brazillian ssn)\nFaker is generating invalid checksum digits for cpf (brazillian ssn).\r\n\r\n### Steps to reproduce\r\n\r\n1. Create fake instance using localization \"pt_BR\"\r\n1. Call fake.cpf()\r\n\r\n### Expected behavior\r\n\r\nIt should generate a valid CPF.\r\n\r\n### Actual behavior\r\n\r\nIt is generating a CPF with invalid checksum digits, in some cases.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom __future__ import unicode_literals\nfrom .. import Provider as SsnProvider\n\n\ndef checksum(digits):\n s = 0\n p = len(digits) + 1\n for i in range(0, len(digits)):\n s += digits[i] * p\n p -= 1\n\n reminder = s % 11\n if reminder == 0 or reminder == 1:\n return 1\n else:\n return 11 - reminder\n\n\nclass Provider(SsnProvider):\n \"\"\"\n Provider for Brazilian SSN also known in Brazil as CPF.\n There are two methods Provider.ssn and Provider.cpf\n The snn returns a valid number with numbers only\n The cpf return a valid number formatted with brazilian mask. eg nnn.nnn.nnn-nn\n \"\"\"\n\n def ssn(self):\n digits = self.generator.random.sample(range(10), 9)\n\n dv = checksum(digits)\n digits.append(dv)\n digits.append(checksum(digits))\n\n return ''.join(map(str, digits))\n\n def cpf(self):\n c = self.ssn()\n return c[:3] + '.' + c[3:6] + '.' + c[6:9] + '-' + c[9:]\n", "path": "faker/providers/ssn/pt_BR/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom __future__ import unicode_literals\nfrom .. import Provider as SsnProvider\n\n\ndef checksum(digits):\n \"\"\"\n Returns the checksum of CPF digits.\n References to the algorithm:\n https://pt.wikipedia.org/wiki/Cadastro_de_pessoas_f%C3%ADsicas#Algoritmo\n https://metacpan.org/source/MAMAWE/Algorithm-CheckDigits-v1.3.0/lib/Algorithm/CheckDigits/M11_004.pm\n \"\"\"\n s = 0\n p = len(digits) + 1\n for i in range(0, len(digits)):\n s += digits[i] * p\n p -= 1\n\n reminder = s % 11\n if reminder == 0 or reminder == 1:\n return 0\n else:\n return 11 - reminder\n\n\nclass Provider(SsnProvider):\n \"\"\"\n Provider for Brazilian SSN also known in Brazil as CPF.\n There are two methods Provider.ssn and Provider.cpf\n The snn returns a valid number with numbers only\n The cpf return a valid number formatted with brazilian mask. eg nnn.nnn.nnn-nn\n \"\"\"\n\n def ssn(self):\n digits = self.generator.random.sample(range(10), 9)\n\n dv = checksum(digits)\n digits.append(dv)\n digits.append(checksum(digits))\n\n return ''.join(map(str, digits))\n\n def cpf(self):\n c = self.ssn()\n return c[:3] + '.' + c[3:6] + '.' + c[6:9] + '-' + c[9:]\n", "path": "faker/providers/ssn/pt_BR/__init__.py"}]} | 701 | 246 |
gh_patches_debug_7971 | rasdani/github-patches | git_diff | ipython__ipython-4563 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
nbconvert: Default encoding problem on OS X
Greetings.
I am using IPython 1.1.0 via MacPorts on OSX 10.7.5. The following problem is reproducible on the master git branch (IPython 2.0.0-dev).
On any call to nbconvert, I get the following failure:
```
[NbConvertApp] Using existing profile dir: u'/Users/USERNAME_REDACTED/.ipython/profile_default'
[NbConvertApp] Converting notebook ticks.ipynb to html
[NbConvertApp] Support files will be in ticks_files/
Traceback (most recent call last):
File "/opt/local/bin/ipython", line 6, in <module>
start_ipython()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/__init__.py", line 118, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/config/application.py", line 545, in launch_instance
app.start()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/terminal/ipapp.py", line 358, in start
return self.subapp.start()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/nbconvert/nbconvertapp.py", line 267, in start
self.convert_notebooks()
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/nbconvert/nbconvertapp.py", line 300, in convert_notebooks
output, resources = exporter.from_filename(notebook_filename, resources=resources)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/nbconvert/exporters/exporter.py", line 288, in from_filename
with io.open(filename) as f:
LookupError: unknown encoding:
If you suspect this is an IPython bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at [email protected]
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
c.Application.verbose_crash=True
```
This is an easy fix: I change the troublesome line such that it reads,
```
with io.open(filename, encoding='ascii') as f:
```
However, this is ad hoc and likely a suboptimal solution. I wanted to bring this to the developers' attention and inquire about a proper solution. Thanks!
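For illustration only (not necessarily the change the project ended up making), the read can be made independent of the locale's preferred encoding, which the system info below reports as US-ASCII, by passing the encoding explicitly; `read_notebook` is a hypothetical helper that mirrors the imports already used in `exporter.py`:
```python
# Sketch of a locale-independent read; notebook files are JSON written as UTF-8.
import io

from IPython.nbformat import current as nbformat


def read_notebook(filename):
    # Avoid relying on locale.getpreferredencoding(), which resolves to an
    # empty/US-ASCII codec here and triggers "LookupError: unknown encoding:".
    with io.open(filename, encoding='utf-8') as f:
        return nbformat.read(f, 'json')
```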
System info:
```
python -c "import IPython; print(IPython.sys_info())"
{'codename': 'An Afternoon Hack',
'commit_hash': '7c2ea3a',
'commit_source': 'installation',
'default_encoding': 'US-ASCII',
'ipython_path': '/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython',
'ipython_version': '1.1.0',
'os_name': 'posix',
'platform': 'Darwin-11.4.2-x86_64-i386-64bit',
'sys_executable': '/opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python',
'sys_platform': 'darwin',
'sys_version': '2.7.6 (default, Nov 19 2013, 16:37:14) \n[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)]'}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/nbconvert/exporters/exporter.py`
Content:
```
1 """This module defines Exporter, a highly configurable converter
2 that uses Jinja2 to export notebook files into different formats.
3 """
4
5 #-----------------------------------------------------------------------------
6 # Copyright (c) 2013, the IPython Development Team.
7 #
8 # Distributed under the terms of the Modified BSD License.
9 #
10 # The full license is in the file COPYING.txt, distributed with this software.
11 #-----------------------------------------------------------------------------
12
13 #-----------------------------------------------------------------------------
14 # Imports
15 #-----------------------------------------------------------------------------
16
17 from __future__ import print_function, absolute_import
18
19 # Stdlib imports
20 import io
21 import os
22 import copy
23 import collections
24 import datetime
25
26
27 # IPython imports
28 from IPython.config.configurable import LoggingConfigurable
29 from IPython.config import Config
30 from IPython.nbformat import current as nbformat
31 from IPython.utils.traitlets import MetaHasTraits, Unicode, List
32 from IPython.utils.importstring import import_item
33 from IPython.utils import text, py3compat
34
35 #-----------------------------------------------------------------------------
36 # Class
37 #-----------------------------------------------------------------------------
38
39 class ResourcesDict(collections.defaultdict):
40 def __missing__(self, key):
41 return ''
42
43
44 class Exporter(LoggingConfigurable):
45 """
46 Class containing methods that sequentially run a list of preprocessors on a
47 NotebookNode object and then return the modified NotebookNode object and
48 accompanying resources dict.
49 """
50
51 file_extension = Unicode(
52 'txt', config=True,
53 help="Extension of the file that should be written to disk"
54 )
55
56 #Configurability, allows the user to easily add filters and preprocessors.
57 preprocessors = List(config=True,
58 help="""List of preprocessors, by name or namespace, to enable.""")
59
60 _preprocessors = None
61
62 default_preprocessors = List(['IPython.nbconvert.preprocessors.coalesce_streams',
63 'IPython.nbconvert.preprocessors.SVG2PDFPreprocessor',
64 'IPython.nbconvert.preprocessors.ExtractOutputPreprocessor',
65 'IPython.nbconvert.preprocessors.CSSHTMLHeaderPreprocessor',
66 'IPython.nbconvert.preprocessors.RevealHelpPreprocessor',
67 'IPython.nbconvert.preprocessors.LatexPreprocessor',
68 'IPython.nbconvert.preprocessors.HighlightMagicsPreprocessor'],
69 config=True,
70 help="""List of preprocessors available by default, by name, namespace,
71 instance, or type.""")
72
73
74 def __init__(self, config=None, **kw):
75 """
76 Public constructor
77
78 Parameters
79 ----------
80 config : config
81 User configuration instance.
82 """
83 with_default_config = self.default_config
84 if config:
85 with_default_config.merge(config)
86
87 super(Exporter, self).__init__(config=with_default_config, **kw)
88
89 self._init_preprocessors()
90
91
92 @property
93 def default_config(self):
94 return Config()
95
96 @nbformat.docstring_nbformat_mod
97 def from_notebook_node(self, nb, resources=None, **kw):
98 """
99 Convert a notebook from a notebook node instance.
100
101 Parameters
102 ----------
103 nb : :class:`~{nbformat_mod}.nbbase.NotebookNode`
104 Notebook node
105 resources : dict
106 Additional resources that can be accessed read/write by
107 preprocessors and filters.
108 **kw
109 Ignored (?)
110 """
111 nb_copy = copy.deepcopy(nb)
112 resources = self._init_resources(resources)
113
114 # Preprocess
115 nb_copy, resources = self._preprocess(nb_copy, resources)
116
117 return nb_copy, resources
118
119
120 def from_filename(self, filename, resources=None, **kw):
121 """
122 Convert a notebook from a notebook file.
123
124 Parameters
125 ----------
126 filename : str
127 Full filename of the notebook file to open and convert.
128 """
129
130 # Pull the metadata from the filesystem.
131 if resources is None:
132 resources = ResourcesDict()
133 if not 'metadata' in resources or resources['metadata'] == '':
134 resources['metadata'] = ResourcesDict()
135 basename = os.path.basename(filename)
136 notebook_name = basename[:basename.rfind('.')]
137 resources['metadata']['name'] = notebook_name
138
139 modified_date = datetime.datetime.fromtimestamp(os.path.getmtime(filename))
140 resources['metadata']['modified_date'] = modified_date.strftime(text.date_format)
141
142 with io.open(filename) as f:
143 return self.from_notebook_node(nbformat.read(f, 'json'), resources=resources, **kw)
144
145
146 def from_file(self, file_stream, resources=None, **kw):
147 """
148 Convert a notebook from a notebook file.
149
150 Parameters
151 ----------
152 file_stream : file-like object
153 Notebook file-like object to convert.
154 """
155 return self.from_notebook_node(nbformat.read(file_stream, 'json'), resources=resources, **kw)
156
157
158 def register_preprocessor(self, preprocessor, enabled=False):
159 """
160 Register a preprocessor.
161 Preprocessors are classes that act upon the notebook before it is
162 passed into the Jinja templating engine. preprocessors are also
163 capable of passing additional information to the Jinja
164 templating engine.
165
166 Parameters
167 ----------
168 preprocessor : preprocessor
169 """
170 if preprocessor is None:
171 raise TypeError('preprocessor')
172 isclass = isinstance(preprocessor, type)
173 constructed = not isclass
174
175 # Handle preprocessor's registration based on it's type
176 if constructed and isinstance(preprocessor, py3compat.string_types):
177 # Preprocessor is a string, import the namespace and recursively call
178 # this register_preprocessor method
179 preprocessor_cls = import_item(preprocessor)
180 return self.register_preprocessor(preprocessor_cls, enabled)
181
182 if constructed and hasattr(preprocessor, '__call__'):
183 # Preprocessor is a function, no need to construct it.
184 # Register and return the preprocessor.
185 if enabled:
186 preprocessor.enabled = True
187 self._preprocessors.append(preprocessor)
188 return preprocessor
189
190 elif isclass and isinstance(preprocessor, MetaHasTraits):
191 # Preprocessor is configurable. Make sure to pass in new default for
192 # the enabled flag if one was specified.
193 self.register_preprocessor(preprocessor(parent=self), enabled)
194
195 elif isclass:
196 # Preprocessor is not configurable, construct it
197 self.register_preprocessor(preprocessor(), enabled)
198
199 else:
200 # Preprocessor is an instance of something without a __call__
201 # attribute.
202 raise TypeError('preprocessor')
203
204
205 def _init_preprocessors(self):
206 """
207 Register all of the preprocessors needed for this exporter, disabled
208 unless specified explicitly.
209 """
210 if self._preprocessors is None:
211 self._preprocessors = []
212
213 #Load default preprocessors (not necessarly enabled by default).
214 if self.default_preprocessors:
215 for preprocessor in self.default_preprocessors:
216 self.register_preprocessor(preprocessor)
217
218 #Load user preprocessors. Enable by default.
219 if self.preprocessors:
220 for preprocessor in self.preprocessors:
221 self.register_preprocessor(preprocessor, enabled=True)
222
223
224 def _init_resources(self, resources):
225
226 #Make sure the resources dict is of ResourcesDict type.
227 if resources is None:
228 resources = ResourcesDict()
229 if not isinstance(resources, ResourcesDict):
230 new_resources = ResourcesDict()
231 new_resources.update(resources)
232 resources = new_resources
233
234 #Make sure the metadata extension exists in resources
235 if 'metadata' in resources:
236 if not isinstance(resources['metadata'], ResourcesDict):
237 resources['metadata'] = ResourcesDict(resources['metadata'])
238 else:
239 resources['metadata'] = ResourcesDict()
240 if not resources['metadata']['name']:
241 resources['metadata']['name'] = 'Notebook'
242
243 #Set the output extension
244 resources['output_extension'] = self.file_extension
245 return resources
246
247
248 def _preprocess(self, nb, resources):
249 """
250 Preprocess the notebook before passing it into the Jinja engine.
251 To preprocess the notebook is to apply all of the
252
253 Parameters
254 ----------
255 nb : notebook node
256 notebook that is being exported.
257 resources : a dict of additional resources that
258 can be accessed read/write by preprocessors
259 """
260
261 # Do a copy.deepcopy first,
262 # we are never safe enough with what the preprocessors could do.
263 nbc = copy.deepcopy(nb)
264 resc = copy.deepcopy(resources)
265
266 #Run each preprocessor on the notebook. Carry the output along
267 #to each preprocessor
268 for preprocessor in self._preprocessors:
269 nbc, resc = preprocessor(nbc, resc)
270 return nbc, resc
271
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/IPython/nbconvert/exporters/exporter.py b/IPython/nbconvert/exporters/exporter.py
--- a/IPython/nbconvert/exporters/exporter.py
+++ b/IPython/nbconvert/exporters/exporter.py
@@ -139,7 +139,7 @@
modified_date = datetime.datetime.fromtimestamp(os.path.getmtime(filename))
resources['metadata']['modified_date'] = modified_date.strftime(text.date_format)
- with io.open(filename) as f:
+ with io.open(filename, encoding='utf-8') as f:
return self.from_notebook_node(nbformat.read(f, 'json'), resources=resources, **kw)
| {"golden_diff": "diff --git a/IPython/nbconvert/exporters/exporter.py b/IPython/nbconvert/exporters/exporter.py\n--- a/IPython/nbconvert/exporters/exporter.py\n+++ b/IPython/nbconvert/exporters/exporter.py\n@@ -139,7 +139,7 @@\n modified_date = datetime.datetime.fromtimestamp(os.path.getmtime(filename))\n resources['metadata']['modified_date'] = modified_date.strftime(text.date_format)\n \n- with io.open(filename) as f:\n+ with io.open(filename, encoding='utf-8') as f:\n return self.from_notebook_node(nbformat.read(f, 'json'), resources=resources, **kw)\n", "issue": "nbconvert: Default encoding problem on OS X\nGreetings.\n\nI am using IPython 1.1.0 via MacPorts on OSX 10.7.5. The following problem is reproducible on the master git branch (IPython 2.0.0-dev).\n\nOn any call to nbconvert, I get the following failure:\n\n```\n[NbConvertApp] Using existing profile dir: u'/Users/USERNAME_REDACTED/.ipython/profile_default'\n[NbConvertApp] Converting notebook ticks.ipynb to html\n[NbConvertApp] Support files will be in ticks_files/\nTraceback (most recent call last):\n File \"/opt/local/bin/ipython\", line 6, in <module>\n start_ipython()\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/__init__.py\", line 118, in start_ipython\n return launch_new_instance(argv=argv, **kwargs)\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/config/application.py\", line 545, in launch_instance\n app.start()\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/terminal/ipapp.py\", line 358, in start\n return self.subapp.start()\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/nbconvert/nbconvertapp.py\", line 267, in start\n self.convert_notebooks()\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/nbconvert/nbconvertapp.py\", line 300, in convert_notebooks\n output, resources = exporter.from_filename(notebook_filename, resources=resources)\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/nbconvert/exporters/exporter.py\", line 288, in from_filename\n with io.open(filename) as f:\nLookupError: unknown encoding: \n\nIf you suspect this is an IPython bug, please report it at:\n https://github.com/ipython/ipython/issues\nor send an email to the mailing list at [email protected]\n\nYou can print a more detailed traceback right now with \"%tb\", or use \"%debug\"\nto interactively debug it.\n\nExtra-detailed tracebacks for bug-reporting purposes can be enabled via:\n c.Application.verbose_crash=True\n```\n\nThis is an easy fix: I change the troublesome line such that it reads,\n\n```\nwith io.open(filename, encoding='ascii') as f:\n```\n\nHowever, this ad hoc and likely a suboptimal solution. I wanted to bring this to the developers' attention and inquire about a proper solution. 
Thanks!\n\nSystem info:\n\n```\npython -c \"import IPython; print(IPython.sys_info())\"\n{'codename': 'An Afternoon Hack',\n 'commit_hash': '7c2ea3a',\n 'commit_source': 'installation',\n 'default_encoding': 'US-ASCII',\n 'ipython_path': '/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython',\n 'ipython_version': '1.1.0',\n 'os_name': 'posix',\n 'platform': 'Darwin-11.4.2-x86_64-i386-64bit',\n 'sys_executable': '/opt/local/Library/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python',\n 'sys_platform': 'darwin',\n 'sys_version': '2.7.6 (default, Nov 19 2013, 16:37:14) \\n[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00)]'}\n```\n\n", "before_files": [{"content": "\"\"\"This module defines Exporter, a highly configurable converter\nthat uses Jinja2 to export notebook files into different formats.\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (c) 2013, the IPython Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\nfrom __future__ import print_function, absolute_import\n\n# Stdlib imports\nimport io\nimport os\nimport copy\nimport collections\nimport datetime\n\n\n# IPython imports\nfrom IPython.config.configurable import LoggingConfigurable\nfrom IPython.config import Config\nfrom IPython.nbformat import current as nbformat\nfrom IPython.utils.traitlets import MetaHasTraits, Unicode, List\nfrom IPython.utils.importstring import import_item\nfrom IPython.utils import text, py3compat\n\n#-----------------------------------------------------------------------------\n# Class\n#-----------------------------------------------------------------------------\n\nclass ResourcesDict(collections.defaultdict):\n def __missing__(self, key):\n return ''\n\n\nclass Exporter(LoggingConfigurable):\n \"\"\"\n Class containing methods that sequentially run a list of preprocessors on a \n NotebookNode object and then return the modified NotebookNode object and \n accompanying resources dict.\n \"\"\"\n\n file_extension = Unicode(\n 'txt', config=True,\n help=\"Extension of the file that should be written to disk\"\n )\n\n #Configurability, allows the user to easily add filters and preprocessors.\n preprocessors = List(config=True,\n help=\"\"\"List of preprocessors, by name or namespace, to enable.\"\"\")\n\n _preprocessors = None\n\n default_preprocessors = List(['IPython.nbconvert.preprocessors.coalesce_streams',\n 'IPython.nbconvert.preprocessors.SVG2PDFPreprocessor',\n 'IPython.nbconvert.preprocessors.ExtractOutputPreprocessor',\n 'IPython.nbconvert.preprocessors.CSSHTMLHeaderPreprocessor',\n 'IPython.nbconvert.preprocessors.RevealHelpPreprocessor',\n 'IPython.nbconvert.preprocessors.LatexPreprocessor',\n 'IPython.nbconvert.preprocessors.HighlightMagicsPreprocessor'],\n config=True,\n help=\"\"\"List of preprocessors available by default, by name, namespace, \n instance, or type.\"\"\")\n\n\n def __init__(self, config=None, **kw):\n \"\"\"\n Public constructor\n\n Parameters\n ----------\n config : config\n User configuration instance.\n \"\"\"\n with_default_config = self.default_config\n if 
config:\n with_default_config.merge(config)\n \n super(Exporter, self).__init__(config=with_default_config, **kw)\n\n self._init_preprocessors()\n\n\n @property\n def default_config(self):\n return Config()\n\n @nbformat.docstring_nbformat_mod\n def from_notebook_node(self, nb, resources=None, **kw):\n \"\"\"\n Convert a notebook from a notebook node instance.\n\n Parameters\n ----------\n nb : :class:`~{nbformat_mod}.nbbase.NotebookNode`\n Notebook node\n resources : dict\n Additional resources that can be accessed read/write by\n preprocessors and filters.\n **kw\n Ignored (?)\n \"\"\"\n nb_copy = copy.deepcopy(nb)\n resources = self._init_resources(resources)\n\n # Preprocess\n nb_copy, resources = self._preprocess(nb_copy, resources)\n\n return nb_copy, resources\n\n\n def from_filename(self, filename, resources=None, **kw):\n \"\"\"\n Convert a notebook from a notebook file.\n\n Parameters\n ----------\n filename : str\n Full filename of the notebook file to open and convert.\n \"\"\"\n\n # Pull the metadata from the filesystem.\n if resources is None:\n resources = ResourcesDict()\n if not 'metadata' in resources or resources['metadata'] == '':\n resources['metadata'] = ResourcesDict()\n basename = os.path.basename(filename)\n notebook_name = basename[:basename.rfind('.')]\n resources['metadata']['name'] = notebook_name\n\n modified_date = datetime.datetime.fromtimestamp(os.path.getmtime(filename))\n resources['metadata']['modified_date'] = modified_date.strftime(text.date_format)\n\n with io.open(filename) as f:\n return self.from_notebook_node(nbformat.read(f, 'json'), resources=resources, **kw)\n\n\n def from_file(self, file_stream, resources=None, **kw):\n \"\"\"\n Convert a notebook from a notebook file.\n\n Parameters\n ----------\n file_stream : file-like object\n Notebook file-like object to convert.\n \"\"\"\n return self.from_notebook_node(nbformat.read(file_stream, 'json'), resources=resources, **kw)\n\n\n def register_preprocessor(self, preprocessor, enabled=False):\n \"\"\"\n Register a preprocessor.\n Preprocessors are classes that act upon the notebook before it is\n passed into the Jinja templating engine. preprocessors are also\n capable of passing additional information to the Jinja\n templating engine.\n\n Parameters\n ----------\n preprocessor : preprocessor\n \"\"\"\n if preprocessor is None:\n raise TypeError('preprocessor')\n isclass = isinstance(preprocessor, type)\n constructed = not isclass\n\n # Handle preprocessor's registration based on it's type\n if constructed and isinstance(preprocessor, py3compat.string_types):\n # Preprocessor is a string, import the namespace and recursively call\n # this register_preprocessor method\n preprocessor_cls = import_item(preprocessor)\n return self.register_preprocessor(preprocessor_cls, enabled)\n\n if constructed and hasattr(preprocessor, '__call__'):\n # Preprocessor is a function, no need to construct it.\n # Register and return the preprocessor.\n if enabled:\n preprocessor.enabled = True\n self._preprocessors.append(preprocessor)\n return preprocessor\n\n elif isclass and isinstance(preprocessor, MetaHasTraits):\n # Preprocessor is configurable. Make sure to pass in new default for \n # the enabled flag if one was specified.\n self.register_preprocessor(preprocessor(parent=self), enabled)\n\n elif isclass:\n # Preprocessor is not configurable, construct it\n self.register_preprocessor(preprocessor(), enabled)\n\n else:\n # Preprocessor is an instance of something without a __call__ \n # attribute. 
\n raise TypeError('preprocessor')\n\n\n def _init_preprocessors(self):\n \"\"\"\n Register all of the preprocessors needed for this exporter, disabled\n unless specified explicitly.\n \"\"\"\n if self._preprocessors is None:\n self._preprocessors = []\n\n #Load default preprocessors (not necessarly enabled by default).\n if self.default_preprocessors:\n for preprocessor in self.default_preprocessors:\n self.register_preprocessor(preprocessor)\n\n #Load user preprocessors. Enable by default.\n if self.preprocessors:\n for preprocessor in self.preprocessors:\n self.register_preprocessor(preprocessor, enabled=True)\n\n\n def _init_resources(self, resources):\n\n #Make sure the resources dict is of ResourcesDict type.\n if resources is None:\n resources = ResourcesDict()\n if not isinstance(resources, ResourcesDict):\n new_resources = ResourcesDict()\n new_resources.update(resources)\n resources = new_resources\n\n #Make sure the metadata extension exists in resources\n if 'metadata' in resources:\n if not isinstance(resources['metadata'], ResourcesDict):\n resources['metadata'] = ResourcesDict(resources['metadata'])\n else:\n resources['metadata'] = ResourcesDict()\n if not resources['metadata']['name']:\n resources['metadata']['name'] = 'Notebook'\n\n #Set the output extension\n resources['output_extension'] = self.file_extension\n return resources\n\n\n def _preprocess(self, nb, resources):\n \"\"\"\n Preprocess the notebook before passing it into the Jinja engine.\n To preprocess the notebook is to apply all of the\n\n Parameters\n ----------\n nb : notebook node\n notebook that is being exported.\n resources : a dict of additional resources that\n can be accessed read/write by preprocessors\n \"\"\"\n\n # Do a copy.deepcopy first,\n # we are never safe enough with what the preprocessors could do.\n nbc = copy.deepcopy(nb)\n resc = copy.deepcopy(resources)\n\n #Run each preprocessor on the notebook. 
Carry the output along\n #to each preprocessor\n for preprocessor in self._preprocessors:\n nbc, resc = preprocessor(nbc, resc)\n return nbc, resc\n", "path": "IPython/nbconvert/exporters/exporter.py"}], "after_files": [{"content": "\"\"\"This module defines Exporter, a highly configurable converter\nthat uses Jinja2 to export notebook files into different formats.\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (c) 2013, the IPython Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\nfrom __future__ import print_function, absolute_import\n\n# Stdlib imports\nimport io\nimport os\nimport copy\nimport collections\nimport datetime\n\n\n# IPython imports\nfrom IPython.config.configurable import LoggingConfigurable\nfrom IPython.config import Config\nfrom IPython.nbformat import current as nbformat\nfrom IPython.utils.traitlets import MetaHasTraits, Unicode, List\nfrom IPython.utils.importstring import import_item\nfrom IPython.utils import text, py3compat\n\n#-----------------------------------------------------------------------------\n# Class\n#-----------------------------------------------------------------------------\n\nclass ResourcesDict(collections.defaultdict):\n def __missing__(self, key):\n return ''\n\n\nclass Exporter(LoggingConfigurable):\n \"\"\"\n Class containing methods that sequentially run a list of preprocessors on a \n NotebookNode object and then return the modified NotebookNode object and \n accompanying resources dict.\n \"\"\"\n\n file_extension = Unicode(\n 'txt', config=True,\n help=\"Extension of the file that should be written to disk\"\n )\n\n #Configurability, allows the user to easily add filters and preprocessors.\n preprocessors = List(config=True,\n help=\"\"\"List of preprocessors, by name or namespace, to enable.\"\"\")\n\n _preprocessors = None\n\n default_preprocessors = List(['IPython.nbconvert.preprocessors.coalesce_streams',\n 'IPython.nbconvert.preprocessors.SVG2PDFPreprocessor',\n 'IPython.nbconvert.preprocessors.ExtractOutputPreprocessor',\n 'IPython.nbconvert.preprocessors.CSSHTMLHeaderPreprocessor',\n 'IPython.nbconvert.preprocessors.RevealHelpPreprocessor',\n 'IPython.nbconvert.preprocessors.LatexPreprocessor',\n 'IPython.nbconvert.preprocessors.HighlightMagicsPreprocessor'],\n config=True,\n help=\"\"\"List of preprocessors available by default, by name, namespace, \n instance, or type.\"\"\")\n\n\n def __init__(self, config=None, **kw):\n \"\"\"\n Public constructor\n\n Parameters\n ----------\n config : config\n User configuration instance.\n \"\"\"\n with_default_config = self.default_config\n if config:\n with_default_config.merge(config)\n \n super(Exporter, self).__init__(config=with_default_config, **kw)\n\n self._init_preprocessors()\n\n\n @property\n def default_config(self):\n return Config()\n\n @nbformat.docstring_nbformat_mod\n def from_notebook_node(self, nb, resources=None, **kw):\n \"\"\"\n Convert a notebook from a notebook node instance.\n\n Parameters\n ----------\n nb : :class:`~{nbformat_mod}.nbbase.NotebookNode`\n Notebook node\n resources : dict\n Additional resources that can be accessed read/write by\n 
preprocessors and filters.\n **kw\n Ignored (?)\n \"\"\"\n nb_copy = copy.deepcopy(nb)\n resources = self._init_resources(resources)\n\n # Preprocess\n nb_copy, resources = self._preprocess(nb_copy, resources)\n\n return nb_copy, resources\n\n\n def from_filename(self, filename, resources=None, **kw):\n \"\"\"\n Convert a notebook from a notebook file.\n\n Parameters\n ----------\n filename : str\n Full filename of the notebook file to open and convert.\n \"\"\"\n\n # Pull the metadata from the filesystem.\n if resources is None:\n resources = ResourcesDict()\n if not 'metadata' in resources or resources['metadata'] == '':\n resources['metadata'] = ResourcesDict()\n basename = os.path.basename(filename)\n notebook_name = basename[:basename.rfind('.')]\n resources['metadata']['name'] = notebook_name\n\n modified_date = datetime.datetime.fromtimestamp(os.path.getmtime(filename))\n resources['metadata']['modified_date'] = modified_date.strftime(text.date_format)\n\n with io.open(filename, encoding='utf-8') as f:\n return self.from_notebook_node(nbformat.read(f, 'json'), resources=resources, **kw)\n\n\n def from_file(self, file_stream, resources=None, **kw):\n \"\"\"\n Convert a notebook from a notebook file.\n\n Parameters\n ----------\n file_stream : file-like object\n Notebook file-like object to convert.\n \"\"\"\n return self.from_notebook_node(nbformat.read(file_stream, 'json'), resources=resources, **kw)\n\n\n def register_preprocessor(self, preprocessor, enabled=False):\n \"\"\"\n Register a preprocessor.\n Preprocessors are classes that act upon the notebook before it is\n passed into the Jinja templating engine. preprocessors are also\n capable of passing additional information to the Jinja\n templating engine.\n\n Parameters\n ----------\n preprocessor : preprocessor\n \"\"\"\n if preprocessor is None:\n raise TypeError('preprocessor')\n isclass = isinstance(preprocessor, type)\n constructed = not isclass\n\n # Handle preprocessor's registration based on it's type\n if constructed and isinstance(preprocessor, py3compat.string_types):\n # Preprocessor is a string, import the namespace and recursively call\n # this register_preprocessor method\n preprocessor_cls = import_item(preprocessor)\n return self.register_preprocessor(preprocessor_cls, enabled)\n\n if constructed and hasattr(preprocessor, '__call__'):\n # Preprocessor is a function, no need to construct it.\n # Register and return the preprocessor.\n if enabled:\n preprocessor.enabled = True\n self._preprocessors.append(preprocessor)\n return preprocessor\n\n elif isclass and isinstance(preprocessor, MetaHasTraits):\n # Preprocessor is configurable. Make sure to pass in new default for \n # the enabled flag if one was specified.\n self.register_preprocessor(preprocessor(parent=self), enabled)\n\n elif isclass:\n # Preprocessor is not configurable, construct it\n self.register_preprocessor(preprocessor(), enabled)\n\n else:\n # Preprocessor is an instance of something without a __call__ \n # attribute. \n raise TypeError('preprocessor')\n\n\n def _init_preprocessors(self):\n \"\"\"\n Register all of the preprocessors needed for this exporter, disabled\n unless specified explicitly.\n \"\"\"\n if self._preprocessors is None:\n self._preprocessors = []\n\n #Load default preprocessors (not necessarly enabled by default).\n if self.default_preprocessors:\n for preprocessor in self.default_preprocessors:\n self.register_preprocessor(preprocessor)\n\n #Load user preprocessors. 
Enable by default.\n if self.preprocessors:\n for preprocessor in self.preprocessors:\n self.register_preprocessor(preprocessor, enabled=True)\n\n\n def _init_resources(self, resources):\n\n #Make sure the resources dict is of ResourcesDict type.\n if resources is None:\n resources = ResourcesDict()\n if not isinstance(resources, ResourcesDict):\n new_resources = ResourcesDict()\n new_resources.update(resources)\n resources = new_resources\n\n #Make sure the metadata extension exists in resources\n if 'metadata' in resources:\n if not isinstance(resources['metadata'], ResourcesDict):\n resources['metadata'] = ResourcesDict(resources['metadata'])\n else:\n resources['metadata'] = ResourcesDict()\n if not resources['metadata']['name']:\n resources['metadata']['name'] = 'Notebook'\n\n #Set the output extension\n resources['output_extension'] = self.file_extension\n return resources\n\n\n def _preprocess(self, nb, resources):\n \"\"\"\n Preprocess the notebook before passing it into the Jinja engine.\n To preprocess the notebook is to apply all of the\n\n Parameters\n ----------\n nb : notebook node\n notebook that is being exported.\n resources : a dict of additional resources that\n can be accessed read/write by preprocessors\n \"\"\"\n\n # Do a copy.deepcopy first,\n # we are never safe enough with what the preprocessors could do.\n nbc = copy.deepcopy(nb)\n resc = copy.deepcopy(resources)\n\n #Run each preprocessor on the notebook. Carry the output along\n #to each preprocessor\n for preprocessor in self._preprocessors:\n nbc, resc = preprocessor(nbc, resc)\n return nbc, resc\n", "path": "IPython/nbconvert/exporters/exporter.py"}]} | 3,684 | 145 |
gh_patches_debug_4256 | rasdani/github-patches | git_diff | ivy-llc__ivy-17092 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
solve
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ivy/functional/frontends/paddle/tensor/linalg.py`
Content:
```
1 # global
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
4 from ivy.functional.frontends.paddle import promote_types_of_paddle_inputs
5 from ivy.functional.frontends.paddle.func_wrapper import (
6 to_ivy_arrays_and_back,
7 )
8
9
10 @with_supported_dtypes(
11 {"2.4.2 and below": ("float32", "float64", "int32", "int64")}, "paddle"
12 )
13 @to_ivy_arrays_and_back
14 def cross(x, y, /, *, axis=9, name=None):
15 x, y = promote_types_of_paddle_inputs(x, y)
16 return ivy.cross(x, y, axis=axis)
17
18
19 # matmul
20 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
21 @to_ivy_arrays_and_back
22 def matmul(x, y, transpose_x=False, transpose_y=False, name=None):
23 x, y = promote_types_of_paddle_inputs(x, y)
24 return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)
25
26
27 # norm
28 @with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
29 @to_ivy_arrays_and_back
30 def norm(x, p="fro", axis=None, keepdim=False, name=None):
31 if axis is None and p is not None:
32 if p == "fro":
33 p = 2
34 ret = ivy.vector_norm(x.flatten(), ord=p, axis=-1)
35 if keepdim:
36 ret = ret.reshape([1] * len(x.shape))
37 if len(ret.shape) == 0:
38 return ivy.array([ret])
39 return ret
40
41 if isinstance(axis, tuple):
42 axis = list(axis)
43 if isinstance(axis, list) and len(axis) == 1:
44 axis = axis[0]
45
46 if isinstance(axis, int):
47 if p == "fro":
48 p = 2
49 if p in [0, 1, 2, ivy.inf, -ivy.inf]:
50 ret = ivy.vector_norm(x, ord=p, axis=axis, keepdims=keepdim)
51 elif isinstance(p, (int, float)):
52 ret = ivy.pow(
53 ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),
54 float(1.0 / p),
55 )
56
57 elif isinstance(axis, list) and len(axis) == 2:
58 if p == 0:
59 raise ValueError
60 elif p == 1:
61 ret = ivy.sum(ivy.abs(x), axis=axis, keepdims=keepdim)
62 elif p == 2 or p == "fro":
63 ret = ivy.matrix_norm(x, ord="fro", axis=axis, keepdims=keepdim)
64 elif p == ivy.inf:
65 ret = ivy.max(ivy.abs(x), axis=axis, keepdims=keepdim)
66 elif p == -ivy.inf:
67 ret = ivy.min(ivy.abs(x), axis=axis, keepdims=keepdim)
68 elif isinstance(p, (int, float)) and p > 0:
69 ret = ivy.pow(
70 ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),
71 float(1.0 / p),
72 )
73 else:
74 raise ValueError
75
76 else:
77 raise ValueError
78
79 if len(ret.shape) == 0:
80 ret = ivy.array(
81 [ret]
82 ) # this is done so as to match shape of output from paddle
83 return ret
84
85
86 # eig
87 @to_ivy_arrays_and_back
88 def eig(x, name=None):
89 return ivy.eig(x)
90
91
92 # eigvals
93 @to_ivy_arrays_and_back
94 def eigvals(x, name=None):
95 return ivy.eigvals(x)
96
97
98 # eigvalsh
99 @to_ivy_arrays_and_back
100 def eigvalsh(x, UPLO="L", name=None):
101 return ivy.eigvalsh(x, UPLO=UPLO)
102
103
104 # eigh
105 @to_ivy_arrays_and_back
106 def eigh(x, UPLO="L", name=None):
107 return ivy.eigh(x, UPLO=UPLO)
108
109
110 # pinv
111 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
112 @to_ivy_arrays_and_back
113 def pinv(x, rcond=1e-15, hermitian=False, name=None):
114 # TODO: Add hermitian functionality
115 return ivy.pinv(x, rtol=rcond)
116
117
118 # cholesky
119 @with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
120 @to_ivy_arrays_and_back
121 def cholesky(x, /, *, upper=False, name=None):
122 return ivy.cholesky(x, upper=upper)
123
124
125 # bmm
126 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
127 @to_ivy_arrays_and_back
128 def bmm(x, y, transpose_x=False, transpose_y=False, name=None):
129 if len(ivy.shape(x)) != 3 or len(ivy.shape(y)) != 3:
130 raise RuntimeError("input must be 3D matrices")
131 x, y = promote_types_of_paddle_inputs(x, y)
132 return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ivy/functional/frontends/paddle/tensor/linalg.py b/ivy/functional/frontends/paddle/tensor/linalg.py
--- a/ivy/functional/frontends/paddle/tensor/linalg.py
+++ b/ivy/functional/frontends/paddle/tensor/linalg.py
@@ -115,6 +115,13 @@
return ivy.pinv(x, rtol=rcond)
+# solve
+@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
+@to_ivy_arrays_and_back
+def solve(x1, x2, name=None):
+ return ivy.solve(x1, x2)
+
+
# cholesky
@with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
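As a quick illustration of the wrapper added above, a hedged usage sketch (backend choice and array values are arbitrary, and it assumes the patch has been applied):
```python
import ivy
from ivy.functional.frontends.paddle.tensor import linalg

ivy.set_backend("numpy")  # any installed backend should work

A = ivy.array([[3.0, 1.0], [1.0, 2.0]])
b = ivy.array([[9.0], [8.0]])

x = linalg.solve(A, b)  # forwards to ivy.solve, i.e. solves A @ x = b
print(x)
```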
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/linalg.py b/ivy/functional/frontends/paddle/tensor/linalg.py\n--- a/ivy/functional/frontends/paddle/tensor/linalg.py\n+++ b/ivy/functional/frontends/paddle/tensor/linalg.py\n@@ -115,6 +115,13 @@\n return ivy.pinv(x, rtol=rcond)\n \n \n+# solve\n+@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def solve(x1, x2, name=None):\n+ return ivy.solve(x1, x2)\n+\n+\n # cholesky\n @with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n @to_ivy_arrays_and_back\n", "issue": "solve\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle import promote_types_of_paddle_inputs\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef cross(x, y, /, *, axis=9, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.cross(x, y, axis=axis)\n\n\n# matmul\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef matmul(x, y, transpose_x=False, transpose_y=False, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n\n\n# norm\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef norm(x, p=\"fro\", axis=None, keepdim=False, name=None):\n if axis is None and p is not None:\n if p == \"fro\":\n p = 2\n ret = ivy.vector_norm(x.flatten(), ord=p, axis=-1)\n if keepdim:\n ret = ret.reshape([1] * len(x.shape))\n if len(ret.shape) == 0:\n return ivy.array([ret])\n return ret\n\n if isinstance(axis, tuple):\n axis = list(axis)\n if isinstance(axis, list) and len(axis) == 1:\n axis = axis[0]\n\n if isinstance(axis, int):\n if p == \"fro\":\n p = 2\n if p in [0, 1, 2, ivy.inf, -ivy.inf]:\n ret = ivy.vector_norm(x, ord=p, axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)):\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n\n elif isinstance(axis, list) and len(axis) == 2:\n if p == 0:\n raise ValueError\n elif p == 1:\n ret = ivy.sum(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == 2 or p == \"fro\":\n ret = ivy.matrix_norm(x, ord=\"fro\", axis=axis, keepdims=keepdim)\n elif p == ivy.inf:\n ret = ivy.max(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == -ivy.inf:\n ret = ivy.min(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)) and p > 0:\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n else:\n raise ValueError\n\n else:\n raise ValueError\n\n if len(ret.shape) == 0:\n ret = ivy.array(\n [ret]\n ) # this is done so as to match shape of output from paddle\n return ret\n\n\n# eig\n@to_ivy_arrays_and_back\ndef eig(x, name=None):\n return ivy.eig(x)\n\n\n# eigvals\n@to_ivy_arrays_and_back\ndef eigvals(x, name=None):\n return ivy.eigvals(x)\n\n\n# eigvalsh\n@to_ivy_arrays_and_back\ndef eigvalsh(x, UPLO=\"L\", name=None):\n return ivy.eigvalsh(x, UPLO=UPLO)\n\n\n# eigh\n@to_ivy_arrays_and_back\ndef eigh(x, UPLO=\"L\", name=None):\n return ivy.eigh(x, UPLO=UPLO)\n\n\n# 
pinv\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pinv(x, rcond=1e-15, hermitian=False, name=None):\n # TODO: Add hermitian functionality\n return ivy.pinv(x, rtol=rcond)\n\n\n# cholesky\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cholesky(x, /, *, upper=False, name=None):\n return ivy.cholesky(x, upper=upper)\n\n\n# bmm\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef bmm(x, y, transpose_x=False, transpose_y=False, name=None):\n if len(ivy.shape(x)) != 3 or len(ivy.shape(y)) != 3:\n raise RuntimeError(\"input must be 3D matrices\")\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n", "path": "ivy/functional/frontends/paddle/tensor/linalg.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle import promote_types_of_paddle_inputs\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.4.2 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\n)\n@to_ivy_arrays_and_back\ndef cross(x, y, /, *, axis=9, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.cross(x, y, axis=axis)\n\n\n# matmul\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef matmul(x, y, transpose_x=False, transpose_y=False, name=None):\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n\n\n# norm\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef norm(x, p=\"fro\", axis=None, keepdim=False, name=None):\n if axis is None and p is not None:\n if p == \"fro\":\n p = 2\n ret = ivy.vector_norm(x.flatten(), ord=p, axis=-1)\n if keepdim:\n ret = ret.reshape([1] * len(x.shape))\n if len(ret.shape) == 0:\n return ivy.array([ret])\n return ret\n\n if isinstance(axis, tuple):\n axis = list(axis)\n if isinstance(axis, list) and len(axis) == 1:\n axis = axis[0]\n\n if isinstance(axis, int):\n if p == \"fro\":\n p = 2\n if p in [0, 1, 2, ivy.inf, -ivy.inf]:\n ret = ivy.vector_norm(x, ord=p, axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)):\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n\n elif isinstance(axis, list) and len(axis) == 2:\n if p == 0:\n raise ValueError\n elif p == 1:\n ret = ivy.sum(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == 2 or p == \"fro\":\n ret = ivy.matrix_norm(x, ord=\"fro\", axis=axis, keepdims=keepdim)\n elif p == ivy.inf:\n ret = ivy.max(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif p == -ivy.inf:\n ret = ivy.min(ivy.abs(x), axis=axis, keepdims=keepdim)\n elif isinstance(p, (int, float)) and p > 0:\n ret = ivy.pow(\n ivy.sum(ivy.pow(ivy.abs(x), p), axis=axis, keepdims=keepdim),\n float(1.0 / p),\n )\n else:\n raise ValueError\n\n else:\n raise ValueError\n\n if len(ret.shape) == 0:\n ret = ivy.array(\n [ret]\n ) # this is done so as to match shape of output from paddle\n return ret\n\n\n# eig\n@to_ivy_arrays_and_back\ndef eig(x, name=None):\n return ivy.eig(x)\n\n\n# eigvals\n@to_ivy_arrays_and_back\ndef eigvals(x, 
name=None):\n return ivy.eigvals(x)\n\n\n# eigvalsh\n@to_ivy_arrays_and_back\ndef eigvalsh(x, UPLO=\"L\", name=None):\n return ivy.eigvalsh(x, UPLO=UPLO)\n\n\n# eigh\n@to_ivy_arrays_and_back\ndef eigh(x, UPLO=\"L\", name=None):\n return ivy.eigh(x, UPLO=UPLO)\n\n\n# pinv\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef pinv(x, rcond=1e-15, hermitian=False, name=None):\n # TODO: Add hermitian functionality\n return ivy.pinv(x, rtol=rcond)\n\n\n# solve\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef solve(x1, x2, name=None):\n return ivy.solve(x1, x2)\n\n\n# cholesky\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cholesky(x, /, *, upper=False, name=None):\n return ivy.cholesky(x, upper=upper)\n\n\n# bmm\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef bmm(x, y, transpose_x=False, transpose_y=False, name=None):\n if len(ivy.shape(x)) != 3 or len(ivy.shape(y)) != 3:\n raise RuntimeError(\"input must be 3D matrices\")\n x, y = promote_types_of_paddle_inputs(x, y)\n return ivy.matmul(x, y, transpose_a=transpose_x, transpose_b=transpose_y)\n", "path": "ivy/functional/frontends/paddle/tensor/linalg.py"}]} | 1,818 | 205 |
gh_patches_debug_63106 | rasdani/github-patches | git_diff | kornia__kornia-1263 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] save pointcloud does not update num_points when inf
## 🐛 Bug
The function `K.utils.save_pointcloud_ply` doesn't update the final number of points to be serialized when one of the values contains an infinite value.
How to fix:
update this line https://github.com/kornia/kornia/blob/master/kornia/utils/pointcloud_io.py#L34
```python
if not bool(torch.isfinite(xyz).any()):
continue
```
with
```python
if not bool(torch.isfinite(xyz).any()):
num_points -= 1
continue
```
--- END ISSUE ---
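To make the proposed change concrete, here is a self-contained sketch of the counting behaviour the issue asks for (hypothetical helper name and toy tensor, not from the repository): the counter is decremented whenever a row is skipped, so the PLY header's vertex count matches the rows actually written by `save_pointcloud_ply` (shown in the file below).
```python
import torch

def count_serialized_points(pointcloud: torch.Tensor) -> int:
    # Mirrors the loop in save_pointcloud_ply: a row is skipped when none of its
    # coordinates are finite (the same torch.isfinite(...).any() test), and the
    # proposed fix decrements the counter so the header stays consistent.
    xyz_vec = pointcloud.reshape(-1, 3)
    num_points = xyz_vec.shape[0]
    for idx in range(xyz_vec.shape[0]):
        if not bool(torch.isfinite(xyz_vec[idx]).any()):
            num_points -= 1
    return num_points

cloud = torch.tensor([[0.0, 1.0, 2.0], [float("inf"), float("inf"), float("inf")]])
print(count_serialized_points(cloud))  # 1: the all-non-finite row is not counted
```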
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kornia/utils/pointcloud_io.py`
Content:
```
1 import os
2 from typing import Optional
3
4 import torch
5
6
7 def save_pointcloud_ply(filename: str, pointcloud: torch.Tensor) -> None:
8 r"""Utility function to save to disk a pointcloud in PLY format.
9
10 Args:
11 filename: the path to save the pointcloud.
12 pointcloud: tensor containing the pointcloud to save.
13 The tensor must be in the shape of :math:`(*, 3)` where the last
14 component is assumed to be a 3d point coordinate :math:`(X, Y, Z)`.
15 """
16 if not isinstance(filename, str) and filename[-3:] == '.ply':
17 raise TypeError("Input filename must be a string in with the .ply " "extension. Got {}".format(filename))
18
19 if not torch.is_tensor(pointcloud):
20 raise TypeError(f"Input pointcloud type is not a torch.Tensor. Got {type(pointcloud)}")
21
22 if not len(pointcloud.shape) == 3 and pointcloud.shape[-1] == 3:
23 raise TypeError("Input pointcloud must be in the following shape " "HxWx3. Got {}.".format(pointcloud.shape))
24
25 # flatten the input pointcloud in a vector to iterate points
26 xyz_vec: torch.Tensor = pointcloud.reshape(-1, 3)
27
28 with open(filename, 'w') as f:
29 data_str: str = ''
30 num_points: int = xyz_vec.shape[0]
31 for idx in range(num_points):
32 xyz = xyz_vec[idx]
33 if not bool(torch.isfinite(xyz).any()):
34 continue
35 x: float = xyz[0].item()
36 y: float = xyz[1].item()
37 z: float = xyz[2].item()
38 data_str += f'{x} {y} {z}\n'
39
40 f.write("ply\n")
41 f.write("format ascii 1.0\n")
42 f.write("comment arraiy generated\n")
43 f.write("element vertex %d\n" % num_points)
44 f.write("property double x\n")
45 f.write("property double y\n")
46 f.write("property double z\n")
47 f.write("end_header\n")
48 f.write(data_str)
49
50
51 def load_pointcloud_ply(filename: str, header_size: int = 8) -> torch.Tensor:
52 r"""Utility function to load from disk a pointcloud in PLY format.
53
54 Args:
55 filename: the path to the pointcloud.
56 header_size: the size of the ply file header that will
57 be skipped during loading.
58
59 Return:
60 tensor containing the loaded point with shape :math:`(*, 3)` where
61 :math:`*` represents the number of points.
62 """
63 if not isinstance(filename, str) and filename[-3:] == '.ply':
64 raise TypeError("Input filename must be a string in with the .ply " "extension. Got {}".format(filename))
65 if not os.path.isfile(filename):
66 raise ValueError("Input filename is not an existing file.")
67 if not (isinstance(header_size, int) and header_size > 0):
68 raise TypeError(f"Input header_size must be a positive integer. Got {header_size}.")
69 # open the file and populate tensor
70 with open(filename) as f:
71 points = []
72
73 # skip header
74 lines = f.readlines()[header_size:]
75
76 # iterate over the points
77 for line in lines:
78 x_str, y_str, z_str = line.split()
79 points.append((torch.tensor(float(x_str)), torch.tensor(float(y_str)), torch.tensor(float(z_str))))
80
81 # create tensor from list
82 pointcloud: torch.Tensor = torch.tensor(points)
83 return pointcloud
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/kornia/utils/pointcloud_io.py b/kornia/utils/pointcloud_io.py
--- a/kornia/utils/pointcloud_io.py
+++ b/kornia/utils/pointcloud_io.py
@@ -31,6 +31,7 @@
for idx in range(num_points):
xyz = xyz_vec[idx]
if not bool(torch.isfinite(xyz).any()):
+ num_points -= 1
continue
x: float = xyz[0].item()
y: float = xyz[1].item()
| {"golden_diff": "diff --git a/kornia/utils/pointcloud_io.py b/kornia/utils/pointcloud_io.py\n--- a/kornia/utils/pointcloud_io.py\n+++ b/kornia/utils/pointcloud_io.py\n@@ -31,6 +31,7 @@\n for idx in range(num_points):\n xyz = xyz_vec[idx]\n if not bool(torch.isfinite(xyz).any()):\n+ num_points -= 1\n continue\n x: float = xyz[0].item()\n y: float = xyz[1].item()\n", "issue": "[Bug] save pointcloud not updates num_points when inf\n## \ud83d\udc1b Bug\r\n\r\nThe function `K.utils.save_pointcloud_ply` doesn't update the final number of points to be serialized when one of the values contain an infinite value.\r\n\r\nHow to fix:\r\n\r\nupdate this line https://github.com/kornia/kornia/blob/master/kornia/utils/pointcloud_io.py#L34\r\n\r\n```python\r\n if not bool(torch.isfinite(xyz).any()):\r\n continue\r\n```\r\nby\r\n\r\n```python\r\n if not bool(torch.isfinite(xyz).any()):\r\n num_points -= 1\r\n continue\r\n```\n", "before_files": [{"content": "import os\nfrom typing import Optional\n\nimport torch\n\n\ndef save_pointcloud_ply(filename: str, pointcloud: torch.Tensor) -> None:\n r\"\"\"Utility function to save to disk a pointcloud in PLY format.\n\n Args:\n filename: the path to save the pointcloud.\n pointcloud: tensor containing the pointcloud to save.\n The tensor must be in the shape of :math:`(*, 3)` where the last\n component is assumed to be a 3d point coordinate :math:`(X, Y, Z)`.\n \"\"\"\n if not isinstance(filename, str) and filename[-3:] == '.ply':\n raise TypeError(\"Input filename must be a string in with the .ply \" \"extension. Got {}\".format(filename))\n\n if not torch.is_tensor(pointcloud):\n raise TypeError(f\"Input pointcloud type is not a torch.Tensor. Got {type(pointcloud)}\")\n\n if not len(pointcloud.shape) == 3 and pointcloud.shape[-1] == 3:\n raise TypeError(\"Input pointcloud must be in the following shape \" \"HxWx3. Got {}.\".format(pointcloud.shape))\n\n # flatten the input pointcloud in a vector to iterate points\n xyz_vec: torch.Tensor = pointcloud.reshape(-1, 3)\n\n with open(filename, 'w') as f:\n data_str: str = ''\n num_points: int = xyz_vec.shape[0]\n for idx in range(num_points):\n xyz = xyz_vec[idx]\n if not bool(torch.isfinite(xyz).any()):\n continue\n x: float = xyz[0].item()\n y: float = xyz[1].item()\n z: float = xyz[2].item()\n data_str += f'{x} {y} {z}\\n'\n\n f.write(\"ply\\n\")\n f.write(\"format ascii 1.0\\n\")\n f.write(\"comment arraiy generated\\n\")\n f.write(\"element vertex %d\\n\" % num_points)\n f.write(\"property double x\\n\")\n f.write(\"property double y\\n\")\n f.write(\"property double z\\n\")\n f.write(\"end_header\\n\")\n f.write(data_str)\n\n\ndef load_pointcloud_ply(filename: str, header_size: int = 8) -> torch.Tensor:\n r\"\"\"Utility function to load from disk a pointcloud in PLY format.\n\n Args:\n filename: the path to the pointcloud.\n header_size: the size of the ply file header that will\n be skipped during loading.\n\n Return:\n tensor containing the loaded point with shape :math:`(*, 3)` where\n :math:`*` represents the number of points.\n \"\"\"\n if not isinstance(filename, str) and filename[-3:] == '.ply':\n raise TypeError(\"Input filename must be a string in with the .ply \" \"extension. Got {}\".format(filename))\n if not os.path.isfile(filename):\n raise ValueError(\"Input filename is not an existing file.\")\n if not (isinstance(header_size, int) and header_size > 0):\n raise TypeError(f\"Input header_size must be a positive integer. 
Got {header_size}.\")\n # open the file and populate tensor\n with open(filename) as f:\n points = []\n\n # skip header\n lines = f.readlines()[header_size:]\n\n # iterate over the points\n for line in lines:\n x_str, y_str, z_str = line.split()\n points.append((torch.tensor(float(x_str)), torch.tensor(float(y_str)), torch.tensor(float(z_str))))\n\n # create tensor from list\n pointcloud: torch.Tensor = torch.tensor(points)\n return pointcloud\n", "path": "kornia/utils/pointcloud_io.py"}], "after_files": [{"content": "import os\nfrom typing import Optional\n\nimport torch\n\n\ndef save_pointcloud_ply(filename: str, pointcloud: torch.Tensor) -> None:\n r\"\"\"Utility function to save to disk a pointcloud in PLY format.\n\n Args:\n filename: the path to save the pointcloud.\n pointcloud: tensor containing the pointcloud to save.\n The tensor must be in the shape of :math:`(*, 3)` where the last\n component is assumed to be a 3d point coordinate :math:`(X, Y, Z)`.\n \"\"\"\n if not isinstance(filename, str) and filename[-3:] == '.ply':\n raise TypeError(\"Input filename must be a string in with the .ply \" \"extension. Got {}\".format(filename))\n\n if not torch.is_tensor(pointcloud):\n raise TypeError(f\"Input pointcloud type is not a torch.Tensor. Got {type(pointcloud)}\")\n\n if not len(pointcloud.shape) == 3 and pointcloud.shape[-1] == 3:\n raise TypeError(\"Input pointcloud must be in the following shape \" \"HxWx3. Got {}.\".format(pointcloud.shape))\n\n # flatten the input pointcloud in a vector to iterate points\n xyz_vec: torch.Tensor = pointcloud.reshape(-1, 3)\n\n with open(filename, 'w') as f:\n data_str: str = ''\n num_points: int = xyz_vec.shape[0]\n for idx in range(num_points):\n xyz = xyz_vec[idx]\n if not bool(torch.isfinite(xyz).any()):\n num_points -= 1\n continue\n x: float = xyz[0].item()\n y: float = xyz[1].item()\n z: float = xyz[2].item()\n data_str += f'{x} {y} {z}\\n'\n\n f.write(\"ply\\n\")\n f.write(\"format ascii 1.0\\n\")\n f.write(\"comment arraiy generated\\n\")\n f.write(\"element vertex %d\\n\" % num_points)\n f.write(\"property double x\\n\")\n f.write(\"property double y\\n\")\n f.write(\"property double z\\n\")\n f.write(\"end_header\\n\")\n f.write(data_str)\n\n\ndef load_pointcloud_ply(filename: str, header_size: int = 8) -> torch.Tensor:\n r\"\"\"Utility function to load from disk a pointcloud in PLY format.\n\n Args:\n filename: the path to the pointcloud.\n header_size: the size of the ply file header that will\n be skipped during loading.\n\n Return:\n tensor containing the loaded point with shape :math:`(*, 3)` where\n :math:`*` represents the number of points.\n \"\"\"\n if not isinstance(filename, str) and filename[-3:] == '.ply':\n raise TypeError(\"Input filename must be a string in with the .ply \" \"extension. Got {}\".format(filename))\n if not os.path.isfile(filename):\n raise ValueError(\"Input filename is not an existing file.\")\n if not (isinstance(header_size, int) and header_size > 0):\n raise TypeError(f\"Input header_size must be a positive integer. 
Got {header_size}.\")\n # open the file and populate tensor\n with open(filename) as f:\n points = []\n\n # skip header\n lines = f.readlines()[header_size:]\n\n # iterate over the points\n for line in lines:\n x_str, y_str, z_str = line.split()\n points.append((torch.tensor(float(x_str)), torch.tensor(float(y_str)), torch.tensor(float(z_str))))\n\n # create tensor from list\n pointcloud: torch.Tensor = torch.tensor(points)\n return pointcloud\n", "path": "kornia/utils/pointcloud_io.py"}]} | 1,354 | 120 |
gh_patches_debug_39139 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-304 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add currently_in_translation to translation coverage
The translation coverage lacks the `currently_in_translation` field, which should also be added.
--- END ISSUE ---
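For orientation before the file listings: a hedged sketch of the kind of per-language bucket the issue asks for, counting translations whose `currently_in_translation` flag is set. The function and key names here are assumptions for illustration, not the merged patch; the real change would live in `TranslationCoverageView.get()` shown below.
```python
from types import SimpleNamespace

def coverage_stats(page_translations, num_pages):
    # Sketch only: same counting style as the view below, plus an extra bucket
    # for translations that are currently being translated.
    return {
        'num_page_translations_up_to_date': len([t for t in page_translations if not t.is_outdated]),
        'num_page_translations_currently_in_translation': len(
            [t for t in page_translations if t.currently_in_translation]
        ),
        'num_page_translations_outdated': len([t for t in page_translations if t.is_outdated]),
        'num_page_translations_missing': num_pages - len(page_translations),
    }

demo = [
    SimpleNamespace(is_outdated=False, currently_in_translation=True),
    SimpleNamespace(is_outdated=True, currently_in_translation=False),
]
print(coverage_stats(demo, num_pages=3))
```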
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/cms/views/analytics/translation_coverage_view.py`
Content:
```
1 """Views related to the statistics module"""
2 from django.contrib.auth.decorators import login_required
3 from django.utils.decorators import method_decorator
4 from django.views.generic import TemplateView
5 from django.shortcuts import render
6
7 from ...models import PageTranslation, Region
8 from ...decorators import region_permission_required
9
10
11 @method_decorator(login_required, name='dispatch')
12 @method_decorator(region_permission_required, name='dispatch')
13 class TranslationCoverageView(TemplateView):
14 """
15 Class to create the translation coverage statistic
16 """
17 template_name = 'analytics/translation_coverage.html'
18 base_context = {'current_menu_item': 'translation_coverage'}
19
20 def get(self, request, *args, **kwargs):
21
22 region = Region.get_current_region(request)
23 num_pages = region.pages.count()
24 languages = []
25
26 for language in region.languages:
27 page_translations = PageTranslation.get_translations(region, language)
28 languages.append({
29 'translated_name': language.translated_name,
30 'num_page_translations_up_to_date': len([t for t in page_translations if not t.is_outdated]),
31 'num_page_translations_outdated': len([t for t in page_translations if t.is_outdated]),
32 'num_page_translations_missing': num_pages - page_translations.count()
33 })
34
35 return render(
36 request,
37 self.template_name,
38 {
39 **self.base_context,
40 'languages': languages
41 }
42 )
43
```
Path: `backend/cms/models/pages/page_translation.py`
Content:
```
1 """Models representing a page and page translation with content
2 """
3 import logging
4
5 from django.conf import settings
6 from django.db import models
7 from django.utils import timezone
8
9 from .page import Page
10 from ..languages.language import Language
11 from ...constants import status
12
13
14 logger = logging.getLogger(__name__)
15
16
17 class PageTranslation(models.Model):
18 """Class defining a Translation of a Page
19
20 Args:
21 models : Class inherit of django-Models
22 """
23
24 page = models.ForeignKey(Page, related_name='translations', on_delete=models.CASCADE)
25 language = models.ForeignKey(
26 Language,
27 related_name='page_translations',
28 on_delete=models.CASCADE
29 )
30 slug = models.SlugField(max_length=200, blank=True, allow_unicode=True)
31 title = models.CharField(max_length=250)
32 text = models.TextField(blank=True)
33 status = models.CharField(max_length=6, choices=status.CHOICES, default=status.DRAFT)
34 currently_in_translation = models.BooleanField(default=False)
35 version = models.PositiveIntegerField(default=0)
36 minor_edit = models.BooleanField(default=False)
37 creator = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, on_delete=models.SET_NULL)
38 created_date = models.DateTimeField(default=timezone.now)
39 last_updated = models.DateTimeField(auto_now=True)
40
41 @property
42 def foreign_object(self):
43 return self.page
44
45 @property
46 def ancestor_path(self):
47 return '/'.join([
48 ancestor.get_first_translation([self.language.code]).slug
49 for ancestor in self.page.get_ancestors()
50 ])
51
52 @property
53 def permalink(self):
54 return '/'.join(filter(None, [
55 self.page.region.slug,
56 self.language.code,
57 self.ancestor_path,
58 self.slug
59 ]))
60
61 @property
62 def available_languages(self):
63 languages = self.page.languages
64 languages.remove(self.language)
65 available_languages = {}
66 for language in languages:
67 other_translation = self.page.get_public_translation(language.code)
68 if other_translation:
69 available_languages[language.code] = {
70 'id': other_translation.id,
71 'url': other_translation.permalink
72 }
73 return available_languages
74
75 @property
76 def source_translation(self):
77 source_language_tree_node = self.page.region.language_tree_nodes.get(language=self.language).parent
78 if source_language_tree_node:
79 return self.page.get_translation(source_language_tree_node.code)
80 return None
81
82 @property
83 def latest_public_revision(self):
84 return self.page.translations.filter(
85 language=self.language,
86 status=status.PUBLIC,
87 ).first()
88
89 @property
90 def latest_major_revision(self):
91 return self.page.translations.filter(
92 language=self.language,
93 minor_edit=False,
94 ).first()
95
96 @property
97 def latest_major_public_revision(self):
98 return self.page.translations.filter(
99 language=self.language,
100 status=status.PUBLIC,
101 minor_edit=False,
102 ).first()
103
104 @property
105 def previous_revision(self):
106 version = self.version - 1
107 return self.page.translations.filter(
108 language=self.language,
109 version=version,
110 ).first()
111
112 @property
113 def is_outdated(self):
114 source_translation = self.source_translation
115 # If self.language is the root language, this translation can never be outdated
116 if not source_translation:
117 return False
118 # If the source translation is outdated, this translation can not be up to date
119 if source_translation.is_outdated:
120 return True
121 self_revision = self.latest_major_public_revision
122 source_revision = source_translation.latest_major_public_revision
123 # If one of the translations has no major public revision, it cannot be outdated
124 if not self_revision or not source_revision:
125 return False
126 return self_revision.last_updated < source_revision.last_updated
127
128 @property
129 def combined_text(self):
130 """
131 Combines the text from the PageTranslation with the text from the mirrored page.
132 """
133 if self.page.mirrored_page_first:
134 return self.page.get_mirrored_text(self.language.code) + self.text
135 return self.text + self.page.get_mirrored_text(self.language.code)
136
137 @classmethod
138 def get_translations(cls, region, language):
139 return cls.objects.filter(page__region=region, language=language).distinct('page')
140
141 @classmethod
142 def get_outdated_translations(cls, region, language):
143 return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.is_outdated]
144
145 @classmethod
146 def get_up_to_date_translations(cls, region, language):
147 return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if not t.is_outdated]
148
149 def __str__(self):
150 if self.id:
151 return '(id: {}, page_id: {}, lang: {}, version: {}, slug: {})'.format(self.id, self.page.id, self.language.code, self.version, self.slug)
152 return super(PageTranslation, self).__str__()
153
154 class Meta:
155 ordering = ['page', '-version']
156 default_permissions = ()
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/cms/models/pages/page_translation.py b/backend/cms/models/pages/page_translation.py
--- a/backend/cms/models/pages/page_translation.py
+++ b/backend/cms/models/pages/page_translation.py
@@ -111,6 +111,9 @@
@property
def is_outdated(self):
+ # If the page translation is currently in translation, it is defined as not outdated
+ if self.currently_in_translation:
+ return False
source_translation = self.source_translation
# If self.language is the root language, this translation can never be outdated
if not source_translation:
@@ -125,6 +128,10 @@
return False
return self_revision.last_updated < source_revision.last_updated
+ @property
+ def is_up_to_date(self):
+ return not self.currently_in_translation and not self.is_outdated
+
@property
def combined_text(self):
"""
@@ -139,12 +146,16 @@
return cls.objects.filter(page__region=region, language=language).distinct('page')
@classmethod
- def get_outdated_translations(cls, region, language):
- return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.is_outdated]
+ def get_up_to_date_translations(cls, region, language):
+ return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.is_up_to_date]
@classmethod
- def get_up_to_date_translations(cls, region, language):
- return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if not t.is_outdated]
+ def get_current_translations(cls, region, language):
+ return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.currently_in_translation]
+
+ @classmethod
+ def get_outdated_translations(cls, region, language):
+ return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.is_outdated]
def __str__(self):
if self.id:
diff --git a/backend/cms/views/analytics/translation_coverage_view.py b/backend/cms/views/analytics/translation_coverage_view.py
--- a/backend/cms/views/analytics/translation_coverage_view.py
+++ b/backend/cms/views/analytics/translation_coverage_view.py
@@ -27,7 +27,8 @@
page_translations = PageTranslation.get_translations(region, language)
languages.append({
'translated_name': language.translated_name,
- 'num_page_translations_up_to_date': len([t for t in page_translations if not t.is_outdated]),
+ 'num_page_translations_up_to_date': len([t for t in page_translations if t.is_up_to_date]),
+ 'num_page_translations_currently_in_translation': len([t for t in page_translations if t.currently_in_translation]),
'num_page_translations_outdated': len([t for t in page_translations if t.is_outdated]),
'num_page_translations_missing': num_pages - page_translations.count()
})
| {"golden_diff": "diff --git a/backend/cms/models/pages/page_translation.py b/backend/cms/models/pages/page_translation.py\n--- a/backend/cms/models/pages/page_translation.py\n+++ b/backend/cms/models/pages/page_translation.py\n@@ -111,6 +111,9 @@\n \n @property\n def is_outdated(self):\n+ # If the page translation is currently in translation, it is defined as not outdated\n+ if self.currently_in_translation:\n+ return False\n source_translation = self.source_translation\n # If self.language is the root language, this translation can never be outdated\n if not source_translation:\n@@ -125,6 +128,10 @@\n return False\n return self_revision.last_updated < source_revision.last_updated\n \n+ @property\n+ def is_up_to_date(self):\n+ return not self.currently_in_translation and not self.is_outdated\n+\n @property\n def combined_text(self):\n \"\"\"\n@@ -139,12 +146,16 @@\n return cls.objects.filter(page__region=region, language=language).distinct('page')\n \n @classmethod\n- def get_outdated_translations(cls, region, language):\n- return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.is_outdated]\n+ def get_up_to_date_translations(cls, region, language):\n+ return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.is_up_to_date]\n \n @classmethod\n- def get_up_to_date_translations(cls, region, language):\n- return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if not t.is_outdated]\n+ def get_current_translations(cls, region, language):\n+ return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.currently_in_translation]\n+\n+ @classmethod\n+ def get_outdated_translations(cls, region, language):\n+ return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.is_outdated]\n \n def __str__(self):\n if self.id:\ndiff --git a/backend/cms/views/analytics/translation_coverage_view.py b/backend/cms/views/analytics/translation_coverage_view.py\n--- a/backend/cms/views/analytics/translation_coverage_view.py\n+++ b/backend/cms/views/analytics/translation_coverage_view.py\n@@ -27,7 +27,8 @@\n page_translations = PageTranslation.get_translations(region, language)\n languages.append({\n 'translated_name': language.translated_name,\n- 'num_page_translations_up_to_date': len([t for t in page_translations if not t.is_outdated]),\n+ 'num_page_translations_up_to_date': len([t for t in page_translations if t.is_up_to_date]),\n+ 'num_page_translations_currently_in_translation': len([t for t in page_translations if t.currently_in_translation]),\n 'num_page_translations_outdated': len([t for t in page_translations if t.is_outdated]),\n 'num_page_translations_missing': num_pages - page_translations.count()\n })\n", "issue": "Add currently_in_translation to translation coverage\nThe translation coverage lacks the `currently_in_translation` field, which should also be added.\nAdd currently_in_translation to translation coverage\nThe translation coverage lacks the `currently_in_translation` field, which should also be added.\n", "before_files": [{"content": "\"\"\"Views related to the statistics module\"\"\"\nfrom django.contrib.auth.decorators import login_required\nfrom django.utils.decorators import method_decorator\nfrom django.views.generic import TemplateView\nfrom django.shortcuts import render\n\nfrom ...models import PageTranslation, Region\nfrom ...decorators import 
region_permission_required\n\n\n@method_decorator(login_required, name='dispatch')\n@method_decorator(region_permission_required, name='dispatch')\nclass TranslationCoverageView(TemplateView):\n \"\"\"\n Class to create the translation coverage statistic\n \"\"\"\n template_name = 'analytics/translation_coverage.html'\n base_context = {'current_menu_item': 'translation_coverage'}\n\n def get(self, request, *args, **kwargs):\n\n region = Region.get_current_region(request)\n num_pages = region.pages.count()\n languages = []\n\n for language in region.languages:\n page_translations = PageTranslation.get_translations(region, language)\n languages.append({\n 'translated_name': language.translated_name,\n 'num_page_translations_up_to_date': len([t for t in page_translations if not t.is_outdated]),\n 'num_page_translations_outdated': len([t for t in page_translations if t.is_outdated]),\n 'num_page_translations_missing': num_pages - page_translations.count()\n })\n\n return render(\n request,\n self.template_name,\n {\n **self.base_context,\n 'languages': languages\n }\n )\n", "path": "backend/cms/views/analytics/translation_coverage_view.py"}, {"content": "\"\"\"Models representing a page and page translation with content\n\"\"\"\nimport logging\n\nfrom django.conf import settings\nfrom django.db import models\nfrom django.utils import timezone\n\nfrom .page import Page\nfrom ..languages.language import Language\nfrom ...constants import status\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PageTranslation(models.Model):\n \"\"\"Class defining a Translation of a Page\n\n Args:\n models : Class inherit of django-Models\n \"\"\"\n\n page = models.ForeignKey(Page, related_name='translations', on_delete=models.CASCADE)\n language = models.ForeignKey(\n Language,\n related_name='page_translations',\n on_delete=models.CASCADE\n )\n slug = models.SlugField(max_length=200, blank=True, allow_unicode=True)\n title = models.CharField(max_length=250)\n text = models.TextField(blank=True)\n status = models.CharField(max_length=6, choices=status.CHOICES, default=status.DRAFT)\n currently_in_translation = models.BooleanField(default=False)\n version = models.PositiveIntegerField(default=0)\n minor_edit = models.BooleanField(default=False)\n creator = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, on_delete=models.SET_NULL)\n created_date = models.DateTimeField(default=timezone.now)\n last_updated = models.DateTimeField(auto_now=True)\n\n @property\n def foreign_object(self):\n return self.page\n\n @property\n def ancestor_path(self):\n return '/'.join([\n ancestor.get_first_translation([self.language.code]).slug\n for ancestor in self.page.get_ancestors()\n ])\n\n @property\n def permalink(self):\n return '/'.join(filter(None, [\n self.page.region.slug,\n self.language.code,\n self.ancestor_path,\n self.slug\n ]))\n\n @property\n def available_languages(self):\n languages = self.page.languages\n languages.remove(self.language)\n available_languages = {}\n for language in languages:\n other_translation = self.page.get_public_translation(language.code)\n if other_translation:\n available_languages[language.code] = {\n 'id': other_translation.id,\n 'url': other_translation.permalink\n }\n return available_languages\n\n @property\n def source_translation(self):\n source_language_tree_node = self.page.region.language_tree_nodes.get(language=self.language).parent\n if source_language_tree_node:\n return self.page.get_translation(source_language_tree_node.code)\n return None\n\n @property\n def 
latest_public_revision(self):\n return self.page.translations.filter(\n language=self.language,\n status=status.PUBLIC,\n ).first()\n\n @property\n def latest_major_revision(self):\n return self.page.translations.filter(\n language=self.language,\n minor_edit=False,\n ).first()\n\n @property\n def latest_major_public_revision(self):\n return self.page.translations.filter(\n language=self.language,\n status=status.PUBLIC,\n minor_edit=False,\n ).first()\n\n @property\n def previous_revision(self):\n version = self.version - 1\n return self.page.translations.filter(\n language=self.language,\n version=version,\n ).first()\n\n @property\n def is_outdated(self):\n source_translation = self.source_translation\n # If self.language is the root language, this translation can never be outdated\n if not source_translation:\n return False\n # If the source translation is outdated, this translation can not be up to date\n if source_translation.is_outdated:\n return True\n self_revision = self.latest_major_public_revision\n source_revision = source_translation.latest_major_public_revision\n # If one of the translations has no major public revision, it cannot be outdated\n if not self_revision or not source_revision:\n return False\n return self_revision.last_updated < source_revision.last_updated\n\n @property\n def combined_text(self):\n \"\"\"\n Combines the text from the PageTranslation with the text from the mirrored page.\n \"\"\"\n if self.page.mirrored_page_first:\n return self.page.get_mirrored_text(self.language.code) + self.text\n return self.text + self.page.get_mirrored_text(self.language.code)\n\n @classmethod\n def get_translations(cls, region, language):\n return cls.objects.filter(page__region=region, language=language).distinct('page')\n\n @classmethod\n def get_outdated_translations(cls, region, language):\n return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.is_outdated]\n\n @classmethod\n def get_up_to_date_translations(cls, region, language):\n return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if not t.is_outdated]\n\n def __str__(self):\n if self.id:\n return '(id: {}, page_id: {}, lang: {}, version: {}, slug: {})'.format(self.id, self.page.id, self.language.code, self.version, self.slug)\n return super(PageTranslation, self).__str__()\n\n class Meta:\n ordering = ['page', '-version']\n default_permissions = ()\n", "path": "backend/cms/models/pages/page_translation.py"}], "after_files": [{"content": "\"\"\"Views related to the statistics module\"\"\"\nfrom django.contrib.auth.decorators import login_required\nfrom django.utils.decorators import method_decorator\nfrom django.views.generic import TemplateView\nfrom django.shortcuts import render\n\nfrom ...models import PageTranslation, Region\nfrom ...decorators import region_permission_required\n\n\n@method_decorator(login_required, name='dispatch')\n@method_decorator(region_permission_required, name='dispatch')\nclass TranslationCoverageView(TemplateView):\n \"\"\"\n Class to create the translation coverage statistic\n \"\"\"\n template_name = 'analytics/translation_coverage.html'\n base_context = {'current_menu_item': 'translation_coverage'}\n\n def get(self, request, *args, **kwargs):\n\n region = Region.get_current_region(request)\n num_pages = region.pages.count()\n languages = []\n\n for language in region.languages:\n page_translations = PageTranslation.get_translations(region, language)\n languages.append({\n 'translated_name': 
language.translated_name,\n 'num_page_translations_up_to_date': len([t for t in page_translations if t.is_up_to_date]),\n 'num_page_translations_currently_in_translation': len([t for t in page_translations if t.currently_in_translation]),\n 'num_page_translations_outdated': len([t for t in page_translations if t.is_outdated]),\n 'num_page_translations_missing': num_pages - page_translations.count()\n })\n\n return render(\n request,\n self.template_name,\n {\n **self.base_context,\n 'languages': languages\n }\n )\n", "path": "backend/cms/views/analytics/translation_coverage_view.py"}, {"content": "\"\"\"Models representing a page and page translation with content\n\"\"\"\nimport logging\n\nfrom django.conf import settings\nfrom django.db import models\nfrom django.utils import timezone\n\nfrom .page import Page\nfrom ..languages.language import Language\nfrom ...constants import status\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PageTranslation(models.Model):\n \"\"\"Class defining a Translation of a Page\n\n Args:\n models : Class inherit of django-Models\n \"\"\"\n\n page = models.ForeignKey(Page, related_name='translations', on_delete=models.CASCADE)\n language = models.ForeignKey(\n Language,\n related_name='page_translations',\n on_delete=models.CASCADE\n )\n slug = models.SlugField(max_length=200, blank=True, allow_unicode=True)\n title = models.CharField(max_length=250)\n text = models.TextField(blank=True)\n status = models.CharField(max_length=6, choices=status.CHOICES, default=status.DRAFT)\n currently_in_translation = models.BooleanField(default=False)\n version = models.PositiveIntegerField(default=0)\n minor_edit = models.BooleanField(default=False)\n creator = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, on_delete=models.SET_NULL)\n created_date = models.DateTimeField(default=timezone.now)\n last_updated = models.DateTimeField(auto_now=True)\n\n @property\n def foreign_object(self):\n return self.page\n\n @property\n def ancestor_path(self):\n return '/'.join([\n ancestor.get_first_translation([self.language.code]).slug\n for ancestor in self.page.get_ancestors()\n ])\n\n @property\n def permalink(self):\n return '/'.join(filter(None, [\n self.page.region.slug,\n self.language.code,\n self.ancestor_path,\n self.slug\n ]))\n\n @property\n def available_languages(self):\n languages = self.page.languages\n languages.remove(self.language)\n available_languages = {}\n for language in languages:\n other_translation = self.page.get_public_translation(language.code)\n if other_translation:\n available_languages[language.code] = {\n 'id': other_translation.id,\n 'url': other_translation.permalink\n }\n return available_languages\n\n @property\n def source_translation(self):\n source_language_tree_node = self.page.region.language_tree_nodes.get(language=self.language).parent\n if source_language_tree_node:\n return self.page.get_translation(source_language_tree_node.code)\n return None\n\n @property\n def latest_public_revision(self):\n return self.page.translations.filter(\n language=self.language,\n status=status.PUBLIC,\n ).first()\n\n @property\n def latest_major_revision(self):\n return self.page.translations.filter(\n language=self.language,\n minor_edit=False,\n ).first()\n\n @property\n def latest_major_public_revision(self):\n return self.page.translations.filter(\n language=self.language,\n status=status.PUBLIC,\n minor_edit=False,\n ).first()\n\n @property\n def previous_revision(self):\n version = self.version - 1\n return self.page.translations.filter(\n 
language=self.language,\n version=version,\n ).first()\n\n @property\n def is_outdated(self):\n # If the page translation is currently in translation, it is defined as not outdated\n if self.currently_in_translation:\n return False\n source_translation = self.source_translation\n # If self.language is the root language, this translation can never be outdated\n if not source_translation:\n return False\n # If the source translation is outdated, this translation can not be up to date\n if source_translation.is_outdated:\n return True\n self_revision = self.latest_major_public_revision\n source_revision = source_translation.latest_major_public_revision\n # If one of the translations has no major public revision, it cannot be outdated\n if not self_revision or not source_revision:\n return False\n return self_revision.last_updated < source_revision.last_updated\n\n @property\n def is_up_to_date(self):\n return not self.currently_in_translation and not self.is_outdated\n\n @property\n def combined_text(self):\n \"\"\"\n Combines the text from the PageTranslation with the text from the mirrored page.\n \"\"\"\n if self.page.mirrored_page_first:\n return self.page.get_mirrored_text(self.language.code) + self.text\n return self.text + self.page.get_mirrored_text(self.language.code)\n\n @classmethod\n def get_translations(cls, region, language):\n return cls.objects.filter(page__region=region, language=language).distinct('page')\n\n @classmethod\n def get_up_to_date_translations(cls, region, language):\n return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.is_up_to_date]\n\n @classmethod\n def get_current_translations(cls, region, language):\n return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.currently_in_translation]\n\n @classmethod\n def get_outdated_translations(cls, region, language):\n return [t for t in cls.objects.filter(page__region=region, language=language).distinct('page') if t.is_outdated]\n\n def __str__(self):\n if self.id:\n return '(id: {}, page_id: {}, lang: {}, version: {}, slug: {})'.format(self.id, self.page.id, self.language.code, self.version, self.slug)\n return super(PageTranslation, self).__str__()\n\n class Meta:\n ordering = ['page', '-version']\n default_permissions = ()\n", "path": "backend/cms/models/pages/page_translation.py"}]} | 2,163 | 713 |
gh_patches_debug_42864 | rasdani/github-patches | git_diff | sunpy__sunpy-4129 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Maintain coherence between keycomments and the metadict
See #2748
This is probably best implemented by adding the functionality to our `MetaDict` object or something, so that we don't have to do it manually everywhere.
--- END ISSUE ---
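As a rough illustration of the idea only (not sunpy's actual implementation; the class and method names below are invented), a mapping can prune its own `keycomments` entry whenever a key is deleted, which keeps the comments coherent without manual bookkeeping at every call site:

```python
# Hedged sketch: keep a 'keycomments' dict in sync with the main mapping.
from collections import OrderedDict


class SyncedMeta(OrderedDict):
    def _prune_keycomments(self) -> None:
        # Drop comments whose key no longer exists in the mapping itself.
        comments = self.get("keycomments")
        if isinstance(comments, dict):
            for key in list(comments):
                if key not in self:
                    del comments[key]

    def __delitem__(self, key) -> None:
        super().__delitem__(key)
        self._prune_keycomments()


meta = SyncedMeta(
    [("telescop", "SDO/AIA"), ("keycomments", {"telescop": "instrument name"})]
)
del meta["telescop"]
assert meta["keycomments"] == {}  # stale comment removed automatically
```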
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sunpy/util/metadata.py`
Content:
```
1 """
2 This module provides a generalized dictionary class that deals with header
3 parsing and normalization.
4 """
5 from collections import OrderedDict
6
7 __all__ = ['MetaDict']
8
9
10 class MetaDict(OrderedDict):
11 """
12 A class to hold metadata associated with a `sunpy.map.Map
13 <sunpy.map.map_factory.MapFactory.__call__>` derivative.
14
15 This class handles everything in lower case. This allows case
16 insensitive indexing.
17 """
18
19 def __init__(self, *args):
20 """
21 Creates a new MapHeader instance.
22 """
23 # Store all keys as upper-case to allow for case-insensitive indexing
24 # OrderedDict can be instantiated from a list of lists or a tuple of tuples
25 tags = dict()
26 if args:
27 args = list(args)
28 adict = args[0]
29 if isinstance(adict, list) or isinstance(adict, tuple):
30 tags = OrderedDict((k.upper(), v) for k, v in adict)
31 elif isinstance(adict, dict):
32 tags = OrderedDict((k.upper(), v) for k, v in adict.items())
33 else:
34 raise TypeError("Can not create a MetaDict from this type input")
35 args[0] = tags
36
37 super().__init__(*args)
38
39 def __contains__(self, key):
40 """
41 Override ``__contains__``.
42 """
43 return OrderedDict.__contains__(self, key.lower())
44
45 def __getitem__(self, key):
46 """
47 Override ``[]`` indexing.
48 """
49 return OrderedDict.__getitem__(self, key.lower())
50
51 def __setitem__(self, key, value):
52 """
53 Override ``[]`` indexing.
54 """
55 return OrderedDict.__setitem__(self, key.lower(), value)
56
57 def get(self, key, default=None):
58 """
59 Override ``.get()`` indexing.
60 """
61 return OrderedDict.get(self, key.lower(), default)
62
63 def has_key(self, key):
64 """
65 Override ``.has_key()`` to perform case-insensitively.
66 """
67 return key.lower() in self
68
69 def pop(self, key, default=None):
70 """
71 Override ``.pop()`` to perform case-insensitively.
72 """
73 return OrderedDict.pop(self, key.lower(), default)
74
75 def update(self, d2):
76 """
77 Override ``.update()`` to perform case-insensitively.
78 """
79 return OrderedDict.update(self, OrderedDict((k.lower(), v) for k, v in d2.items()))
80
81 def setdefault(self, key, default=None):
82 """
83 Override ``.setdefault()`` to perform case-insensitively.
84 """
85 return OrderedDict.setdefault(self, key.lower(), default)
86
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sunpy/util/metadata.py b/sunpy/util/metadata.py
--- a/sunpy/util/metadata.py
+++ b/sunpy/util/metadata.py
@@ -1,6 +1,6 @@
"""
This module provides a generalized dictionary class that deals with header
-parsing and normalization.
+parsing, normalization, and maintaining coherence between keys and keycomments.
"""
from collections import OrderedDict
@@ -14,28 +14,67 @@
This class handles everything in lower case. This allows case
insensitive indexing.
+
+ If the key 'keycomments' exists, its value must be a dictionary mapping
+ keys in the `MetaDict` to their comments. The casing of keys in the
+ keycomments dictionary is not significant. If a key is removed from the
+ `MetaDict`, it will also be removed from the keycomments dictionary.
+ Additionally, any extraneous keycomments will be removed when the
+ `MetaDict` is instantiated.
"""
def __init__(self, *args):
"""
- Creates a new MapHeader instance.
+ Creates a new MetaDict instance.
"""
- # Store all keys as upper-case to allow for case-insensitive indexing
+ # Store all keys as lower-case to allow for case-insensitive indexing
# OrderedDict can be instantiated from a list of lists or a tuple of tuples
tags = dict()
if args:
args = list(args)
adict = args[0]
if isinstance(adict, list) or isinstance(adict, tuple):
- tags = OrderedDict((k.upper(), v) for k, v in adict)
+ tags = OrderedDict((k.lower(), v) for k, v in adict)
elif isinstance(adict, dict):
- tags = OrderedDict((k.upper(), v) for k, v in adict.items())
+ tags = OrderedDict((k.lower(), v) for k, v in adict.items())
else:
raise TypeError("Can not create a MetaDict from this type input")
args[0] = tags
super().__init__(*args)
+ # Use `copy=True` to avoid mutating the caller's keycomments
+ # dictionary (if they provided one).
+ self._prune_keycomments(copy=True)
+
+ def _prune_keycomments(self, copy=False):
+ """
+ Remove keycomments for keys that are not contained in the MetaDict.
+
+ Parameters
+ ----------
+ copy : `bool`, optional
+ Make a copy of the current keycomments dict before removing keys.
+ """
+ if 'keycomments' not in self:
+ return
+
+ keycomments = self['keycomments']
+
+ if not isinstance(keycomments, dict):
+ raise TypeError(
+ "'keycomments' key must have a value of type `dict`. Found "
+ "the following type: %r" % type(keycomments))
+
+ if copy:
+ keycomments = keycomments.copy()
+
+ for key in list(keycomments.keys()):
+ if key not in self:
+ del keycomments[key]
+
+ self['keycomments'] = keycomments
+
def __contains__(self, key):
"""
Override ``__contains__``.
@@ -54,6 +93,15 @@
"""
return OrderedDict.__setitem__(self, key.lower(), value)
+ # Note: `OrderedDict.popitem()` does not need to be overridden to prune
+ # keycomments because it calls `__delitem__` internally.
+ def __delitem__(self, key):
+ """
+ Override ``del dict[key]`` key deletion.
+ """
+ OrderedDict.__delitem__(self, key.lower())
+ self._prune_keycomments()
+
def get(self, key, default=None):
"""
Override ``.get()`` indexing.
@@ -70,7 +118,11 @@
"""
Override ``.pop()`` to perform case-insensitively.
"""
- return OrderedDict.pop(self, key.lower(), default)
+ has_key = key in self
+ result = OrderedDict.pop(self, key.lower(), default)
+ if has_key:
+ self._prune_keycomments()
+ return result
def update(self, d2):
"""
| {"golden_diff": "diff --git a/sunpy/util/metadata.py b/sunpy/util/metadata.py\n--- a/sunpy/util/metadata.py\n+++ b/sunpy/util/metadata.py\n@@ -1,6 +1,6 @@\n \"\"\"\n This module provides a generalized dictionary class that deals with header\n-parsing and normalization.\n+parsing, normalization, and maintaining coherence between keys and keycomments.\n \"\"\"\n from collections import OrderedDict\n \n@@ -14,28 +14,67 @@\n \n This class handles everything in lower case. This allows case\n insensitive indexing.\n+\n+ If the key 'keycomments' exists, its value must be a dictionary mapping\n+ keys in the `MetaDict` to their comments. The casing of keys in the\n+ keycomments dictionary is not significant. If a key is removed from the\n+ `MetaDict`, it will also be removed from the keycomments dictionary.\n+ Additionally, any extraneous keycomments will be removed when the\n+ `MetaDict` is instantiated.\n \"\"\"\n \n def __init__(self, *args):\n \"\"\"\n- Creates a new MapHeader instance.\n+ Creates a new MetaDict instance.\n \"\"\"\n- # Store all keys as upper-case to allow for case-insensitive indexing\n+ # Store all keys as lower-case to allow for case-insensitive indexing\n # OrderedDict can be instantiated from a list of lists or a tuple of tuples\n tags = dict()\n if args:\n args = list(args)\n adict = args[0]\n if isinstance(adict, list) or isinstance(adict, tuple):\n- tags = OrderedDict((k.upper(), v) for k, v in adict)\n+ tags = OrderedDict((k.lower(), v) for k, v in adict)\n elif isinstance(adict, dict):\n- tags = OrderedDict((k.upper(), v) for k, v in adict.items())\n+ tags = OrderedDict((k.lower(), v) for k, v in adict.items())\n else:\n raise TypeError(\"Can not create a MetaDict from this type input\")\n args[0] = tags\n \n super().__init__(*args)\n \n+ # Use `copy=True` to avoid mutating the caller's keycomments\n+ # dictionary (if they provided one).\n+ self._prune_keycomments(copy=True)\n+\n+ def _prune_keycomments(self, copy=False):\n+ \"\"\"\n+ Remove keycomments for keys that are not contained in the MetaDict.\n+\n+ Parameters\n+ ----------\n+ copy : `bool`, optional\n+ Make a copy of the current keycomments dict before removing keys.\n+ \"\"\"\n+ if 'keycomments' not in self:\n+ return\n+\n+ keycomments = self['keycomments']\n+\n+ if not isinstance(keycomments, dict):\n+ raise TypeError(\n+ \"'keycomments' key must have a value of type `dict`. 
Found \"\n+ \"the following type: %r\" % type(keycomments))\n+\n+ if copy:\n+ keycomments = keycomments.copy()\n+\n+ for key in list(keycomments.keys()):\n+ if key not in self:\n+ del keycomments[key]\n+\n+ self['keycomments'] = keycomments\n+\n def __contains__(self, key):\n \"\"\"\n Override ``__contains__``.\n@@ -54,6 +93,15 @@\n \"\"\"\n return OrderedDict.__setitem__(self, key.lower(), value)\n \n+ # Note: `OrderedDict.popitem()` does not need to be overridden to prune\n+ # keycomments because it calls `__delitem__` internally.\n+ def __delitem__(self, key):\n+ \"\"\"\n+ Override ``del dict[key]`` key deletion.\n+ \"\"\"\n+ OrderedDict.__delitem__(self, key.lower())\n+ self._prune_keycomments()\n+\n def get(self, key, default=None):\n \"\"\"\n Override ``.get()`` indexing.\n@@ -70,7 +118,11 @@\n \"\"\"\n Override ``.pop()`` to perform case-insensitively.\n \"\"\"\n- return OrderedDict.pop(self, key.lower(), default)\n+ has_key = key in self\n+ result = OrderedDict.pop(self, key.lower(), default)\n+ if has_key:\n+ self._prune_keycomments()\n+ return result\n \n def update(self, d2):\n \"\"\"\n", "issue": "Maintain coherence between keycomments and the metadict\nSee #2748 \r\n\r\nThis is probably best implemented by adding the functionality to our `MetaDict` object or something, so that we don't have to do it manually everywhere.\n", "before_files": [{"content": "\"\"\"\nThis module provides a generalized dictionary class that deals with header\nparsing and normalization.\n\"\"\"\nfrom collections import OrderedDict\n\n__all__ = ['MetaDict']\n\n\nclass MetaDict(OrderedDict):\n \"\"\"\n A class to hold metadata associated with a `sunpy.map.Map\n <sunpy.map.map_factory.MapFactory.__call__>` derivative.\n\n This class handles everything in lower case. 
This allows case\n insensitive indexing.\n \"\"\"\n\n def __init__(self, *args):\n \"\"\"\n Creates a new MapHeader instance.\n \"\"\"\n # Store all keys as upper-case to allow for case-insensitive indexing\n # OrderedDict can be instantiated from a list of lists or a tuple of tuples\n tags = dict()\n if args:\n args = list(args)\n adict = args[0]\n if isinstance(adict, list) or isinstance(adict, tuple):\n tags = OrderedDict((k.upper(), v) for k, v in adict)\n elif isinstance(adict, dict):\n tags = OrderedDict((k.upper(), v) for k, v in adict.items())\n else:\n raise TypeError(\"Can not create a MetaDict from this type input\")\n args[0] = tags\n\n super().__init__(*args)\n\n def __contains__(self, key):\n \"\"\"\n Override ``__contains__``.\n \"\"\"\n return OrderedDict.__contains__(self, key.lower())\n\n def __getitem__(self, key):\n \"\"\"\n Override ``[]`` indexing.\n \"\"\"\n return OrderedDict.__getitem__(self, key.lower())\n\n def __setitem__(self, key, value):\n \"\"\"\n Override ``[]`` indexing.\n \"\"\"\n return OrderedDict.__setitem__(self, key.lower(), value)\n\n def get(self, key, default=None):\n \"\"\"\n Override ``.get()`` indexing.\n \"\"\"\n return OrderedDict.get(self, key.lower(), default)\n\n def has_key(self, key):\n \"\"\"\n Override ``.has_key()`` to perform case-insensitively.\n \"\"\"\n return key.lower() in self\n\n def pop(self, key, default=None):\n \"\"\"\n Override ``.pop()`` to perform case-insensitively.\n \"\"\"\n return OrderedDict.pop(self, key.lower(), default)\n\n def update(self, d2):\n \"\"\"\n Override ``.update()`` to perform case-insensitively.\n \"\"\"\n return OrderedDict.update(self, OrderedDict((k.lower(), v) for k, v in d2.items()))\n\n def setdefault(self, key, default=None):\n \"\"\"\n Override ``.setdefault()`` to perform case-insensitively.\n \"\"\"\n return OrderedDict.setdefault(self, key.lower(), default)\n", "path": "sunpy/util/metadata.py"}], "after_files": [{"content": "\"\"\"\nThis module provides a generalized dictionary class that deals with header\nparsing, normalization, and maintaining coherence between keys and keycomments.\n\"\"\"\nfrom collections import OrderedDict\n\n__all__ = ['MetaDict']\n\n\nclass MetaDict(OrderedDict):\n \"\"\"\n A class to hold metadata associated with a `sunpy.map.Map\n <sunpy.map.map_factory.MapFactory.__call__>` derivative.\n\n This class handles everything in lower case. This allows case\n insensitive indexing.\n\n If the key 'keycomments' exists, its value must be a dictionary mapping\n keys in the `MetaDict` to their comments. The casing of keys in the\n keycomments dictionary is not significant. 
If a key is removed from the\n `MetaDict`, it will also be removed from the keycomments dictionary.\n Additionally, any extraneous keycomments will be removed when the\n `MetaDict` is instantiated.\n \"\"\"\n\n def __init__(self, *args):\n \"\"\"\n Creates a new MetaDict instance.\n \"\"\"\n # Store all keys as lower-case to allow for case-insensitive indexing\n # OrderedDict can be instantiated from a list of lists or a tuple of tuples\n tags = dict()\n if args:\n args = list(args)\n adict = args[0]\n if isinstance(adict, list) or isinstance(adict, tuple):\n tags = OrderedDict((k.lower(), v) for k, v in adict)\n elif isinstance(adict, dict):\n tags = OrderedDict((k.lower(), v) for k, v in adict.items())\n else:\n raise TypeError(\"Can not create a MetaDict from this type input\")\n args[0] = tags\n\n super().__init__(*args)\n\n # Use `copy=True` to avoid mutating the caller's keycomments\n # dictionary (if they provided one).\n self._prune_keycomments(copy=True)\n\n def _prune_keycomments(self, copy=False):\n \"\"\"\n Remove keycomments for keys that are not contained in the MetaDict.\n\n Parameters\n ----------\n copy : `bool`, optional\n Make a copy of the current keycomments dict before removing keys.\n \"\"\"\n if 'keycomments' not in self:\n return\n\n keycomments = self['keycomments']\n\n if not isinstance(keycomments, dict):\n raise TypeError(\n \"'keycomments' key must have a value of type `dict`. Found \"\n \"the following type: %r\" % type(keycomments))\n\n if copy:\n keycomments = keycomments.copy()\n\n for key in list(keycomments.keys()):\n if key not in self:\n del keycomments[key]\n\n self['keycomments'] = keycomments\n\n def __contains__(self, key):\n \"\"\"\n Override ``__contains__``.\n \"\"\"\n return OrderedDict.__contains__(self, key.lower())\n\n def __getitem__(self, key):\n \"\"\"\n Override ``[]`` indexing.\n \"\"\"\n return OrderedDict.__getitem__(self, key.lower())\n\n def __setitem__(self, key, value):\n \"\"\"\n Override ``[]`` indexing.\n \"\"\"\n return OrderedDict.__setitem__(self, key.lower(), value)\n\n # Note: `OrderedDict.popitem()` does not need to be overridden to prune\n # keycomments because it calls `__delitem__` internally.\n def __delitem__(self, key):\n \"\"\"\n Override ``del dict[key]`` key deletion.\n \"\"\"\n OrderedDict.__delitem__(self, key.lower())\n self._prune_keycomments()\n\n def get(self, key, default=None):\n \"\"\"\n Override ``.get()`` indexing.\n \"\"\"\n return OrderedDict.get(self, key.lower(), default)\n\n def has_key(self, key):\n \"\"\"\n Override ``.has_key()`` to perform case-insensitively.\n \"\"\"\n return key.lower() in self\n\n def pop(self, key, default=None):\n \"\"\"\n Override ``.pop()`` to perform case-insensitively.\n \"\"\"\n has_key = key in self\n result = OrderedDict.pop(self, key.lower(), default)\n if has_key:\n self._prune_keycomments()\n return result\n\n def update(self, d2):\n \"\"\"\n Override ``.update()`` to perform case-insensitively.\n \"\"\"\n return OrderedDict.update(self, OrderedDict((k.lower(), v) for k, v in d2.items()))\n\n def setdefault(self, key, default=None):\n \"\"\"\n Override ``.setdefault()`` to perform case-insensitively.\n \"\"\"\n return OrderedDict.setdefault(self, key.lower(), default)\n", "path": "sunpy/util/metadata.py"}]} | 1,048 | 951 |
gh_patches_debug_14578 | rasdani/github-patches | git_diff | vllm-project__vllm-2151 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
torch.distributed.all_reduce does not free memory
I've visualized the memory usage:
* llama 7B, TP=1
<img width="3346" alt="Screenshot 2023-12-16 at 11 14 03 PM" src="https://github.com/vllm-project/vllm/assets/46394894/e6ed7069-2190-4823-8f25-8e27bd94fe35">
The activation memory is reused after every layer.
* llama-70B, TP=8
<img width="3247" alt="Screenshot 2023-12-16 at 11 20 10 PM" src="https://github.com/vllm-project/vllm/assets/46394894/b5f492bb-7262-4c06-a040-7796e0f7fc06">
**However, when using TP, the activation memory for all reduce is not reused**
_Originally posted by @WoosukKwon in https://github.com/vllm-project/vllm/pull/2031#discussion_r1429046645_
--- END ISSUE ---
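For context, a hedged sketch of the mitigation this issue points at: setting `TORCH_NCCL_AVOID_RECORD_STREAMS=1` (a real PyTorch environment variable, also referenced in the fix) so the caching allocator can reuse all-reduce input buffers instead of keeping them alive until synchronization. The wrapper function below is illustrative, not vLLM's actual code path.

```python
# Hedged sketch: avoid stream recording for collective inputs so activation
# buffers passed to all_reduce can be reused between layers. The env var is
# read when the NCCL process group is used, so set it before any collective.
import os

os.environ.setdefault("TORCH_NCCL_AVOID_RECORD_STREAMS", "1")

import torch
import torch.distributed as dist


def tensor_parallel_all_reduce(x: torch.Tensor) -> torch.Tensor:
    """In-place sum-reduce across all ranks; no-op for single-process runs."""
    if dist.is_available() and dist.is_initialized() and dist.get_world_size() > 1:
        dist.all_reduce(x)
    return x
```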
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vllm/worker/worker.py`
Content:
```
1 """A GPU worker class."""
2 import os
3 from typing import Dict, List, Optional, Tuple
4
5 import torch
6 import torch.distributed
7
8 from vllm.config import (CacheConfig, ModelConfig, ParallelConfig,
9 SchedulerConfig)
10 from vllm.model_executor import set_random_seed
11 from vllm.model_executor.parallel_utils import cupy_utils
12 from vllm.model_executor.parallel_utils.parallel_state import (
13 initialize_model_parallel)
14 from vllm.sequence import SamplerOutput, SequenceGroupMetadata
15 from vllm.worker.cache_engine import CacheEngine
16 from vllm.worker.model_runner import ModelRunner
17
18
19 class Worker:
20 """A worker class that executes (a partition of) the model on a GPU.
21
22 Each worker is associated with a single GPU. The worker is responsible for
23 maintaining the KV cache and executing the model on the GPU. In case of
24 distributed inference, each worker is assigned a partition of the model.
25 """
26
27 def __init__(
28 self,
29 model_config: ModelConfig,
30 parallel_config: ParallelConfig,
31 scheduler_config: SchedulerConfig,
32 rank: Optional[int] = None,
33 distributed_init_method: Optional[str] = None,
34 ) -> None:
35 self.model_config = model_config
36 self.parallel_config = parallel_config
37 self.scheduler_config = scheduler_config
38 self.rank = rank
39 self.distributed_init_method = distributed_init_method
40
41 self.model_runner = ModelRunner(model_config, parallel_config,
42 scheduler_config)
43 # Uninitialized cache engine. Will be initialized by
44 # self.init_cache_engine().
45 self.cache_config = None
46 self.cache_engine = None
47 self.cache_events = None
48 self.gpu_cache = None
49
50 def init_model(self, cupy_port: Optional[int] = None):
51 # This env var set by Ray causes exceptions with graph building.
52 os.environ.pop("NCCL_ASYNC_ERROR_HANDLING", None)
53 # Env vars will be set by Ray.
54 self.rank = self.rank if self.rank is not None else int(
55 os.getenv("RANK", "-1"))
56 local_rank = int(os.getenv("LOCAL_RANK", "0"))
57 self.device = torch.device(f"cuda:{local_rank}")
58 if self.rank < 0:
59 raise ValueError("Invalid or unspecified rank.")
60 torch.cuda.set_device(self.device)
61
62 _check_if_gpu_supports_dtype(self.model_config.dtype)
63
64 # Initialize the distributed environment.
65 _init_distributed_environment(self.parallel_config, self.rank,
66 cupy_port, self.distributed_init_method)
67
68 # Initialize the model.
69 set_random_seed(self.model_config.seed)
70
71 def load_model(self):
72 self.model_runner.load_model()
73
74 @torch.inference_mode()
75 def profile_num_available_blocks(
76 self,
77 block_size: int,
78 gpu_memory_utilization: float,
79 cpu_swap_space: int,
80 ) -> Tuple[int, int]:
81 # Profile the memory usage of the model and get the maximum number of
82 # cache blocks that can be allocated with the remaining free memory.
83 torch.cuda.empty_cache()
84
85 # Execute a forward pass with dummy inputs to profile the memory usage
86 # of the model.
87 self.model_runner.profile_run()
88
89 # Calculate the number of blocks that can be allocated with the
90 # profiled peak memory.
91 torch.cuda.synchronize()
92 free_gpu_memory, total_gpu_memory = torch.cuda.mem_get_info()
93 peak_memory = total_gpu_memory - free_gpu_memory
94
95 cache_block_size = CacheEngine.get_cache_block_size(
96 block_size, self.model_config, self.parallel_config)
97 num_gpu_blocks = int(
98 (total_gpu_memory * gpu_memory_utilization - peak_memory) //
99 cache_block_size)
100 num_cpu_blocks = int(cpu_swap_space // cache_block_size)
101 num_gpu_blocks = max(num_gpu_blocks, 0)
102 num_cpu_blocks = max(num_cpu_blocks, 0)
103 torch.cuda.empty_cache()
104 return num_gpu_blocks, num_cpu_blocks
105
106 def init_cache_engine(self, cache_config: CacheConfig) -> None:
107 self.cache_config = cache_config
108 self.cache_engine = CacheEngine(self.cache_config, self.model_config,
109 self.parallel_config)
110 self.cache_events = self.cache_engine.events
111 self.gpu_cache = self.cache_engine.gpu_cache
112 self.model_runner.set_block_size(self.cache_engine.block_size)
113
114 def warm_up_model(self) -> None:
115 if not self.model_config.enforce_eager:
116 self.model_runner.capture_model(self.gpu_cache)
117 # Reset the seed to ensure that the random state is not affected by
118 # the model initialization and profiling.
119 set_random_seed(self.model_config.seed)
120
121 @torch.inference_mode()
122 def execute_model(
123 self,
124 seq_group_metadata_list: List[SequenceGroupMetadata],
125 blocks_to_swap_in: Dict[int, int],
126 blocks_to_swap_out: Dict[int, int],
127 blocks_to_copy: Dict[int, List[int]],
128 ) -> SamplerOutput:
129 # Issue cache operations.
130 issued_cache_op = False
131 if blocks_to_swap_in:
132 self.cache_engine.swap_in(blocks_to_swap_in)
133 issued_cache_op = True
134 if blocks_to_swap_out:
135 self.cache_engine.swap_out(blocks_to_swap_out)
136 issued_cache_op = True
137 if blocks_to_copy:
138 self.cache_engine.copy(blocks_to_copy)
139 issued_cache_op = True
140
141 cache_events = self.cache_events if issued_cache_op else None
142
143 # Wait for cache operations to finish.
144 # TODO(woosuk): Profile swapping overhead and optimize if needed.
145 if cache_events is not None:
146 for event in cache_events:
147 event.wait()
148 # If there is no input, we don't need to execute the model.
149 if not seq_group_metadata_list:
150 return {}
151
152 output = self.model_runner.execute_model(seq_group_metadata_list,
153 self.gpu_cache)
154 return output
155
156
157 def _init_distributed_environment(
158 parallel_config: ParallelConfig,
159 rank: int,
160 cupy_port: Optional[int],
161 distributed_init_method: Optional[str] = None,
162 ) -> None:
163 """Initialize the distributed environment."""
164 if torch.distributed.is_initialized():
165 torch_world_size = torch.distributed.get_world_size()
166 if torch_world_size != parallel_config.world_size:
167 raise RuntimeError(
168 "torch.distributed is already initialized but the torch world "
169 "size does not match parallel_config.world_size "
170 f"({torch_world_size} vs. {parallel_config.world_size}).")
171 elif not distributed_init_method:
172 raise ValueError(
173 "distributed_init_method must be set if torch.distributed "
174 "is not already initialized")
175 else:
176 torch.distributed.init_process_group(
177 backend="nccl",
178 world_size=parallel_config.world_size,
179 rank=rank,
180 init_method=distributed_init_method,
181 )
182
183 if cupy_utils.is_initialized():
184 cupy_world_size = cupy_utils.get_world_size()
185 if cupy_world_size != parallel_config.world_size:
186 raise RuntimeError(
187 "cupy.distributed is already initialized but the cupy world "
188 "size does not match parallel_config.world_size "
189 f"({cupy_world_size} vs. {parallel_config.world_size}).")
190 elif parallel_config.world_size > 1:
191 # NOTE(woosuk): We don't initialize CuPy process group when world size
192 # is 1.
193 # TODO(woosuk): Support multi-node connection.
194 cupy_utils.init_process_group(
195 world_size=parallel_config.world_size,
196 rank=rank,
197 host="localhost",
198 port=cupy_port,
199 )
200
201 if parallel_config.world_size > 1:
202 # A small all_reduce for warmup.
203 torch.distributed.all_reduce(torch.zeros(1).cuda())
204 cupy_utils.all_reduce(torch.zeros(1).cuda())
205
206 initialize_model_parallel(parallel_config.tensor_parallel_size,
207 parallel_config.pipeline_parallel_size)
208
209
210 def _check_if_gpu_supports_dtype(torch_dtype: torch.dtype):
211 # Check if the GPU supports the dtype.
212 if torch_dtype == torch.bfloat16:
213 compute_capability = torch.cuda.get_device_capability()
214 if compute_capability[0] < 8:
215 gpu_name = torch.cuda.get_device_name()
216 raise ValueError(
217 "Bfloat16 is only supported on GPUs with compute capability "
218 f"of at least 8.0. Your {gpu_name} GPU has compute capability "
219 f"{compute_capability[0]}.{compute_capability[1]}.")
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vllm/worker/worker.py b/vllm/worker/worker.py
--- a/vllm/worker/worker.py
+++ b/vllm/worker/worker.py
@@ -48,6 +48,14 @@
self.gpu_cache = None
def init_model(self, cupy_port: Optional[int] = None):
+ # torch.distributed.all_reduce does not free the input tensor until
+ # the synchronization point. This causes the memory usage to grow
+ # as the number of all_reduce calls increases. This env var disables
+ # this behavior.
+ # Related issue:
+ # https://discuss.pytorch.org/t/cuda-allocation-lifetime-for-inputs-to-distributed-all-reduce/191573
+ os.environ["TORCH_NCCL_AVOID_RECORD_STREAMS"] = "1"
+
# This env var set by Ray causes exceptions with graph building.
os.environ.pop("NCCL_ASYNC_ERROR_HANDLING", None)
# Env vars will be set by Ray.
| {"golden_diff": "diff --git a/vllm/worker/worker.py b/vllm/worker/worker.py\n--- a/vllm/worker/worker.py\n+++ b/vllm/worker/worker.py\n@@ -48,6 +48,14 @@\n self.gpu_cache = None\n \n def init_model(self, cupy_port: Optional[int] = None):\n+ # torch.distributed.all_reduce does not free the input tensor until\n+ # the synchronization point. This causes the memory usage to grow\n+ # as the number of all_reduce calls increases. This env var disables\n+ # this behavior.\n+ # Related issue:\n+ # https://discuss.pytorch.org/t/cuda-allocation-lifetime-for-inputs-to-distributed-all-reduce/191573\n+ os.environ[\"TORCH_NCCL_AVOID_RECORD_STREAMS\"] = \"1\"\n+\n # This env var set by Ray causes exceptions with graph building.\n os.environ.pop(\"NCCL_ASYNC_ERROR_HANDLING\", None)\n # Env vars will be set by Ray.\n", "issue": "torch.distributed.all_reduce does not free memory\n I've visualized the memory usage:\r\n\r\n* llama 7B, TP=1\r\n<img width=\"3346\" alt=\"Screenshot 2023-12-16 at 11 14 03\u202fPM\" src=\"https://github.com/vllm-project/vllm/assets/46394894/e6ed7069-2190-4823-8f25-8e27bd94fe35\">\r\n\r\nThe activation memory is reused after every layer.\r\n\r\n* llama-70B, TP=8\r\n<img width=\"3247\" alt=\"Screenshot 2023-12-16 at 11 20 10\u202fPM\" src=\"https://github.com/vllm-project/vllm/assets/46394894/b5f492bb-7262-4c06-a040-7796e0f7fc06\">\r\n\r\n**However, when using TP, the activation memory for all reduce is not reused**\r\n\r\n_Originally posted by @WoosukKwon in https://github.com/vllm-project/vllm/pull/2031#discussion_r1429046645_\r\n \n", "before_files": [{"content": "\"\"\"A GPU worker class.\"\"\"\nimport os\nfrom typing import Dict, List, Optional, Tuple\n\nimport torch\nimport torch.distributed\n\nfrom vllm.config import (CacheConfig, ModelConfig, ParallelConfig,\n SchedulerConfig)\nfrom vllm.model_executor import set_random_seed\nfrom vllm.model_executor.parallel_utils import cupy_utils\nfrom vllm.model_executor.parallel_utils.parallel_state import (\n initialize_model_parallel)\nfrom vllm.sequence import SamplerOutput, SequenceGroupMetadata\nfrom vllm.worker.cache_engine import CacheEngine\nfrom vllm.worker.model_runner import ModelRunner\n\n\nclass Worker:\n \"\"\"A worker class that executes (a partition of) the model on a GPU.\n\n Each worker is associated with a single GPU. The worker is responsible for\n maintaining the KV cache and executing the model on the GPU. In case of\n distributed inference, each worker is assigned a partition of the model.\n \"\"\"\n\n def __init__(\n self,\n model_config: ModelConfig,\n parallel_config: ParallelConfig,\n scheduler_config: SchedulerConfig,\n rank: Optional[int] = None,\n distributed_init_method: Optional[str] = None,\n ) -> None:\n self.model_config = model_config\n self.parallel_config = parallel_config\n self.scheduler_config = scheduler_config\n self.rank = rank\n self.distributed_init_method = distributed_init_method\n\n self.model_runner = ModelRunner(model_config, parallel_config,\n scheduler_config)\n # Uninitialized cache engine. 
Will be initialized by\n # self.init_cache_engine().\n self.cache_config = None\n self.cache_engine = None\n self.cache_events = None\n self.gpu_cache = None\n\n def init_model(self, cupy_port: Optional[int] = None):\n # This env var set by Ray causes exceptions with graph building.\n os.environ.pop(\"NCCL_ASYNC_ERROR_HANDLING\", None)\n # Env vars will be set by Ray.\n self.rank = self.rank if self.rank is not None else int(\n os.getenv(\"RANK\", \"-1\"))\n local_rank = int(os.getenv(\"LOCAL_RANK\", \"0\"))\n self.device = torch.device(f\"cuda:{local_rank}\")\n if self.rank < 0:\n raise ValueError(\"Invalid or unspecified rank.\")\n torch.cuda.set_device(self.device)\n\n _check_if_gpu_supports_dtype(self.model_config.dtype)\n\n # Initialize the distributed environment.\n _init_distributed_environment(self.parallel_config, self.rank,\n cupy_port, self.distributed_init_method)\n\n # Initialize the model.\n set_random_seed(self.model_config.seed)\n\n def load_model(self):\n self.model_runner.load_model()\n\n @torch.inference_mode()\n def profile_num_available_blocks(\n self,\n block_size: int,\n gpu_memory_utilization: float,\n cpu_swap_space: int,\n ) -> Tuple[int, int]:\n # Profile the memory usage of the model and get the maximum number of\n # cache blocks that can be allocated with the remaining free memory.\n torch.cuda.empty_cache()\n\n # Execute a forward pass with dummy inputs to profile the memory usage\n # of the model.\n self.model_runner.profile_run()\n\n # Calculate the number of blocks that can be allocated with the\n # profiled peak memory.\n torch.cuda.synchronize()\n free_gpu_memory, total_gpu_memory = torch.cuda.mem_get_info()\n peak_memory = total_gpu_memory - free_gpu_memory\n\n cache_block_size = CacheEngine.get_cache_block_size(\n block_size, self.model_config, self.parallel_config)\n num_gpu_blocks = int(\n (total_gpu_memory * gpu_memory_utilization - peak_memory) //\n cache_block_size)\n num_cpu_blocks = int(cpu_swap_space // cache_block_size)\n num_gpu_blocks = max(num_gpu_blocks, 0)\n num_cpu_blocks = max(num_cpu_blocks, 0)\n torch.cuda.empty_cache()\n return num_gpu_blocks, num_cpu_blocks\n\n def init_cache_engine(self, cache_config: CacheConfig) -> None:\n self.cache_config = cache_config\n self.cache_engine = CacheEngine(self.cache_config, self.model_config,\n self.parallel_config)\n self.cache_events = self.cache_engine.events\n self.gpu_cache = self.cache_engine.gpu_cache\n self.model_runner.set_block_size(self.cache_engine.block_size)\n\n def warm_up_model(self) -> None:\n if not self.model_config.enforce_eager:\n self.model_runner.capture_model(self.gpu_cache)\n # Reset the seed to ensure that the random state is not affected by\n # the model initialization and profiling.\n set_random_seed(self.model_config.seed)\n\n @torch.inference_mode()\n def execute_model(\n self,\n seq_group_metadata_list: List[SequenceGroupMetadata],\n blocks_to_swap_in: Dict[int, int],\n blocks_to_swap_out: Dict[int, int],\n blocks_to_copy: Dict[int, List[int]],\n ) -> SamplerOutput:\n # Issue cache operations.\n issued_cache_op = False\n if blocks_to_swap_in:\n self.cache_engine.swap_in(blocks_to_swap_in)\n issued_cache_op = True\n if blocks_to_swap_out:\n self.cache_engine.swap_out(blocks_to_swap_out)\n issued_cache_op = True\n if blocks_to_copy:\n self.cache_engine.copy(blocks_to_copy)\n issued_cache_op = True\n\n cache_events = self.cache_events if issued_cache_op else None\n\n # Wait for cache operations to finish.\n # TODO(woosuk): Profile swapping overhead and optimize if 
needed.\n if cache_events is not None:\n for event in cache_events:\n event.wait()\n # If there is no input, we don't need to execute the model.\n if not seq_group_metadata_list:\n return {}\n\n output = self.model_runner.execute_model(seq_group_metadata_list,\n self.gpu_cache)\n return output\n\n\ndef _init_distributed_environment(\n parallel_config: ParallelConfig,\n rank: int,\n cupy_port: Optional[int],\n distributed_init_method: Optional[str] = None,\n) -> None:\n \"\"\"Initialize the distributed environment.\"\"\"\n if torch.distributed.is_initialized():\n torch_world_size = torch.distributed.get_world_size()\n if torch_world_size != parallel_config.world_size:\n raise RuntimeError(\n \"torch.distributed is already initialized but the torch world \"\n \"size does not match parallel_config.world_size \"\n f\"({torch_world_size} vs. {parallel_config.world_size}).\")\n elif not distributed_init_method:\n raise ValueError(\n \"distributed_init_method must be set if torch.distributed \"\n \"is not already initialized\")\n else:\n torch.distributed.init_process_group(\n backend=\"nccl\",\n world_size=parallel_config.world_size,\n rank=rank,\n init_method=distributed_init_method,\n )\n\n if cupy_utils.is_initialized():\n cupy_world_size = cupy_utils.get_world_size()\n if cupy_world_size != parallel_config.world_size:\n raise RuntimeError(\n \"cupy.distributed is already initialized but the cupy world \"\n \"size does not match parallel_config.world_size \"\n f\"({cupy_world_size} vs. {parallel_config.world_size}).\")\n elif parallel_config.world_size > 1:\n # NOTE(woosuk): We don't initialize CuPy process group when world size\n # is 1.\n # TODO(woosuk): Support multi-node connection.\n cupy_utils.init_process_group(\n world_size=parallel_config.world_size,\n rank=rank,\n host=\"localhost\",\n port=cupy_port,\n )\n\n if parallel_config.world_size > 1:\n # A small all_reduce for warmup.\n torch.distributed.all_reduce(torch.zeros(1).cuda())\n cupy_utils.all_reduce(torch.zeros(1).cuda())\n\n initialize_model_parallel(parallel_config.tensor_parallel_size,\n parallel_config.pipeline_parallel_size)\n\n\ndef _check_if_gpu_supports_dtype(torch_dtype: torch.dtype):\n # Check if the GPU supports the dtype.\n if torch_dtype == torch.bfloat16:\n compute_capability = torch.cuda.get_device_capability()\n if compute_capability[0] < 8:\n gpu_name = torch.cuda.get_device_name()\n raise ValueError(\n \"Bfloat16 is only supported on GPUs with compute capability \"\n f\"of at least 8.0. Your {gpu_name} GPU has compute capability \"\n f\"{compute_capability[0]}.{compute_capability[1]}.\")\n", "path": "vllm/worker/worker.py"}], "after_files": [{"content": "\"\"\"A GPU worker class.\"\"\"\nimport os\nfrom typing import Dict, List, Optional, Tuple\n\nimport torch\nimport torch.distributed\n\nfrom vllm.config import (CacheConfig, ModelConfig, ParallelConfig,\n SchedulerConfig)\nfrom vllm.model_executor import set_random_seed\nfrom vllm.model_executor.parallel_utils import cupy_utils\nfrom vllm.model_executor.parallel_utils.parallel_state import (\n initialize_model_parallel)\nfrom vllm.sequence import SamplerOutput, SequenceGroupMetadata\nfrom vllm.worker.cache_engine import CacheEngine\nfrom vllm.worker.model_runner import ModelRunner\n\n\nclass Worker:\n \"\"\"A worker class that executes (a partition of) the model on a GPU.\n\n Each worker is associated with a single GPU. The worker is responsible for\n maintaining the KV cache and executing the model on the GPU. 
In case of\n distributed inference, each worker is assigned a partition of the model.\n \"\"\"\n\n def __init__(\n self,\n model_config: ModelConfig,\n parallel_config: ParallelConfig,\n scheduler_config: SchedulerConfig,\n rank: Optional[int] = None,\n distributed_init_method: Optional[str] = None,\n ) -> None:\n self.model_config = model_config\n self.parallel_config = parallel_config\n self.scheduler_config = scheduler_config\n self.rank = rank\n self.distributed_init_method = distributed_init_method\n\n self.model_runner = ModelRunner(model_config, parallel_config,\n scheduler_config)\n # Uninitialized cache engine. Will be initialized by\n # self.init_cache_engine().\n self.cache_config = None\n self.cache_engine = None\n self.cache_events = None\n self.gpu_cache = None\n\n def init_model(self, cupy_port: Optional[int] = None):\n # torch.distributed.all_reduce does not free the input tensor until\n # the synchronization point. This causes the memory usage to grow\n # as the number of all_reduce calls increases. This env var disables\n # this behavior.\n # Related issue:\n # https://discuss.pytorch.org/t/cuda-allocation-lifetime-for-inputs-to-distributed-all-reduce/191573\n os.environ[\"TORCH_NCCL_AVOID_RECORD_STREAMS\"] = \"1\"\n\n # This env var set by Ray causes exceptions with graph building.\n os.environ.pop(\"NCCL_ASYNC_ERROR_HANDLING\", None)\n # Env vars will be set by Ray.\n self.rank = self.rank if self.rank is not None else int(\n os.getenv(\"RANK\", \"-1\"))\n local_rank = int(os.getenv(\"LOCAL_RANK\", \"0\"))\n self.device = torch.device(f\"cuda:{local_rank}\")\n if self.rank < 0:\n raise ValueError(\"Invalid or unspecified rank.\")\n torch.cuda.set_device(self.device)\n\n _check_if_gpu_supports_dtype(self.model_config.dtype)\n\n # Initialize the distributed environment.\n _init_distributed_environment(self.parallel_config, self.rank,\n cupy_port, self.distributed_init_method)\n\n # Initialize the model.\n set_random_seed(self.model_config.seed)\n\n def load_model(self):\n self.model_runner.load_model()\n\n @torch.inference_mode()\n def profile_num_available_blocks(\n self,\n block_size: int,\n gpu_memory_utilization: float,\n cpu_swap_space: int,\n ) -> Tuple[int, int]:\n # Profile the memory usage of the model and get the maximum number of\n # cache blocks that can be allocated with the remaining free memory.\n torch.cuda.empty_cache()\n\n # Execute a forward pass with dummy inputs to profile the memory usage\n # of the model.\n self.model_runner.profile_run()\n\n # Calculate the number of blocks that can be allocated with the\n # profiled peak memory.\n torch.cuda.synchronize()\n free_gpu_memory, total_gpu_memory = torch.cuda.mem_get_info()\n peak_memory = total_gpu_memory - free_gpu_memory\n\n cache_block_size = CacheEngine.get_cache_block_size(\n block_size, self.model_config, self.parallel_config)\n num_gpu_blocks = int(\n (total_gpu_memory * gpu_memory_utilization - peak_memory) //\n cache_block_size)\n num_cpu_blocks = int(cpu_swap_space // cache_block_size)\n num_gpu_blocks = max(num_gpu_blocks, 0)\n num_cpu_blocks = max(num_cpu_blocks, 0)\n torch.cuda.empty_cache()\n return num_gpu_blocks, num_cpu_blocks\n\n def init_cache_engine(self, cache_config: CacheConfig) -> None:\n self.cache_config = cache_config\n self.cache_engine = CacheEngine(self.cache_config, self.model_config,\n self.parallel_config)\n self.cache_events = self.cache_engine.events\n self.gpu_cache = self.cache_engine.gpu_cache\n self.model_runner.set_block_size(self.cache_engine.block_size)\n\n def 
warm_up_model(self) -> None:\n if not self.model_config.enforce_eager:\n self.model_runner.capture_model(self.gpu_cache)\n # Reset the seed to ensure that the random state is not affected by\n # the model initialization and profiling.\n set_random_seed(self.model_config.seed)\n\n @torch.inference_mode()\n def execute_model(\n self,\n seq_group_metadata_list: List[SequenceGroupMetadata],\n blocks_to_swap_in: Dict[int, int],\n blocks_to_swap_out: Dict[int, int],\n blocks_to_copy: Dict[int, List[int]],\n ) -> SamplerOutput:\n # Issue cache operations.\n issued_cache_op = False\n if blocks_to_swap_in:\n self.cache_engine.swap_in(blocks_to_swap_in)\n issued_cache_op = True\n if blocks_to_swap_out:\n self.cache_engine.swap_out(blocks_to_swap_out)\n issued_cache_op = True\n if blocks_to_copy:\n self.cache_engine.copy(blocks_to_copy)\n issued_cache_op = True\n\n cache_events = self.cache_events if issued_cache_op else None\n\n # Wait for cache operations to finish.\n # TODO(woosuk): Profile swapping overhead and optimize if needed.\n if cache_events is not None:\n for event in cache_events:\n event.wait()\n # If there is no input, we don't need to execute the model.\n if not seq_group_metadata_list:\n return {}\n\n output = self.model_runner.execute_model(seq_group_metadata_list,\n self.gpu_cache)\n return output\n\n\ndef _init_distributed_environment(\n parallel_config: ParallelConfig,\n rank: int,\n cupy_port: Optional[int],\n distributed_init_method: Optional[str] = None,\n) -> None:\n \"\"\"Initialize the distributed environment.\"\"\"\n if torch.distributed.is_initialized():\n torch_world_size = torch.distributed.get_world_size()\n if torch_world_size != parallel_config.world_size:\n raise RuntimeError(\n \"torch.distributed is already initialized but the torch world \"\n \"size does not match parallel_config.world_size \"\n f\"({torch_world_size} vs. {parallel_config.world_size}).\")\n elif not distributed_init_method:\n raise ValueError(\n \"distributed_init_method must be set if torch.distributed \"\n \"is not already initialized\")\n else:\n torch.distributed.init_process_group(\n backend=\"nccl\",\n world_size=parallel_config.world_size,\n rank=rank,\n init_method=distributed_init_method,\n )\n\n if cupy_utils.is_initialized():\n cupy_world_size = cupy_utils.get_world_size()\n if cupy_world_size != parallel_config.world_size:\n raise RuntimeError(\n \"cupy.distributed is already initialized but the cupy world \"\n \"size does not match parallel_config.world_size \"\n f\"({cupy_world_size} vs. {parallel_config.world_size}).\")\n elif parallel_config.world_size > 1:\n # NOTE(woosuk): We don't initialize CuPy process group when world size\n # is 1.\n # TODO(woosuk): Support multi-node connection.\n cupy_utils.init_process_group(\n world_size=parallel_config.world_size,\n rank=rank,\n host=\"localhost\",\n port=cupy_port,\n )\n\n if parallel_config.world_size > 1:\n # A small all_reduce for warmup.\n torch.distributed.all_reduce(torch.zeros(1).cuda())\n cupy_utils.all_reduce(torch.zeros(1).cuda())\n\n initialize_model_parallel(parallel_config.tensor_parallel_size,\n parallel_config.pipeline_parallel_size)\n\n\ndef _check_if_gpu_supports_dtype(torch_dtype: torch.dtype):\n # Check if the GPU supports the dtype.\n if torch_dtype == torch.bfloat16:\n compute_capability = torch.cuda.get_device_capability()\n if compute_capability[0] < 8:\n gpu_name = torch.cuda.get_device_name()\n raise ValueError(\n \"Bfloat16 is only supported on GPUs with compute capability \"\n f\"of at least 8.0. 
Your {gpu_name} GPU has compute capability \"\n f\"{compute_capability[0]}.{compute_capability[1]}.\")\n", "path": "vllm/worker/worker.py"}]} | 2,915 | 232 |
gh_patches_debug_31866 | rasdani/github-patches | git_diff | intel__dffml-566 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
feature: add a load function in high level
Add a `load` function in `high_level.py` so that data can be loaded from sources.
After completing this, #555 can be worked on.
--- END ISSUE ---
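For orientation, here is a minimal sketch of how such a `load` helper might be used, mirroring the async style of the existing `save`/`train` helpers shown in the files below. The `load` name and its async-iterator behaviour are assumptions taken from the feature request, not an existing API.

```python
import asyncio

from dffml.record import Record
from dffml.source.memory import MemorySource, MemorySourceConfig
from dffml.high_level import load  # hypothetical: the helper requested by this issue


async def main():
    # Build an in-memory source holding a single record.
    source = MemorySource(MemorySourceConfig(
        records=[Record("1", data={"features": {"A": 0, "B": 1}})]
    ))
    # Yield every record the source knows about.
    async for record in load(source):
        print(record.export())


asyncio.run(main())
```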
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dffml/high_level.py`
Content:
```
1 """
2 High level abstraction interfaces to DFFML. These are probably going to be used
3 in a lot of quick and dirty python files.
4 """
5 import pathlib
6 from typing import Union, Dict, Any
7
8 from .record import Record
9 from .source.source import Sources, BaseSource
10 from .source.memory import MemorySource, MemorySourceConfig
11
12
13 def _records_to_sources(*args):
14 """
15 Create a memory source out of any records passed as a variable length list.
16 Add all sources found in the variable length list to a list of sources, and
17 the created source containing records, and return that list of sources.
18 """
19 # If the first arg is an instance of sources, append the rest to that.
20 if args and isinstance(args[0], Sources):
21 sources = args[0]
22 else:
23 sources = Sources(
24 *[arg for arg in args if isinstance(arg, BaseSource)]
25 )
26 # Records to add to memory source
27 records = []
28 # Make args mutable
29 args = list(args)
30 # Convert dicts to records
31 for i, arg in enumerate(args):
32 if isinstance(arg, dict):
33 arg = Record(i, data={"features": arg})
34 if isinstance(arg, Record):
35 records.append(arg)
36 if isinstance(arg, str) and "." in arg:
37 filepath = pathlib.Path(arg)
38 source = BaseSource.load(filepath.suffix.replace(".", ""))
39 sources.append(source(filename=arg))
40 # Create memory source if there are any records
41 if records:
42 sources.append(MemorySource(MemorySourceConfig(records=records)))
43 return sources
44
45
46 async def save(source: BaseSource, *args: Record) -> None:
47 """
48 Update a source's knowledge about given records.
49
50 For each record given, call
51 :py:func:`update <dffml.source.source.BaseSourceContext.update>` on the
52 source. Effectively saving all the records to the source.
53
54 Parameters
55 ----------
56 source : BaseSource
57 Data source to use. See :doc:`/plugins/dffml_source` for sources and
58 options.
59 *args : list
60 Records to be saved.
61
62 Examples
63 --------
64
65 >>> source = CSVSource(filename="save.csv", allowempty=True, readwrite=True)
66 >>>
67 >>> async def main():
68 ... await save(
69 ... source,
70 ... Record(
71 ... "myrecord",
72 ... data={
73 ... "features": {"Years": 0, "Expertise": 1, "Trust": 0.1},
74 ... "prediction": {"Salary": {"value": 10, "confidence": 1.0}},
75 ... }
76 ... )
77 ... )
78 ... print(pathlib.Path("save.csv").read_text().strip())
79 >>>
80 >>> asyncio.run(main())
81 key,tag,Expertise,Trust,Years,prediction_Salary,confidence_Salary
82 myrecord,untagged,1,0.1,0,10,1.0
83 """
84 async with source:
85 async with source() as sctx:
86 for record in args:
87 await sctx.update(record)
88
89
90 async def train(model, *args: Union[BaseSource, Record, Dict[str, Any]]):
91 """
92 Train a machine learning model.
93
94 Provide records to the model to train it. The model should be already
95 instantiated.
96
97 Parameters
98 ----------
99 model : Model
100 Machine Learning model to use. See :doc:`/plugins/dffml_model` for
101 models options.
102 *args : list
103 Input data for training. Could be a ``dict``, :py:class:`Record`,
104 filename, one of the data :doc:`/plugins/dffml_source`, or a filename
105 with the extension being one of the data sources.
106
107 Examples
108 --------
109
110 >>> model = LinearRegressionModel(
111 ... features=Features(
112 ... DefFeature("Years", int, 1),
113 ... DefFeature("Expertise", int, 1),
114 ... DefFeature("Trust", float, 1),
115 ... ),
116 ... predict=DefFeature("Salary", int, 1),
117 ... )
118 >>>
119 >>> async def main():
120 ... await train(
121 ... model,
122 ... {"Years": 0, "Expertise": 1, "Trust": 0.1, "Salary": 10},
123 ... {"Years": 1, "Expertise": 3, "Trust": 0.2, "Salary": 20},
124 ... {"Years": 2, "Expertise": 5, "Trust": 0.3, "Salary": 30},
125 ... {"Years": 3, "Expertise": 7, "Trust": 0.4, "Salary": 40},
126 ... )
127 >>>
128 >>> asyncio.run(main())
129 """
130 sources = _records_to_sources(*args)
131 async with sources as sources, model as model:
132 async with sources() as sctx, model() as mctx:
133 return await mctx.train(sctx)
134
135
136 async def accuracy(
137 model, *args: Union[BaseSource, Record, Dict[str, Any]]
138 ) -> float:
139 """
140 Assess the accuracy of a machine learning model.
141
142 Provide records to the model to assess the percent accuracy of its
143 prediction abilities. The model should be already instantiated and trained.
144
145 Parameters
146 ----------
147 model : Model
148 Machine Learning model to use. See :doc:`/plugins/dffml_model` for
149 models options.
150 *args : list
151 Input data for training. Could be a ``dict``, :py:class:`Record`,
152 filename, one of the data :doc:`/plugins/dffml_source`, or a filename
153 with the extension being one of the data sources.
154
155 Returns
156 -------
157 float
158 A decimal value representing the percent of the time the model made the
159 correct prediction. For some models this has another meaning. Please see
160 the documentation for the model your using for further details.
161
162 Examples
163 --------
164
165 >>> model = LinearRegressionModel(
166 ... features=Features(
167 ... DefFeature("Years", int, 1),
168 ... DefFeature("Expertise", int, 1),
169 ... DefFeature("Trust", float, 1),
170 ... ),
171 ... predict=DefFeature("Salary", int, 1),
172 ... )
173 >>>
174 >>> async def main():
175 ... print(
176 ... "Accuracy:",
177 ... await accuracy(
178 ... model,
179 ... {"Years": 4, "Expertise": 9, "Trust": 0.5, "Salary": 50},
180 ... {"Years": 5, "Expertise": 11, "Trust": 0.6, "Salary": 60},
181 ... ),
182 ... )
183 >>>
184 >>> asyncio.run(main())
185 Accuracy: 1.0
186 """
187 sources = _records_to_sources(*args)
188 async with sources as sources, model as model:
189 async with sources() as sctx, model() as mctx:
190 return float(await mctx.accuracy(sctx))
191
192
193 async def predict(
194 model,
195 *args: Union[BaseSource, Record, Dict[str, Any]],
196 update: bool = False,
197 keep_record: bool = False,
198 ):
199 """
200 Make a prediction using a machine learning model.
201
202 The model must be trained before using it to make a prediction.
203
204 Parameters
205 ----------
206 model : Model
207 Machine Learning model to use. See :doc:`/plugins/dffml_model` for
208 models options.
209 *args : list
210 Input data for prediction. Could be a ``dict``, :py:class:`Record`,
211 filename, or one of the data :doc:`/plugins/dffml_source`.
212 update : boolean, optional
213 If ``True`` prediction data within records will be written back to all
214 sources given. Defaults to ``False``.
215 keep_record : boolean, optional
216 If ``True`` the results will be kept as their ``Record`` objects instead
217 of being converted to a ``(record.key, features, predictions)`` tuple.
218 Defaults to ``False``.
219
220 Returns
221 -------
222 asynciterator
223 ``Record`` objects or ``(record.key, features, predictions)`` tuple.
224
225 Examples
226 --------
227
228 >>> model = LinearRegressionModel(
229 ... features=Features(
230 ... DefFeature("Years", int, 1),
231 ... DefFeature("Expertise", int, 1),
232 ... DefFeature("Trust", float, 1),
233 ... ),
234 ... predict=DefFeature("Salary", int, 1),
235 ... )
236 >>>
237 >>> async def main():
238 ... async for i, features, prediction in predict(
239 ... model,
240 ... {"Years": 6, "Expertise": 13, "Trust": 0.7},
241 ... {"Years": 7, "Expertise": 15, "Trust": 0.8},
242 ... ):
243 ... features["Salary"] = round(prediction["Salary"]["value"])
244 ... print(features)
245 >>>
246 >>> asyncio.run(main())
247 {'Years': 6, 'Expertise': 13, 'Trust': 0.7, 'Salary': 70.0}
248 {'Years': 7, 'Expertise': 15, 'Trust': 0.8, 'Salary': 80.0}
249 """
250 sources = _records_to_sources(*args)
251 async with sources as sources, model as model:
252 async with sources() as sctx, model() as mctx:
253 async for record in mctx.predict(sctx.records()):
254 yield record if keep_record else (
255 record.key,
256 record.features(),
257 record.predictions(),
258 )
259 if update:
260 await sctx.update(record)
261
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/dffml/high_level.py b/dffml/high_level.py
--- a/dffml/high_level.py
+++ b/dffml/high_level.py
@@ -3,7 +3,7 @@
in a lot of quick and dirty python files.
"""
import pathlib
-from typing import Union, Dict, Any
+from typing import Union, Dict, Any, AsyncIterator
from .record import Record
from .source.source import Sources, BaseSource
@@ -258,3 +258,60 @@
)
if update:
await sctx.update(record)
+
+
+async def load(source: BaseSource, *args: str) -> AsyncIterator[Record]:
+ """
+ Yields records from a source.
+
+ Yields all the records from the source, if record keys are given then only
+ those records are yielded.
+
+ Parameters
+ ----------
+ source : BaseSource
+ Data source to use. See :doc:`/plugins/dffml_source` for sources and
+ options.
+ *args : str
+ Records to be returned. If empty, all the records in a source will be returned.
+
+ Returns
+ -------
+ asynciterator
+ :py:class:`Record <dffml.record.Record>` object
+
+ Examples
+ --------
+
+ >>> source = CSVSource(filename="save.csv", allowempty=True, readwrite=True)
+ >>>
+ >>> async def main():
+ ... await save(
+ ... source,
+ ... Record("1", data={"features": {"A": 0, "B": 1}}),
+ ... Record("2", data={"features": {"A": 3, "B": 4}}),
+ ... )
+ ...
+ ... # All records in source
+ ... async for record in load(source):
+ ... print(record.export())
+ ...
+ ... # For specific records in a source
+ ... async for record in load(source, "1"):
+ ... print(record.export())
+ >>>
+ >>> asyncio.run(main())
+ {'key': '1', 'features': {'A': 0, 'B': 1}, 'extra': {}}
+ {'key': '2', 'features': {'A': 3, 'B': 4}, 'extra': {}}
+ {'key': '1', 'features': {'A': 0, 'B': 1}, 'extra': {}}
+ """
+ async with source:
+ async with source() as sctx:
+ if args:
+ # If specific records are to be loaded
+ for record in args:
+ yield await sctx.record(record)
+ else:
+ # All the records are loaded
+ async for record in sctx.records():
+ yield record
| {"golden_diff": "diff --git a/dffml/high_level.py b/dffml/high_level.py\n--- a/dffml/high_level.py\n+++ b/dffml/high_level.py\n@@ -3,7 +3,7 @@\n in a lot of quick and dirty python files.\n \"\"\"\n import pathlib\n-from typing import Union, Dict, Any\n+from typing import Union, Dict, Any, AsyncIterator\n \n from .record import Record\n from .source.source import Sources, BaseSource\n@@ -258,3 +258,60 @@\n )\n if update:\n await sctx.update(record)\n+\n+\n+async def load(source: BaseSource, *args: str) -> AsyncIterator[Record]:\n+ \"\"\"\n+ Yields records from a source.\n+\n+ Yields all the records from the source, if record keys are given then only\n+ those records are yielded.\n+\n+ Parameters\n+ ----------\n+ source : BaseSource\n+ Data source to use. See :doc:`/plugins/dffml_source` for sources and\n+ options.\n+ *args : str\n+ Records to be returned. If empty, all the records in a source will be returned.\n+\n+ Returns\n+ -------\n+ asynciterator\n+ :py:class:`Record <dffml.record.Record>` object\n+\n+ Examples\n+ --------\n+\n+ >>> source = CSVSource(filename=\"save.csv\", allowempty=True, readwrite=True)\n+ >>>\n+ >>> async def main():\n+ ... await save(\n+ ... source,\n+ ... Record(\"1\", data={\"features\": {\"A\": 0, \"B\": 1}}),\n+ ... Record(\"2\", data={\"features\": {\"A\": 3, \"B\": 4}}),\n+ ... )\n+ ...\n+ ... # All records in source\n+ ... async for record in load(source):\n+ ... print(record.export())\n+ ...\n+ ... # For specific records in a source\n+ ... async for record in load(source, \"1\"):\n+ ... print(record.export())\n+ >>>\n+ >>> asyncio.run(main())\n+ {'key': '1', 'features': {'A': 0, 'B': 1}, 'extra': {}}\n+ {'key': '2', 'features': {'A': 3, 'B': 4}, 'extra': {}}\n+ {'key': '1', 'features': {'A': 0, 'B': 1}, 'extra': {}}\n+ \"\"\"\n+ async with source:\n+ async with source() as sctx:\n+ if args:\n+ # If specific records are to be loaded\n+ for record in args:\n+ yield await sctx.record(record)\n+ else:\n+ # All the records are loaded\n+ async for record in sctx.records():\n+ yield record\n", "issue": "feature: add a load function in high level\nAdd a load function in `high_level.py` so the data from sources can be loaded.\r\nAfter completing this, #555 can be worked on.\n", "before_files": [{"content": "\"\"\"\nHigh level abstraction interfaces to DFFML. 
These are probably going to be used\nin a lot of quick and dirty python files.\n\"\"\"\nimport pathlib\nfrom typing import Union, Dict, Any\n\nfrom .record import Record\nfrom .source.source import Sources, BaseSource\nfrom .source.memory import MemorySource, MemorySourceConfig\n\n\ndef _records_to_sources(*args):\n \"\"\"\n Create a memory source out of any records passed as a variable length list.\n Add all sources found in the variable length list to a list of sources, and\n the created source containing records, and return that list of sources.\n \"\"\"\n # If the first arg is an instance of sources, append the rest to that.\n if args and isinstance(args[0], Sources):\n sources = args[0]\n else:\n sources = Sources(\n *[arg for arg in args if isinstance(arg, BaseSource)]\n )\n # Records to add to memory source\n records = []\n # Make args mutable\n args = list(args)\n # Convert dicts to records\n for i, arg in enumerate(args):\n if isinstance(arg, dict):\n arg = Record(i, data={\"features\": arg})\n if isinstance(arg, Record):\n records.append(arg)\n if isinstance(arg, str) and \".\" in arg:\n filepath = pathlib.Path(arg)\n source = BaseSource.load(filepath.suffix.replace(\".\", \"\"))\n sources.append(source(filename=arg))\n # Create memory source if there are any records\n if records:\n sources.append(MemorySource(MemorySourceConfig(records=records)))\n return sources\n\n\nasync def save(source: BaseSource, *args: Record) -> None:\n \"\"\"\n Update a source's knowledge about given records.\n\n For each record given, call\n :py:func:`update <dffml.source.source.BaseSourceContext.update>` on the\n source. Effectively saving all the records to the source.\n\n Parameters\n ----------\n source : BaseSource\n Data source to use. See :doc:`/plugins/dffml_source` for sources and\n options.\n *args : list\n Records to be saved.\n\n Examples\n --------\n\n >>> source = CSVSource(filename=\"save.csv\", allowempty=True, readwrite=True)\n >>>\n >>> async def main():\n ... await save(\n ... source,\n ... Record(\n ... \"myrecord\",\n ... data={\n ... \"features\": {\"Years\": 0, \"Expertise\": 1, \"Trust\": 0.1},\n ... \"prediction\": {\"Salary\": {\"value\": 10, \"confidence\": 1.0}},\n ... }\n ... )\n ... )\n ... print(pathlib.Path(\"save.csv\").read_text().strip())\n >>>\n >>> asyncio.run(main())\n key,tag,Expertise,Trust,Years,prediction_Salary,confidence_Salary\n myrecord,untagged,1,0.1,0,10,1.0\n \"\"\"\n async with source:\n async with source() as sctx:\n for record in args:\n await sctx.update(record)\n\n\nasync def train(model, *args: Union[BaseSource, Record, Dict[str, Any]]):\n \"\"\"\n Train a machine learning model.\n\n Provide records to the model to train it. The model should be already\n instantiated.\n\n Parameters\n ----------\n model : Model\n Machine Learning model to use. See :doc:`/plugins/dffml_model` for\n models options.\n *args : list\n Input data for training. Could be a ``dict``, :py:class:`Record`,\n filename, one of the data :doc:`/plugins/dffml_source`, or a filename\n with the extension being one of the data sources.\n\n Examples\n --------\n\n >>> model = LinearRegressionModel(\n ... features=Features(\n ... DefFeature(\"Years\", int, 1),\n ... DefFeature(\"Expertise\", int, 1),\n ... DefFeature(\"Trust\", float, 1),\n ... ),\n ... predict=DefFeature(\"Salary\", int, 1),\n ... )\n >>>\n >>> async def main():\n ... await train(\n ... model,\n ... {\"Years\": 0, \"Expertise\": 1, \"Trust\": 0.1, \"Salary\": 10},\n ... 
{\"Years\": 1, \"Expertise\": 3, \"Trust\": 0.2, \"Salary\": 20},\n ... {\"Years\": 2, \"Expertise\": 5, \"Trust\": 0.3, \"Salary\": 30},\n ... {\"Years\": 3, \"Expertise\": 7, \"Trust\": 0.4, \"Salary\": 40},\n ... )\n >>>\n >>> asyncio.run(main())\n \"\"\"\n sources = _records_to_sources(*args)\n async with sources as sources, model as model:\n async with sources() as sctx, model() as mctx:\n return await mctx.train(sctx)\n\n\nasync def accuracy(\n model, *args: Union[BaseSource, Record, Dict[str, Any]]\n) -> float:\n \"\"\"\n Assess the accuracy of a machine learning model.\n\n Provide records to the model to assess the percent accuracy of its\n prediction abilities. The model should be already instantiated and trained.\n\n Parameters\n ----------\n model : Model\n Machine Learning model to use. See :doc:`/plugins/dffml_model` for\n models options.\n *args : list\n Input data for training. Could be a ``dict``, :py:class:`Record`,\n filename, one of the data :doc:`/plugins/dffml_source`, or a filename\n with the extension being one of the data sources.\n\n Returns\n -------\n float\n A decimal value representing the percent of the time the model made the\n correct prediction. For some models this has another meaning. Please see\n the documentation for the model your using for further details.\n\n Examples\n --------\n\n >>> model = LinearRegressionModel(\n ... features=Features(\n ... DefFeature(\"Years\", int, 1),\n ... DefFeature(\"Expertise\", int, 1),\n ... DefFeature(\"Trust\", float, 1),\n ... ),\n ... predict=DefFeature(\"Salary\", int, 1),\n ... )\n >>>\n >>> async def main():\n ... print(\n ... \"Accuracy:\",\n ... await accuracy(\n ... model,\n ... {\"Years\": 4, \"Expertise\": 9, \"Trust\": 0.5, \"Salary\": 50},\n ... {\"Years\": 5, \"Expertise\": 11, \"Trust\": 0.6, \"Salary\": 60},\n ... ),\n ... )\n >>>\n >>> asyncio.run(main())\n Accuracy: 1.0\n \"\"\"\n sources = _records_to_sources(*args)\n async with sources as sources, model as model:\n async with sources() as sctx, model() as mctx:\n return float(await mctx.accuracy(sctx))\n\n\nasync def predict(\n model,\n *args: Union[BaseSource, Record, Dict[str, Any]],\n update: bool = False,\n keep_record: bool = False,\n):\n \"\"\"\n Make a prediction using a machine learning model.\n\n The model must be trained before using it to make a prediction.\n\n Parameters\n ----------\n model : Model\n Machine Learning model to use. See :doc:`/plugins/dffml_model` for\n models options.\n *args : list\n Input data for prediction. Could be a ``dict``, :py:class:`Record`,\n filename, or one of the data :doc:`/plugins/dffml_source`.\n update : boolean, optional\n If ``True`` prediction data within records will be written back to all\n sources given. Defaults to ``False``.\n keep_record : boolean, optional\n If ``True`` the results will be kept as their ``Record`` objects instead\n of being converted to a ``(record.key, features, predictions)`` tuple.\n Defaults to ``False``.\n\n Returns\n -------\n asynciterator\n ``Record`` objects or ``(record.key, features, predictions)`` tuple.\n\n Examples\n --------\n\n >>> model = LinearRegressionModel(\n ... features=Features(\n ... DefFeature(\"Years\", int, 1),\n ... DefFeature(\"Expertise\", int, 1),\n ... DefFeature(\"Trust\", float, 1),\n ... ),\n ... predict=DefFeature(\"Salary\", int, 1),\n ... )\n >>>\n >>> async def main():\n ... async for i, features, prediction in predict(\n ... model,\n ... {\"Years\": 6, \"Expertise\": 13, \"Trust\": 0.7},\n ... 
{\"Years\": 7, \"Expertise\": 15, \"Trust\": 0.8},\n ... ):\n ... features[\"Salary\"] = round(prediction[\"Salary\"][\"value\"])\n ... print(features)\n >>>\n >>> asyncio.run(main())\n {'Years': 6, 'Expertise': 13, 'Trust': 0.7, 'Salary': 70.0}\n {'Years': 7, 'Expertise': 15, 'Trust': 0.8, 'Salary': 80.0}\n \"\"\"\n sources = _records_to_sources(*args)\n async with sources as sources, model as model:\n async with sources() as sctx, model() as mctx:\n async for record in mctx.predict(sctx.records()):\n yield record if keep_record else (\n record.key,\n record.features(),\n record.predictions(),\n )\n if update:\n await sctx.update(record)\n", "path": "dffml/high_level.py"}], "after_files": [{"content": "\"\"\"\nHigh level abstraction interfaces to DFFML. These are probably going to be used\nin a lot of quick and dirty python files.\n\"\"\"\nimport pathlib\nfrom typing import Union, Dict, Any, AsyncIterator\n\nfrom .record import Record\nfrom .source.source import Sources, BaseSource\nfrom .source.memory import MemorySource, MemorySourceConfig\n\n\ndef _records_to_sources(*args):\n \"\"\"\n Create a memory source out of any records passed as a variable length list.\n Add all sources found in the variable length list to a list of sources, and\n the created source containing records, and return that list of sources.\n \"\"\"\n # If the first arg is an instance of sources, append the rest to that.\n if args and isinstance(args[0], Sources):\n sources = args[0]\n else:\n sources = Sources(\n *[arg for arg in args if isinstance(arg, BaseSource)]\n )\n # Records to add to memory source\n records = []\n # Make args mutable\n args = list(args)\n # Convert dicts to records\n for i, arg in enumerate(args):\n if isinstance(arg, dict):\n arg = Record(i, data={\"features\": arg})\n if isinstance(arg, Record):\n records.append(arg)\n if isinstance(arg, str) and \".\" in arg:\n filepath = pathlib.Path(arg)\n source = BaseSource.load(filepath.suffix.replace(\".\", \"\"))\n sources.append(source(filename=arg))\n # Create memory source if there are any records\n if records:\n sources.append(MemorySource(MemorySourceConfig(records=records)))\n return sources\n\n\nasync def save(source: BaseSource, *args: Record) -> None:\n \"\"\"\n Update a source's knowledge about given records.\n\n For each record given, call\n :py:func:`update <dffml.source.source.BaseSourceContext.update>` on the\n source. Effectively saving all the records to the source.\n\n Parameters\n ----------\n source : BaseSource\n Data source to use. See :doc:`/plugins/dffml_source` for sources and\n options.\n *args : list\n Records to be saved.\n\n Examples\n --------\n\n >>> source = CSVSource(filename=\"save.csv\", allowempty=True, readwrite=True)\n >>>\n >>> async def main():\n ... await save(\n ... source,\n ... Record(\n ... \"myrecord\",\n ... data={\n ... \"features\": {\"Years\": 0, \"Expertise\": 1, \"Trust\": 0.1},\n ... \"prediction\": {\"Salary\": {\"value\": 10, \"confidence\": 1.0}},\n ... }\n ... )\n ... )\n ... print(pathlib.Path(\"save.csv\").read_text().strip())\n >>>\n >>> asyncio.run(main())\n key,tag,Expertise,Trust,Years,prediction_Salary,confidence_Salary\n myrecord,untagged,1,0.1,0,10,1.0\n \"\"\"\n async with source:\n async with source() as sctx:\n for record in args:\n await sctx.update(record)\n\n\nasync def train(model, *args: Union[BaseSource, Record, Dict[str, Any]]):\n \"\"\"\n Train a machine learning model.\n\n Provide records to the model to train it. 
The model should be already\n instantiated.\n\n Parameters\n ----------\n model : Model\n Machine Learning model to use. See :doc:`/plugins/dffml_model` for\n models options.\n *args : list\n Input data for training. Could be a ``dict``, :py:class:`Record`,\n filename, one of the data :doc:`/plugins/dffml_source`, or a filename\n with the extension being one of the data sources.\n\n Examples\n --------\n\n >>> model = LinearRegressionModel(\n ... features=Features(\n ... DefFeature(\"Years\", int, 1),\n ... DefFeature(\"Expertise\", int, 1),\n ... DefFeature(\"Trust\", float, 1),\n ... ),\n ... predict=DefFeature(\"Salary\", int, 1),\n ... )\n >>>\n >>> async def main():\n ... await train(\n ... model,\n ... {\"Years\": 0, \"Expertise\": 1, \"Trust\": 0.1, \"Salary\": 10},\n ... {\"Years\": 1, \"Expertise\": 3, \"Trust\": 0.2, \"Salary\": 20},\n ... {\"Years\": 2, \"Expertise\": 5, \"Trust\": 0.3, \"Salary\": 30},\n ... {\"Years\": 3, \"Expertise\": 7, \"Trust\": 0.4, \"Salary\": 40},\n ... )\n >>>\n >>> asyncio.run(main())\n \"\"\"\n sources = _records_to_sources(*args)\n async with sources as sources, model as model:\n async with sources() as sctx, model() as mctx:\n return await mctx.train(sctx)\n\n\nasync def accuracy(\n model, *args: Union[BaseSource, Record, Dict[str, Any]]\n) -> float:\n \"\"\"\n Assess the accuracy of a machine learning model.\n\n Provide records to the model to assess the percent accuracy of its\n prediction abilities. The model should be already instantiated and trained.\n\n Parameters\n ----------\n model : Model\n Machine Learning model to use. See :doc:`/plugins/dffml_model` for\n models options.\n *args : list\n Input data for training. Could be a ``dict``, :py:class:`Record`,\n filename, one of the data :doc:`/plugins/dffml_source`, or a filename\n with the extension being one of the data sources.\n\n Returns\n -------\n float\n A decimal value representing the percent of the time the model made the\n correct prediction. For some models this has another meaning. Please see\n the documentation for the model your using for further details.\n\n Examples\n --------\n\n >>> model = LinearRegressionModel(\n ... features=Features(\n ... DefFeature(\"Years\", int, 1),\n ... DefFeature(\"Expertise\", int, 1),\n ... DefFeature(\"Trust\", float, 1),\n ... ),\n ... predict=DefFeature(\"Salary\", int, 1),\n ... )\n >>>\n >>> async def main():\n ... print(\n ... \"Accuracy:\",\n ... await accuracy(\n ... model,\n ... {\"Years\": 4, \"Expertise\": 9, \"Trust\": 0.5, \"Salary\": 50},\n ... {\"Years\": 5, \"Expertise\": 11, \"Trust\": 0.6, \"Salary\": 60},\n ... ),\n ... )\n >>>\n >>> asyncio.run(main())\n Accuracy: 1.0\n \"\"\"\n sources = _records_to_sources(*args)\n async with sources as sources, model as model:\n async with sources() as sctx, model() as mctx:\n return float(await mctx.accuracy(sctx))\n\n\nasync def predict(\n model,\n *args: Union[BaseSource, Record, Dict[str, Any]],\n update: bool = False,\n keep_record: bool = False,\n):\n \"\"\"\n Make a prediction using a machine learning model.\n\n The model must be trained before using it to make a prediction.\n\n Parameters\n ----------\n model : Model\n Machine Learning model to use. See :doc:`/plugins/dffml_model` for\n models options.\n *args : list\n Input data for prediction. Could be a ``dict``, :py:class:`Record`,\n filename, or one of the data :doc:`/plugins/dffml_source`.\n update : boolean, optional\n If ``True`` prediction data within records will be written back to all\n sources given. 
Defaults to ``False``.\n keep_record : boolean, optional\n If ``True`` the results will be kept as their ``Record`` objects instead\n of being converted to a ``(record.key, features, predictions)`` tuple.\n Defaults to ``False``.\n\n Returns\n -------\n asynciterator\n ``Record`` objects or ``(record.key, features, predictions)`` tuple.\n\n Examples\n --------\n\n >>> model = LinearRegressionModel(\n ... features=Features(\n ... DefFeature(\"Years\", int, 1),\n ... DefFeature(\"Expertise\", int, 1),\n ... DefFeature(\"Trust\", float, 1),\n ... ),\n ... predict=DefFeature(\"Salary\", int, 1),\n ... )\n >>>\n >>> async def main():\n ... async for i, features, prediction in predict(\n ... model,\n ... {\"Years\": 6, \"Expertise\": 13, \"Trust\": 0.7},\n ... {\"Years\": 7, \"Expertise\": 15, \"Trust\": 0.8},\n ... ):\n ... features[\"Salary\"] = round(prediction[\"Salary\"][\"value\"])\n ... print(features)\n >>>\n >>> asyncio.run(main())\n {'Years': 6, 'Expertise': 13, 'Trust': 0.7, 'Salary': 70.0}\n {'Years': 7, 'Expertise': 15, 'Trust': 0.8, 'Salary': 80.0}\n \"\"\"\n sources = _records_to_sources(*args)\n async with sources as sources, model as model:\n async with sources() as sctx, model() as mctx:\n async for record in mctx.predict(sctx.records()):\n yield record if keep_record else (\n record.key,\n record.features(),\n record.predictions(),\n )\n if update:\n await sctx.update(record)\n\n\nasync def load(source: BaseSource, *args: str) -> AsyncIterator[Record]:\n \"\"\"\n Yields records from a source.\n\n Yields all the records from the source, if record keys are given then only\n those records are yielded.\n\n Parameters\n ----------\n source : BaseSource\n Data source to use. See :doc:`/plugins/dffml_source` for sources and\n options.\n *args : str\n Records to be returned. If empty, all the records in a source will be returned.\n\n Returns\n -------\n asynciterator\n :py:class:`Record <dffml.record.Record>` object\n\n Examples\n --------\n\n >>> source = CSVSource(filename=\"save.csv\", allowempty=True, readwrite=True)\n >>>\n >>> async def main():\n ... await save(\n ... source,\n ... Record(\"1\", data={\"features\": {\"A\": 0, \"B\": 1}}),\n ... Record(\"2\", data={\"features\": {\"A\": 3, \"B\": 4}}),\n ... )\n ...\n ... # All records in source\n ... async for record in load(source):\n ... print(record.export())\n ...\n ... # For specific records in a source\n ... async for record in load(source, \"1\"):\n ... print(record.export())\n >>>\n >>> asyncio.run(main())\n {'key': '1', 'features': {'A': 0, 'B': 1}, 'extra': {}}\n {'key': '2', 'features': {'A': 3, 'B': 4}, 'extra': {}}\n {'key': '1', 'features': {'A': 0, 'B': 1}, 'extra': {}}\n \"\"\"\n async with source:\n async with source() as sctx:\n if args:\n # If specific records are to be loaded\n for record in args:\n yield await sctx.record(record)\n else:\n # All the records are loaded\n async for record in sctx.records():\n yield record\n", "path": "dffml/high_level.py"}]} | 3,145 | 634 |
gh_patches_debug_5108 | rasdani/github-patches | git_diff | roboflow__supervision-289 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wrong COCO format while exporting a `sv.DetectionDataset` using its `as_coco` method
### The Issue:
I export some annotations using:
```python
sv.DetectionDataset(
classes=CLASSES,
images=images,
annotations=annotations
).as_coco(
annotations_path=ANNOTATIONS_PATH,
min_image_area_percentage=0,
max_image_area_percentage=1.,
approximation_percentage=0
)
```
The resulting `.json` is created successfully.
Then, loading the annotations using `pycocotools` and plotting annotations of an image:
```python
from pycocotools.coco import COCO
coco=COCO(ANNOTATIONS_PATH)
imgId = 42 # an arbitrary image id
file_name = coco.loadImgs(imgId)[0]["file_name"]
img = plt.imread(file_name)
plt.imshow(img)
plt.axis("off")
annIds = coco.getAnnIds(imgIds=imgId, iscrowd=None)
anns = coco.loadAnns(annIds)
coco.showAnns(anns, draw_bbox=True)
```
which plots wrong annotations for the given `imgId`: the annotations (box & mask) are drawn at the bottom right, and there appears to be a mismatch between the image and its annotations.

Actually, for any `imgId`, we have the same issue: the image is shown correctly but the annotations are not. Interestingly, the annotations (`annIds`) are always the same for different `imgId`'s. In my case, there is only 1 box & mask per image and for various values of `imgId`, the `annIds` are always equal to `1`.
### The Cause:
The `annIds` are not unique among all the annotations (the "id" in "annotations" in COCO format).
I presume they have to be unique among all annotation id's ([see the following issue](https://github.com/cocodataset/cocoapi/issues/95#issuecomment-448038572)).
However, the current implementation resets the ***annotation id*** to 1 for each image:
https://github.com/roboflow/supervision/blob/4f79d29934ebfbb59d4cb789b70e8e451f545332/supervision/dataset/formats/coco.py#L202
### The (possible) Fix:
Simply changing `label_id` to `annotation_id` in the following line will assign a unique id for each annotation:
https://github.com/roboflow/supervision/blob/4f79d29934ebfbb59d4cb789b70e8e451f545332/supervision/dataset/formats/coco.py#L218
I would be glad to open a PR.
--- END ISSUE ---
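A quick way to confirm the duplicate-ID symptom on an exported file, assuming `ANNOTATIONS_PATH` points at the JSON produced by `as_coco` (a sanity check, not part of the library):

```python
# In valid COCO output, every entry in "annotations" must carry a globally
# unique "id"; before the fix each image restarts its ids at 1.
import json

ANNOTATIONS_PATH = "annotations.json"  # adjust to your export path

with open(ANNOTATIONS_PATH) as f:
    coco = json.load(f)

ids = [ann["id"] for ann in coco["annotations"]]
print("total annotations:", len(ids))
print("unique ids:       ", len(set(ids)))  # far smaller than the total before the fix
```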
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `supervision/dataset/formats/coco.py`
Content:
```
1 import os
2 from datetime import datetime
3 from pathlib import Path
4 from typing import Dict, List, Tuple
5
6 import cv2
7 import numpy as np
8
9 from supervision.dataset.utils import (
10 approximate_mask_with_polygons,
11 map_detections_class_id,
12 )
13 from supervision.detection.core import Detections
14 from supervision.detection.utils import polygon_to_mask
15 from supervision.utils.file import read_json_file, save_json_file
16
17
18 def coco_categories_to_classes(coco_categories: List[dict]) -> List[str]:
19 return [
20 category["name"]
21 for category in sorted(coco_categories, key=lambda category: category["id"])
22 ]
23
24
25 def build_coco_class_index_mapping(
26 coco_categories: List[dict], target_classes: List[str]
27 ) -> Dict[int, int]:
28 source_class_to_index = {
29 category["name"]: category["id"] for category in coco_categories
30 }
31 return {
32 source_class_to_index[target_class_name]: target_class_index
33 for target_class_index, target_class_name in enumerate(target_classes)
34 }
35
36
37 def classes_to_coco_categories(classes: List[str]) -> List[dict]:
38 return [
39 {
40 "id": class_id,
41 "name": class_name,
42 "supercategory": "common-objects",
43 }
44 for class_id, class_name in enumerate(classes)
45 ]
46
47
48 def group_coco_annotations_by_image_id(
49 coco_annotations: List[dict],
50 ) -> Dict[int, List[dict]]:
51 annotations = {}
52 for annotation in coco_annotations:
53 image_id = annotation["image_id"]
54 if image_id not in annotations:
55 annotations[image_id] = []
56 annotations[image_id].append(annotation)
57 return annotations
58
59
60 def _polygons_to_masks(
61 polygons: List[np.ndarray], resolution_wh: Tuple[int, int]
62 ) -> np.ndarray:
63 return np.array(
64 [
65 polygon_to_mask(polygon=polygon, resolution_wh=resolution_wh)
66 for polygon in polygons
67 ],
68 dtype=bool,
69 )
70
71
72 def coco_annotations_to_detections(
73 image_annotations: List[dict], resolution_wh: Tuple[int, int], with_masks: bool
74 ) -> Detections:
75 if not image_annotations:
76 return Detections.empty()
77
78 class_ids = [
79 image_annotation["category_id"] for image_annotation in image_annotations
80 ]
81 xyxy = [image_annotation["bbox"] for image_annotation in image_annotations]
82 xyxy = np.asarray(xyxy)
83 xyxy[:, 2:4] += xyxy[:, 0:2]
84
85 if with_masks:
86 polygons = [
87 np.reshape(
88 np.asarray(image_annotation["segmentation"], dtype=np.int32), (-1, 2)
89 )
90 for image_annotation in image_annotations
91 ]
92 mask = _polygons_to_masks(polygons=polygons, resolution_wh=resolution_wh)
93 return Detections(
94 class_id=np.asarray(class_ids, dtype=int), xyxy=xyxy, mask=mask
95 )
96
97 return Detections(xyxy=xyxy, class_id=np.asarray(class_ids, dtype=int))
98
99
100 def detections_to_coco_annotations(
101 detections: Detections,
102 image_id: int,
103 annotation_id: int,
104 min_image_area_percentage: float = 0.0,
105 max_image_area_percentage: float = 1.0,
106 approximation_percentage: float = 0.75,
107 ) -> Tuple[List[Dict], int]:
108 coco_annotations = []
109 for xyxy, mask, _, class_id, _ in detections:
110 box_width, box_height = xyxy[2] - xyxy[0], xyxy[3] - xyxy[1]
111 polygon = []
112 if mask is not None:
113 polygon = list(
114 approximate_mask_with_polygons(
115 mask=mask,
116 min_image_area_percentage=min_image_area_percentage,
117 max_image_area_percentage=max_image_area_percentage,
118 approximation_percentage=approximation_percentage,
119 )[0].flatten()
120 )
121 coco_annotation = {
122 "id": annotation_id,
123 "image_id": image_id,
124 "category_id": int(class_id),
125 "bbox": [xyxy[0], xyxy[1], box_width, box_height],
126 "area": box_width * box_height,
127 "segmentation": [polygon] if polygon else [],
128 "iscrowd": 0,
129 }
130 coco_annotations.append(coco_annotation)
131 annotation_id += 1
132 return coco_annotations, annotation_id
133
134
135 def load_coco_annotations(
136 images_directory_path: str,
137 annotations_path: str,
138 force_masks: bool = False,
139 ) -> Tuple[List[str], Dict[str, np.ndarray], Dict[str, Detections]]:
140 coco_data = read_json_file(file_path=annotations_path)
141 classes = coco_categories_to_classes(coco_categories=coco_data["categories"])
142 class_index_mapping = build_coco_class_index_mapping(
143 coco_categories=coco_data["categories"], target_classes=classes
144 )
145 coco_images = coco_data["images"]
146 coco_annotations_groups = group_coco_annotations_by_image_id(
147 coco_annotations=coco_data["annotations"]
148 )
149
150 images = {}
151 annotations = {}
152
153 for coco_image in coco_images:
154 image_name, image_width, image_height = (
155 coco_image["file_name"],
156 coco_image["width"],
157 coco_image["height"],
158 )
159 image_annotations = coco_annotations_groups.get(coco_image["id"], [])
160 image_path = os.path.join(images_directory_path, image_name)
161
162 image = cv2.imread(str(image_path))
163 annotation = coco_annotations_to_detections(
164 image_annotations=image_annotations,
165 resolution_wh=(image_width, image_height),
166 with_masks=force_masks,
167 )
168 annotation = map_detections_class_id(
169 source_to_target_mapping=class_index_mapping,
170 detections=annotation,
171 )
172
173 images[image_name] = image
174 annotations[image_name] = annotation
175
176 return classes, images, annotations
177
178
179 def save_coco_annotations(
180 annotation_path: str,
181 images: Dict[str, np.ndarray],
182 annotations: Dict[str, Detections],
183 classes: List[str],
184 min_image_area_percentage: float = 0.0,
185 max_image_area_percentage: float = 1.0,
186 approximation_percentage: float = 0.75,
187 ) -> None:
188 Path(annotation_path).parent.mkdir(parents=True, exist_ok=True)
189 info = {}
190 licenses = [
191 {
192 "id": 1,
193 "url": "https://creativecommons.org/licenses/by/4.0/",
194 "name": "CC BY 4.0",
195 }
196 ]
197
198 coco_annotations = []
199 coco_images = []
200 coco_categories = classes_to_coco_categories(classes=classes)
201
202 image_id, annotation_id = 1, 1
203 for image_name, image in images.items():
204 image_height, image_width, _ = image.shape
205
206 coco_image = {
207 "id": image_id,
208 "license": 1,
209 "file_name": image_name,
210 "height": image_height,
211 "width": image_width,
212 "date_captured": datetime.now().strftime("%m/%d/%Y,%H:%M:%S"),
213 }
214
215 coco_images.append(coco_image)
216 detections = annotations[image_name]
217
218 coco_annotation, label_id = detections_to_coco_annotations(
219 detections=detections,
220 image_id=image_id,
221 annotation_id=annotation_id,
222 min_image_area_percentage=min_image_area_percentage,
223 max_image_area_percentage=max_image_area_percentage,
224 approximation_percentage=approximation_percentage,
225 )
226
227 coco_annotations.extend(coco_annotation)
228 image_id += 1
229
230 annotation_dict = {
231 "info": info,
232 "licenses": licenses,
233 "categories": coco_categories,
234 "images": coco_images,
235 "annotations": coco_annotations,
236 }
237 save_json_file(annotation_dict, file_path=annotation_path)
238
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/supervision/dataset/formats/coco.py b/supervision/dataset/formats/coco.py
--- a/supervision/dataset/formats/coco.py
+++ b/supervision/dataset/formats/coco.py
@@ -215,7 +215,7 @@
coco_images.append(coco_image)
detections = annotations[image_name]
- coco_annotation, label_id = detections_to_coco_annotations(
+ coco_annotation, annotation_id = detections_to_coco_annotations(
detections=detections,
image_id=image_id,
annotation_id=annotation_id,
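The one-line rename above works because `detections_to_coco_annotations` already returns the incremented counter (lines 131–132 of the file shown earlier); reusing that return value keeps annotation ids monotonically increasing across images instead of restarting at 1. A toy illustration of the counter behaviour, not library code:

```python
# Toy illustration only: running annotation-id counter across images after the fix.
annotation_id = 1
for image_id, num_detections in enumerate([2, 3, 1], start=1):
    ids = list(range(annotation_id, annotation_id + num_detections))
    annotation_id += num_detections
    print(f"image {image_id}: annotation ids {ids}")
# image 1: annotation ids [1, 2]
# image 2: annotation ids [3, 4, 5]
# image 3: annotation ids [6]
```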
| {"golden_diff": "diff --git a/supervision/dataset/formats/coco.py b/supervision/dataset/formats/coco.py\n--- a/supervision/dataset/formats/coco.py\n+++ b/supervision/dataset/formats/coco.py\n@@ -215,7 +215,7 @@\n coco_images.append(coco_image)\n detections = annotations[image_name]\n \n- coco_annotation, label_id = detections_to_coco_annotations(\n+ coco_annotation, annotation_id = detections_to_coco_annotations(\n detections=detections,\n image_id=image_id,\n annotation_id=annotation_id,\n", "issue": "Wrong COCO format while exporting a `sv.DetectionDataset` using its `as_coco` method\n### The Issue:\r\nI export some annotations using:\r\n ```python\r\nsv.DetectionDataset(\r\n classes=CLASSES,\r\n images=images,\r\n annotations=annotations\r\n).as_coco(\r\n annotations_path=ANNOTATIONS_PATH,\r\n min_image_area_percentage=0,\r\n max_image_area_percentage=1.,\r\n approximation_percentage=0\r\n)\r\n```\r\nThe resulting `.json` is created successfully. \r\nThen, loading the annotations using `pycocotools` and plotting annotations of an image:\r\n```python\r\nfrom pycocotools.coco import COCO\r\ncoco=COCO(ANNOTATIONS_PATH)\r\n\r\nimgId = 42 # an arbitrary image id\r\nfile_name = coco.loadImgs(imgId)[0][\"file_name\"]\r\n\r\nimg = plt.imread(file_name)\r\nplt.imshow(img)\r\nplt.axis(\"off\")\r\n\r\nannIds = coco.getAnnIds(imgIds=imgId, iscrowd=None)\r\nanns = coco.loadAnns(annIds)\r\ncoco.showAnns(anns, draw_bbox=True)\r\n```\r\nwhich plots wrong annotations for the given `imgId`: the annotations (box & mask) is seen bottom-right and it seems that there is a mismatch between the image and its annotations. \r\n\r\n\r\nActually, for any `imgId`, we have the same issue: the image is shown correctly but the annotations are not. Interestingly, the annotations (`annIds`) are always the same for different `imgId`'s. In my case, there is only 1 box & mask per image and for various values of `imgId`, the `annIds` are always equal to `1`.\r\n\r\n### The Cause:\r\nThe `annIds` are not unique among all the annotations (the \"id\" in \"annotations\" in COCO format). \r\nI presume they have to be unique among all annotation id's ([see the following issue](https://github.com/cocodataset/cocoapi/issues/95#issuecomment-448038572)). \r\n\r\nHowever, the current implementation resets the ***annotation id*** to 1 for each image:\r\nhttps://github.com/roboflow/supervision/blob/4f79d29934ebfbb59d4cb789b70e8e451f545332/supervision/dataset/formats/coco.py#L202\r\n\r\n\r\n### The (possible) Fix:\r\nSimply changing `label_id` to `annotation_id` in the following line will assign a unique id for each annotation:\r\nhttps://github.com/roboflow/supervision/blob/4f79d29934ebfbb59d4cb789b70e8e451f545332/supervision/dataset/formats/coco.py#L218\r\n\r\nI would be glad to open a PR.\r\n\nWrong COCO format while exporting a `sv.DetectionDataset` using its `as_coco` method\n### The Issue:\r\nI export some annotations using:\r\n ```python\r\nsv.DetectionDataset(\r\n classes=CLASSES,\r\n images=images,\r\n annotations=annotations\r\n).as_coco(\r\n annotations_path=ANNOTATIONS_PATH,\r\n min_image_area_percentage=0,\r\n max_image_area_percentage=1.,\r\n approximation_percentage=0\r\n)\r\n```\r\nThe resulting `.json` is created successfully. 
\r\nThen, loading the annotations using `pycocotools` and plotting annotations of an image:\r\n```python\r\nfrom pycocotools.coco import COCO\r\ncoco=COCO(ANNOTATIONS_PATH)\r\n\r\nimgId = 42 # an arbitrary image id\r\nfile_name = coco.loadImgs(imgId)[0][\"file_name\"]\r\n\r\nimg = plt.imread(file_name)\r\nplt.imshow(img)\r\nplt.axis(\"off\")\r\n\r\nannIds = coco.getAnnIds(imgIds=imgId, iscrowd=None)\r\nanns = coco.loadAnns(annIds)\r\ncoco.showAnns(anns, draw_bbox=True)\r\n```\r\nwhich plots wrong annotations for the given `imgId`: the annotations (box & mask) is seen bottom-right and it seems that there is a mismatch between the image and its annotations. \r\n\r\n\r\nActually, for any `imgId`, we have the same issue: the image is shown correctly but the annotations are not. Interestingly, the annotations (`annIds`) are always the same for different `imgId`'s. In my case, there is only 1 box & mask per image and for various values of `imgId`, the `annIds` are always equal to `1`.\r\n\r\n### The Cause:\r\nThe `annIds` are not unique among all the annotations (the \"id\" in \"annotations\" in COCO format). \r\nI presume they have to be unique among all annotation id's ([see the following issue](https://github.com/cocodataset/cocoapi/issues/95#issuecomment-448038572)). \r\n\r\nHowever, the current implementation resets the ***annotation id*** to 1 for each image:\r\nhttps://github.com/roboflow/supervision/blob/4f79d29934ebfbb59d4cb789b70e8e451f545332/supervision/dataset/formats/coco.py#L202\r\n\r\n\r\n### The (possible) Fix:\r\nSimply changing `label_id` to `annotation_id` in the following line will assign a unique id for each annotation:\r\nhttps://github.com/roboflow/supervision/blob/4f79d29934ebfbb59d4cb789b70e8e451f545332/supervision/dataset/formats/coco.py#L218\r\n\r\nI would be glad to open a PR.\r\n\n", "before_files": [{"content": "import os\nfrom datetime import datetime\nfrom pathlib import Path\nfrom typing import Dict, List, Tuple\n\nimport cv2\nimport numpy as np\n\nfrom supervision.dataset.utils import (\n approximate_mask_with_polygons,\n map_detections_class_id,\n)\nfrom supervision.detection.core import Detections\nfrom supervision.detection.utils import polygon_to_mask\nfrom supervision.utils.file import read_json_file, save_json_file\n\n\ndef coco_categories_to_classes(coco_categories: List[dict]) -> List[str]:\n return [\n category[\"name\"]\n for category in sorted(coco_categories, key=lambda category: category[\"id\"])\n ]\n\n\ndef build_coco_class_index_mapping(\n coco_categories: List[dict], target_classes: List[str]\n) -> Dict[int, int]:\n source_class_to_index = {\n category[\"name\"]: category[\"id\"] for category in coco_categories\n }\n return {\n source_class_to_index[target_class_name]: target_class_index\n for target_class_index, target_class_name in enumerate(target_classes)\n }\n\n\ndef classes_to_coco_categories(classes: List[str]) -> List[dict]:\n return [\n {\n \"id\": class_id,\n \"name\": class_name,\n \"supercategory\": \"common-objects\",\n }\n for class_id, class_name in enumerate(classes)\n ]\n\n\ndef group_coco_annotations_by_image_id(\n coco_annotations: List[dict],\n) -> Dict[int, List[dict]]:\n annotations = {}\n for annotation in coco_annotations:\n image_id = annotation[\"image_id\"]\n if image_id not in annotations:\n annotations[image_id] = []\n annotations[image_id].append(annotation)\n return annotations\n\n\ndef _polygons_to_masks(\n polygons: List[np.ndarray], resolution_wh: Tuple[int, int]\n) -> np.ndarray:\n return np.array(\n [\n 
polygon_to_mask(polygon=polygon, resolution_wh=resolution_wh)\n for polygon in polygons\n ],\n dtype=bool,\n )\n\n\ndef coco_annotations_to_detections(\n image_annotations: List[dict], resolution_wh: Tuple[int, int], with_masks: bool\n) -> Detections:\n if not image_annotations:\n return Detections.empty()\n\n class_ids = [\n image_annotation[\"category_id\"] for image_annotation in image_annotations\n ]\n xyxy = [image_annotation[\"bbox\"] for image_annotation in image_annotations]\n xyxy = np.asarray(xyxy)\n xyxy[:, 2:4] += xyxy[:, 0:2]\n\n if with_masks:\n polygons = [\n np.reshape(\n np.asarray(image_annotation[\"segmentation\"], dtype=np.int32), (-1, 2)\n )\n for image_annotation in image_annotations\n ]\n mask = _polygons_to_masks(polygons=polygons, resolution_wh=resolution_wh)\n return Detections(\n class_id=np.asarray(class_ids, dtype=int), xyxy=xyxy, mask=mask\n )\n\n return Detections(xyxy=xyxy, class_id=np.asarray(class_ids, dtype=int))\n\n\ndef detections_to_coco_annotations(\n detections: Detections,\n image_id: int,\n annotation_id: int,\n min_image_area_percentage: float = 0.0,\n max_image_area_percentage: float = 1.0,\n approximation_percentage: float = 0.75,\n) -> Tuple[List[Dict], int]:\n coco_annotations = []\n for xyxy, mask, _, class_id, _ in detections:\n box_width, box_height = xyxy[2] - xyxy[0], xyxy[3] - xyxy[1]\n polygon = []\n if mask is not None:\n polygon = list(\n approximate_mask_with_polygons(\n mask=mask,\n min_image_area_percentage=min_image_area_percentage,\n max_image_area_percentage=max_image_area_percentage,\n approximation_percentage=approximation_percentage,\n )[0].flatten()\n )\n coco_annotation = {\n \"id\": annotation_id,\n \"image_id\": image_id,\n \"category_id\": int(class_id),\n \"bbox\": [xyxy[0], xyxy[1], box_width, box_height],\n \"area\": box_width * box_height,\n \"segmentation\": [polygon] if polygon else [],\n \"iscrowd\": 0,\n }\n coco_annotations.append(coco_annotation)\n annotation_id += 1\n return coco_annotations, annotation_id\n\n\ndef load_coco_annotations(\n images_directory_path: str,\n annotations_path: str,\n force_masks: bool = False,\n) -> Tuple[List[str], Dict[str, np.ndarray], Dict[str, Detections]]:\n coco_data = read_json_file(file_path=annotations_path)\n classes = coco_categories_to_classes(coco_categories=coco_data[\"categories\"])\n class_index_mapping = build_coco_class_index_mapping(\n coco_categories=coco_data[\"categories\"], target_classes=classes\n )\n coco_images = coco_data[\"images\"]\n coco_annotations_groups = group_coco_annotations_by_image_id(\n coco_annotations=coco_data[\"annotations\"]\n )\n\n images = {}\n annotations = {}\n\n for coco_image in coco_images:\n image_name, image_width, image_height = (\n coco_image[\"file_name\"],\n coco_image[\"width\"],\n coco_image[\"height\"],\n )\n image_annotations = coco_annotations_groups.get(coco_image[\"id\"], [])\n image_path = os.path.join(images_directory_path, image_name)\n\n image = cv2.imread(str(image_path))\n annotation = coco_annotations_to_detections(\n image_annotations=image_annotations,\n resolution_wh=(image_width, image_height),\n with_masks=force_masks,\n )\n annotation = map_detections_class_id(\n source_to_target_mapping=class_index_mapping,\n detections=annotation,\n )\n\n images[image_name] = image\n annotations[image_name] = annotation\n\n return classes, images, annotations\n\n\ndef save_coco_annotations(\n annotation_path: str,\n images: Dict[str, np.ndarray],\n annotations: Dict[str, Detections],\n classes: List[str],\n 
min_image_area_percentage: float = 0.0,\n max_image_area_percentage: float = 1.0,\n approximation_percentage: float = 0.75,\n) -> None:\n Path(annotation_path).parent.mkdir(parents=True, exist_ok=True)\n info = {}\n licenses = [\n {\n \"id\": 1,\n \"url\": \"https://creativecommons.org/licenses/by/4.0/\",\n \"name\": \"CC BY 4.0\",\n }\n ]\n\n coco_annotations = []\n coco_images = []\n coco_categories = classes_to_coco_categories(classes=classes)\n\n image_id, annotation_id = 1, 1\n for image_name, image in images.items():\n image_height, image_width, _ = image.shape\n\n coco_image = {\n \"id\": image_id,\n \"license\": 1,\n \"file_name\": image_name,\n \"height\": image_height,\n \"width\": image_width,\n \"date_captured\": datetime.now().strftime(\"%m/%d/%Y,%H:%M:%S\"),\n }\n\n coco_images.append(coco_image)\n detections = annotations[image_name]\n\n coco_annotation, label_id = detections_to_coco_annotations(\n detections=detections,\n image_id=image_id,\n annotation_id=annotation_id,\n min_image_area_percentage=min_image_area_percentage,\n max_image_area_percentage=max_image_area_percentage,\n approximation_percentage=approximation_percentage,\n )\n\n coco_annotations.extend(coco_annotation)\n image_id += 1\n\n annotation_dict = {\n \"info\": info,\n \"licenses\": licenses,\n \"categories\": coco_categories,\n \"images\": coco_images,\n \"annotations\": coco_annotations,\n }\n save_json_file(annotation_dict, file_path=annotation_path)\n", "path": "supervision/dataset/formats/coco.py"}], "after_files": [{"content": "import os\nfrom datetime import datetime\nfrom pathlib import Path\nfrom typing import Dict, List, Tuple\n\nimport cv2\nimport numpy as np\n\nfrom supervision.dataset.utils import (\n approximate_mask_with_polygons,\n map_detections_class_id,\n)\nfrom supervision.detection.core import Detections\nfrom supervision.detection.utils import polygon_to_mask\nfrom supervision.utils.file import read_json_file, save_json_file\n\n\ndef coco_categories_to_classes(coco_categories: List[dict]) -> List[str]:\n return [\n category[\"name\"]\n for category in sorted(coco_categories, key=lambda category: category[\"id\"])\n ]\n\n\ndef build_coco_class_index_mapping(\n coco_categories: List[dict], target_classes: List[str]\n) -> Dict[int, int]:\n source_class_to_index = {\n category[\"name\"]: category[\"id\"] for category in coco_categories\n }\n return {\n source_class_to_index[target_class_name]: target_class_index\n for target_class_index, target_class_name in enumerate(target_classes)\n }\n\n\ndef classes_to_coco_categories(classes: List[str]) -> List[dict]:\n return [\n {\n \"id\": class_id,\n \"name\": class_name,\n \"supercategory\": \"common-objects\",\n }\n for class_id, class_name in enumerate(classes)\n ]\n\n\ndef group_coco_annotations_by_image_id(\n coco_annotations: List[dict],\n) -> Dict[int, List[dict]]:\n annotations = {}\n for annotation in coco_annotations:\n image_id = annotation[\"image_id\"]\n if image_id not in annotations:\n annotations[image_id] = []\n annotations[image_id].append(annotation)\n return annotations\n\n\ndef _polygons_to_masks(\n polygons: List[np.ndarray], resolution_wh: Tuple[int, int]\n) -> np.ndarray:\n return np.array(\n [\n polygon_to_mask(polygon=polygon, resolution_wh=resolution_wh)\n for polygon in polygons\n ],\n dtype=bool,\n )\n\n\ndef coco_annotations_to_detections(\n image_annotations: List[dict], resolution_wh: Tuple[int, int], with_masks: bool\n) -> Detections:\n if not image_annotations:\n return Detections.empty()\n\n class_ids = [\n 
image_annotation[\"category_id\"] for image_annotation in image_annotations\n ]\n xyxy = [image_annotation[\"bbox\"] for image_annotation in image_annotations]\n xyxy = np.asarray(xyxy)\n xyxy[:, 2:4] += xyxy[:, 0:2]\n\n if with_masks:\n polygons = [\n np.reshape(\n np.asarray(image_annotation[\"segmentation\"], dtype=np.int32), (-1, 2)\n )\n for image_annotation in image_annotations\n ]\n mask = _polygons_to_masks(polygons=polygons, resolution_wh=resolution_wh)\n return Detections(\n class_id=np.asarray(class_ids, dtype=int), xyxy=xyxy, mask=mask\n )\n\n return Detections(xyxy=xyxy, class_id=np.asarray(class_ids, dtype=int))\n\n\ndef detections_to_coco_annotations(\n detections: Detections,\n image_id: int,\n annotation_id: int,\n min_image_area_percentage: float = 0.0,\n max_image_area_percentage: float = 1.0,\n approximation_percentage: float = 0.75,\n) -> Tuple[List[Dict], int]:\n coco_annotations = []\n for xyxy, mask, _, class_id, _ in detections:\n box_width, box_height = xyxy[2] - xyxy[0], xyxy[3] - xyxy[1]\n polygon = []\n if mask is not None:\n polygon = list(\n approximate_mask_with_polygons(\n mask=mask,\n min_image_area_percentage=min_image_area_percentage,\n max_image_area_percentage=max_image_area_percentage,\n approximation_percentage=approximation_percentage,\n )[0].flatten()\n )\n coco_annotation = {\n \"id\": annotation_id,\n \"image_id\": image_id,\n \"category_id\": int(class_id),\n \"bbox\": [xyxy[0], xyxy[1], box_width, box_height],\n \"area\": box_width * box_height,\n \"segmentation\": [polygon] if polygon else [],\n \"iscrowd\": 0,\n }\n coco_annotations.append(coco_annotation)\n annotation_id += 1\n return coco_annotations, annotation_id\n\n\ndef load_coco_annotations(\n images_directory_path: str,\n annotations_path: str,\n force_masks: bool = False,\n) -> Tuple[List[str], Dict[str, np.ndarray], Dict[str, Detections]]:\n coco_data = read_json_file(file_path=annotations_path)\n classes = coco_categories_to_classes(coco_categories=coco_data[\"categories\"])\n class_index_mapping = build_coco_class_index_mapping(\n coco_categories=coco_data[\"categories\"], target_classes=classes\n )\n coco_images = coco_data[\"images\"]\n coco_annotations_groups = group_coco_annotations_by_image_id(\n coco_annotations=coco_data[\"annotations\"]\n )\n\n images = {}\n annotations = {}\n\n for coco_image in coco_images:\n image_name, image_width, image_height = (\n coco_image[\"file_name\"],\n coco_image[\"width\"],\n coco_image[\"height\"],\n )\n image_annotations = coco_annotations_groups.get(coco_image[\"id\"], [])\n image_path = os.path.join(images_directory_path, image_name)\n\n image = cv2.imread(str(image_path))\n annotation = coco_annotations_to_detections(\n image_annotations=image_annotations,\n resolution_wh=(image_width, image_height),\n with_masks=force_masks,\n )\n annotation = map_detections_class_id(\n source_to_target_mapping=class_index_mapping,\n detections=annotation,\n )\n\n images[image_name] = image\n annotations[image_name] = annotation\n\n return classes, images, annotations\n\n\ndef save_coco_annotations(\n annotation_path: str,\n images: Dict[str, np.ndarray],\n annotations: Dict[str, Detections],\n classes: List[str],\n min_image_area_percentage: float = 0.0,\n max_image_area_percentage: float = 1.0,\n approximation_percentage: float = 0.75,\n) -> None:\n Path(annotation_path).parent.mkdir(parents=True, exist_ok=True)\n info = {}\n licenses = [\n {\n \"id\": 1,\n \"url\": \"https://creativecommons.org/licenses/by/4.0/\",\n \"name\": \"CC BY 4.0\",\n }\n 
]\n\n coco_annotations = []\n coco_images = []\n coco_categories = classes_to_coco_categories(classes=classes)\n\n image_id, annotation_id = 1, 1\n for image_name, image in images.items():\n image_height, image_width, _ = image.shape\n\n coco_image = {\n \"id\": image_id,\n \"license\": 1,\n \"file_name\": image_name,\n \"height\": image_height,\n \"width\": image_width,\n \"date_captured\": datetime.now().strftime(\"%m/%d/%Y,%H:%M:%S\"),\n }\n\n coco_images.append(coco_image)\n detections = annotations[image_name]\n\n coco_annotation, annotation_id = detections_to_coco_annotations(\n detections=detections,\n image_id=image_id,\n annotation_id=annotation_id,\n min_image_area_percentage=min_image_area_percentage,\n max_image_area_percentage=max_image_area_percentage,\n approximation_percentage=approximation_percentage,\n )\n\n coco_annotations.extend(coco_annotation)\n image_id += 1\n\n annotation_dict = {\n \"info\": info,\n \"licenses\": licenses,\n \"categories\": coco_categories,\n \"images\": coco_images,\n \"annotations\": coco_annotations,\n }\n save_json_file(annotation_dict, file_path=annotation_path)\n", "path": "supervision/dataset/formats/coco.py"}]} | 3,931 | 132 |
gh_patches_debug_65041 | rasdani/github-patches | git_diff | PokemonGoF__PokemonGo-Bot-4931 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Error in Telegram: "AttributeError: 'module' object has no attribute 'now'"
### Expected Behavior
<!-- Tell us what you expect to happen -->
Bot running with Telegram enabled
### Actual Behavior
<!-- Tell us what is happening -->
Bot not starting due to error message
### Your FULL config.json (remove your username, password, gmapkey and any other private info)
<!-- Provide your FULL config file, feel free to use services such as pastebin.com to reduce clutter -->
http://pastebin.com/5nQC2ceh
### Output when issue occurred
<!-- Provide a reasonable sample from your output log (not just the error message), feel free to use services such as pastebin.com to reduce clutter -->
Traceback (most recent call last):
File "pokecli.py", line 781, in <module>
main()
File "pokecli.py", line 128, in main
bot = start_bot(bot, config)
File "pokecli.py", line 88, in start_bot
initialize_task(bot, config)
File "pokecli.py", line 79, in initialize_task
tree = TreeConfigBuilder(bot, config.raw_tasks).build()
File "/PokemonGo-Bot/pokemongo_bot/tree_config_builder.py", line 79, in build
instance = worker(self.bot, task_config)
File "/PokemonGo-Bot/pokemongo_bot/base_task.py", line 23, in **init**
self.initialize()
File "/PokemonGo-Bot/pokemongo_bot/cell_workers/telegram_task.py", line 42, in initialize
self.next_job=datetime.now() + timedelta(seconds=self.min_interval)
AttributeError: 'module' object has no attribute 'now'
### Steps to Reproduce
<!-- Tell us the steps you have taken to reproduce the issue -->
Start the bot with the above config.
### Other Information
OS: CentOS
<!-- Tell us what Operating system you're using -->
Branch: dev
<!-- dev or master -->
Git Commit: 9e81c6ed90d79e181599ec7f0a0cfa2ecd4d09f5
<!-- run 'git log -n 1 --pretty=format:"%H"' -->
Python Version: Python 2.7.5
<!-- run 'python -V' and paste it here) -->
Any other relevant files/configs (eg: path files)
<!-- Anything else which may be of relevance -->
<!-- ===============END OF ISSUE SECTION=============== -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pokemongo_bot/cell_workers/telegram_task.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import datetime
3 import telegram
4 import os
5 import logging
6 import json
7 from pokemongo_bot.base_task import BaseTask
8 from pokemongo_bot.base_dir import _base_dir
9 from pokemongo_bot.event_handlers import TelegramHandler
10
11 from pprint import pprint
12 import re
13
14 class FileIOException(Exception):
15 pass
16
17 class TelegramTask(BaseTask):
18 SUPPORTED_TASK_API_VERSION = 1
19 update_id = None
20 tbot = None
21 min_interval=None
22 next_job=None
23
24 def initialize(self):
25 if not self.enabled:
26 return
27 api_key = self.bot.config.telegram_token
28 if api_key == None:
29 self.emit_event(
30 'config_error',
31 formatted='api_key not defined.'
32 )
33 return
34 self.tbot = telegram.Bot(api_key)
35 if self.config.get('master',None):
36 self.bot.event_manager.add_handler(TelegramHandler(self.tbot,self.config.get('master',None),self.config.get('alert_catch')))
37 try:
38 self.update_id = self.tbot.getUpdates()[0].update_id
39 except IndexError:
40 self.update_id = None
41 self.min_interval=self.config.get('min_interval',120)
42 self.next_job=datetime.now() + timedelta(seconds=self.min_interval)
43 def work(self):
44 if not self.enabled:
45 return
46 if datetime.now()<self.next_job:
47 return
48 self.next_job=datetime.now() + timedelta(seconds=self.min_interval)
49 for update in self.tbot.getUpdates(offset=self.update_id, timeout=10):
50 self.update_id = update.update_id+1
51 if update.message:
52 self.bot.logger.info("message from {} ({}): {}".format(update.message.from_user.username, update.message.from_user.id, update.message.text))
53 if self.config.get('master',None) and self.config.get('master',None) not in [update.message.from_user.id, "@{}".format(update.message.from_user.username)]:
54 self.emit_event(
55 'debug',
56 formatted="Master wrong: expecting {}, got {}({})".format(self.config.get('master',None), update.message.from_user.username, update.message.from_user.id))
57 continue
58 else:
59 if not re.match(r'^[0-9]+$', "{}".format(self.config['master'])): # master was not numeric...
60 self.config['master'] = update.message.chat_id
61 idx = (i for i,v in enumerate(self.bot.event_manager._handlers) if type(v) is TelegramHandler).next()
62 self.bot.event_manager._handlers[idx] = TelegramHandler(self.tbot,self.config['master'], self.config.get('alert_catch'))
63
64
65
66 if update.message.text == "/info":
67 stats = self._get_player_stats()
68 if stats:
69 with self.bot.database as conn:
70 cur = conn.cursor()
71 cur.execute("SELECT DISTINCT COUNT(encounter_id) FROM catch_log WHERE dated >= datetime('now','-1 day')")
72 catch_day = cur.fetchone()[0]
73 cur.execute("SELECT DISTINCT COUNT(pokestop) FROM pokestop_log WHERE dated >= datetime('now','-1 day')")
74 ps_day = cur.fetchone()[0]
75 res = (
76 "*"+self.bot.config.username+"*",
77 "_Level:_ "+str(stats["level"]),
78 "_XP:_ "+str(stats["experience"])+"/"+str(stats["next_level_xp"]),
79 "_Pokemons Captured:_ "+str(stats["pokemons_captured"])+" ("+str(catch_day)+" _last 24h_)",
80 "_Poke Stop Visits:_ "+str(stats["poke_stop_visits"])+" ("+str(ps_day)+" _last 24h_)",
81 "_KM Walked:_ "+str(stats["km_walked"])
82 )
83 self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text="\n".join(res))
84 self.tbot.send_location(chat_id=update.message.chat_id, latitude=self.bot.api._position_lat, longitude=self.bot.api._position_lng)
85 else:
86 self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text="Stats not loaded yet\n")
87 elif update.message.text == "/start" or update.message.text == "/help":
88 res = (
89 "Commands: ",
90 "/info - info about bot"
91 )
92 self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text="\n".join(res))
93
94 def _get_player_stats(self):
95 """
96 Helper method parsing the bot inventory object and returning the player stats object.
97 :return: The player stats object.
98 :rtype: dict
99 """
100 web_inventory = os.path.join(_base_dir, "web", "inventory-%s.json" % self.bot.config.username)
101
102 try:
103 with open(web_inventory, "r") as infile:
104 json_inventory = json.load(infile)
105 except ValueError as e:
106 # Unable to read json from web inventory
107 # File may be corrupt. Create a new one.
108 self.bot.logger.info('[x] Error while opening inventory file for read: %s' % e)
109 json_inventory = []
110 except:
111 raise FileIOException("Unexpected error reading from {}".web_inventory)
112
113 return next((x["inventory_item_data"]["player_stats"]
114 for x in json_inventory
115 if x.get("inventory_item_data", {}).get("player_stats", {})),
116 None)
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pokemongo_bot/cell_workers/telegram_task.py b/pokemongo_bot/cell_workers/telegram_task.py
--- a/pokemongo_bot/cell_workers/telegram_task.py
+++ b/pokemongo_bot/cell_workers/telegram_task.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
-import datetime
+from datetime import datetime
+from datetime import timedelta
import telegram
import os
import logging
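
The patch above resolves the `AttributeError` because `import datetime` binds the *module*, while `now` is defined on the `datetime.datetime` *class*; the original file also used `timedelta` without importing it, which is why the diff adds both imports. A minimal, standalone illustration of the distinction — not PokemonGo-Bot code:

```python
import datetime as datetime_module
from datetime import datetime, timedelta

# The module object has no `now`; the class inside it does.
assert not hasattr(datetime_module, "now")
assert hasattr(datetime_module.datetime, "now")

# With the class (and timedelta) imported directly, the failing line
# from telegram_task.py works as intended:
min_interval = 120
next_job = datetime.now() + timedelta(seconds=min_interval)
print(next_job)
```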
| {"golden_diff": "diff --git a/pokemongo_bot/cell_workers/telegram_task.py b/pokemongo_bot/cell_workers/telegram_task.py\n--- a/pokemongo_bot/cell_workers/telegram_task.py\n+++ b/pokemongo_bot/cell_workers/telegram_task.py\n@@ -1,5 +1,6 @@\n # -*- coding: utf-8 -*-\n-import datetime\n+from datetime import datetime\n+from datetime import timedelta\n import telegram\n import os\n import logging\n", "issue": "[BUG] Error in Telegram: \"AttributeError: 'module' object has no attribute 'now'\"\n### Expected Behavior\n\n<!-- Tell us what you expect to happen -->\n\nBot running with Telegram enabled\n### Actual Behavior\n\n<!-- Tell us what is happening -->\n\nBot not starting due to error message\n### Your FULL config.json (remove your username, password, gmapkey and any other private info)\n\n<!-- Provide your FULL config file, feel free to use services such as pastebin.com to reduce clutter -->\n\nhttp://pastebin.com/5nQC2ceh\n### Output when issue occurred\n\n<!-- Provide a reasonable sample from your output log (not just the error message), feel free to use services such as pastebin.com to reduce clutter -->\n\nTraceback (most recent call last):\n File \"pokecli.py\", line 781, in <module>\n main()\n File \"pokecli.py\", line 128, in main\n bot = start_bot(bot, config)\n File \"pokecli.py\", line 88, in start_bot\n initialize_task(bot, config)\n File \"pokecli.py\", line 79, in initialize_task\n tree = TreeConfigBuilder(bot, config.raw_tasks).build()\n File \"/PokemonGo-Bot/pokemongo_bot/tree_config_builder.py\", line 79, in build\n instance = worker(self.bot, task_config)\n File \"/PokemonGo-Bot/pokemongo_bot/base_task.py\", line 23, in **init**\n self.initialize()\n File \"/PokemonGo-Bot/pokemongo_bot/cell_workers/telegram_task.py\", line 42, in initialize\n self.next_job=datetime.now() + timedelta(seconds=self.min_interval)\nAttributeError: 'module' object has no attribute 'now'\n### Steps to Reproduce\n\n<!-- Tell us the steps you have taken to reproduce the issue -->\n\nStart the bot with the above config.\n### Other Information\n\nOS: CentOS\n\n<!-- Tell us what Operating system you're using --> \n\nBranch: dev\n\n<!-- dev or master --> \n\nGit Commit: 9e81c6ed90d79e181599ec7f0a0cfa2ecd4d09f5\n\n<!-- run 'git log -n 1 --pretty=format:\"%H\"' --> \n\nPython Version: Python 2.7.5\n\n<!-- run 'python -V' and paste it here) --> \n\nAny other relevant files/configs (eg: path files) \n\n<!-- Anything else which may be of relevance -->\n\n<!-- ===============END OF ISSUE SECTION=============== -->\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport datetime\nimport telegram\nimport os\nimport logging\nimport json\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot.base_dir import _base_dir\nfrom pokemongo_bot.event_handlers import TelegramHandler\n\nfrom pprint import pprint\nimport re\n\nclass FileIOException(Exception):\n pass\n\nclass TelegramTask(BaseTask):\n SUPPORTED_TASK_API_VERSION = 1\n update_id = None\n tbot = None\n min_interval=None\n next_job=None\n \n def initialize(self):\n if not self.enabled:\n return\n api_key = self.bot.config.telegram_token\n if api_key == None:\n self.emit_event(\n 'config_error',\n formatted='api_key not defined.'\n )\n return\n self.tbot = telegram.Bot(api_key)\n if self.config.get('master',None):\n self.bot.event_manager.add_handler(TelegramHandler(self.tbot,self.config.get('master',None),self.config.get('alert_catch')))\n try:\n self.update_id = self.tbot.getUpdates()[0].update_id\n except IndexError:\n self.update_id = 
None\n self.min_interval=self.config.get('min_interval',120)\n self.next_job=datetime.now() + timedelta(seconds=self.min_interval)\n def work(self):\n if not self.enabled:\n return\n if datetime.now()<self.next_job:\n return\n self.next_job=datetime.now() + timedelta(seconds=self.min_interval)\n for update in self.tbot.getUpdates(offset=self.update_id, timeout=10):\n self.update_id = update.update_id+1\n if update.message:\n self.bot.logger.info(\"message from {} ({}): {}\".format(update.message.from_user.username, update.message.from_user.id, update.message.text))\n if self.config.get('master',None) and self.config.get('master',None) not in [update.message.from_user.id, \"@{}\".format(update.message.from_user.username)]:\n self.emit_event( \n 'debug', \n formatted=\"Master wrong: expecting {}, got {}({})\".format(self.config.get('master',None), update.message.from_user.username, update.message.from_user.id))\n continue\n else:\n if not re.match(r'^[0-9]+$', \"{}\".format(self.config['master'])): # master was not numeric...\n self.config['master'] = update.message.chat_id\n idx = (i for i,v in enumerate(self.bot.event_manager._handlers) if type(v) is TelegramHandler).next()\n self.bot.event_manager._handlers[idx] = TelegramHandler(self.tbot,self.config['master'], self.config.get('alert_catch'))\n \n\n\n if update.message.text == \"/info\":\n stats = self._get_player_stats()\n if stats:\n with self.bot.database as conn:\n cur = conn.cursor()\n cur.execute(\"SELECT DISTINCT COUNT(encounter_id) FROM catch_log WHERE dated >= datetime('now','-1 day')\")\n catch_day = cur.fetchone()[0]\n cur.execute(\"SELECT DISTINCT COUNT(pokestop) FROM pokestop_log WHERE dated >= datetime('now','-1 day')\")\n ps_day = cur.fetchone()[0]\n res = (\n \"*\"+self.bot.config.username+\"*\",\n \"_Level:_ \"+str(stats[\"level\"]),\n \"_XP:_ \"+str(stats[\"experience\"])+\"/\"+str(stats[\"next_level_xp\"]),\n \"_Pokemons Captured:_ \"+str(stats[\"pokemons_captured\"])+\" (\"+str(catch_day)+\" _last 24h_)\",\n \"_Poke Stop Visits:_ \"+str(stats[\"poke_stop_visits\"])+\" (\"+str(ps_day)+\" _last 24h_)\",\n \"_KM Walked:_ \"+str(stats[\"km_walked\"])\n )\n self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text=\"\\n\".join(res))\n self.tbot.send_location(chat_id=update.message.chat_id, latitude=self.bot.api._position_lat, longitude=self.bot.api._position_lng)\n else:\n self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text=\"Stats not loaded yet\\n\")\n elif update.message.text == \"/start\" or update.message.text == \"/help\":\n res = (\n \"Commands: \",\n \"/info - info about bot\"\n )\n self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text=\"\\n\".join(res))\n\n def _get_player_stats(self):\n \"\"\"\n Helper method parsing the bot inventory object and returning the player stats object.\n :return: The player stats object.\n :rtype: dict\n \"\"\"\n web_inventory = os.path.join(_base_dir, \"web\", \"inventory-%s.json\" % self.bot.config.username)\n \n try:\n with open(web_inventory, \"r\") as infile:\n json_inventory = json.load(infile)\n except ValueError as e:\n # Unable to read json from web inventory\n # File may be corrupt. Create a new one. 
\n self.bot.logger.info('[x] Error while opening inventory file for read: %s' % e)\n json_inventory = []\n except:\n raise FileIOException(\"Unexpected error reading from {}\".web_inventory)\n \n return next((x[\"inventory_item_data\"][\"player_stats\"]\n for x in json_inventory\n if x.get(\"inventory_item_data\", {}).get(\"player_stats\", {})),\n None)\n", "path": "pokemongo_bot/cell_workers/telegram_task.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom datetime import datetime\nfrom datetime import timedelta\nimport telegram\nimport os\nimport logging\nimport json\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot.base_dir import _base_dir\nfrom pokemongo_bot.event_handlers import TelegramHandler\n\nfrom pprint import pprint\nimport re\n\nclass FileIOException(Exception):\n pass\n\nclass TelegramTask(BaseTask):\n SUPPORTED_TASK_API_VERSION = 1\n update_id = None\n tbot = None\n min_interval=None\n next_job=None\n \n def initialize(self):\n if not self.enabled:\n return\n api_key = self.bot.config.telegram_token\n if api_key == None:\n self.emit_event(\n 'config_error',\n formatted='api_key not defined.'\n )\n return\n self.tbot = telegram.Bot(api_key)\n if self.config.get('master',None):\n self.bot.event_manager.add_handler(TelegramHandler(self.tbot,self.config.get('master',None),self.config.get('alert_catch')))\n try:\n self.update_id = self.tbot.getUpdates()[0].update_id\n except IndexError:\n self.update_id = None\n self.min_interval=self.config.get('min_interval',120)\n self.next_job=datetime.now() + timedelta(seconds=self.min_interval)\n def work(self):\n if not self.enabled:\n return\n if datetime.now()<self.next_job:\n return\n self.next_job=datetime.now() + timedelta(seconds=self.min_interval)\n for update in self.tbot.getUpdates(offset=self.update_id, timeout=10):\n self.update_id = update.update_id+1\n if update.message:\n self.bot.logger.info(\"message from {} ({}): {}\".format(update.message.from_user.username, update.message.from_user.id, update.message.text))\n if self.config.get('master',None) and self.config.get('master',None) not in [update.message.from_user.id, \"@{}\".format(update.message.from_user.username)]:\n self.emit_event( \n 'debug', \n formatted=\"Master wrong: expecting {}, got {}({})\".format(self.config.get('master',None), update.message.from_user.username, update.message.from_user.id))\n continue\n else:\n if not re.match(r'^[0-9]+$', \"{}\".format(self.config['master'])): # master was not numeric...\n self.config['master'] = update.message.chat_id\n idx = (i for i,v in enumerate(self.bot.event_manager._handlers) if type(v) is TelegramHandler).next()\n self.bot.event_manager._handlers[idx] = TelegramHandler(self.tbot,self.config['master'], self.config.get('alert_catch'))\n \n\n\n if update.message.text == \"/info\":\n stats = self._get_player_stats()\n if stats:\n with self.bot.database as conn:\n cur = conn.cursor()\n cur.execute(\"SELECT DISTINCT COUNT(encounter_id) FROM catch_log WHERE dated >= datetime('now','-1 day')\")\n catch_day = cur.fetchone()[0]\n cur.execute(\"SELECT DISTINCT COUNT(pokestop) FROM pokestop_log WHERE dated >= datetime('now','-1 day')\")\n ps_day = cur.fetchone()[0]\n res = (\n \"*\"+self.bot.config.username+\"*\",\n \"_Level:_ \"+str(stats[\"level\"]),\n \"_XP:_ \"+str(stats[\"experience\"])+\"/\"+str(stats[\"next_level_xp\"]),\n \"_Pokemons Captured:_ \"+str(stats[\"pokemons_captured\"])+\" (\"+str(catch_day)+\" _last 24h_)\",\n \"_Poke Stop Visits:_ \"+str(stats[\"poke_stop_visits\"])+\" 
(\"+str(ps_day)+\" _last 24h_)\",\n \"_KM Walked:_ \"+str(stats[\"km_walked\"])\n )\n self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text=\"\\n\".join(res))\n self.tbot.send_location(chat_id=update.message.chat_id, latitude=self.bot.api._position_lat, longitude=self.bot.api._position_lng)\n else:\n self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text=\"Stats not loaded yet\\n\")\n elif update.message.text == \"/start\" or update.message.text == \"/help\":\n res = (\n \"Commands: \",\n \"/info - info about bot\"\n )\n self.tbot.sendMessage(chat_id=update.message.chat_id, parse_mode='Markdown', text=\"\\n\".join(res))\n\n def _get_player_stats(self):\n \"\"\"\n Helper method parsing the bot inventory object and returning the player stats object.\n :return: The player stats object.\n :rtype: dict\n \"\"\"\n web_inventory = os.path.join(_base_dir, \"web\", \"inventory-%s.json\" % self.bot.config.username)\n \n try:\n with open(web_inventory, \"r\") as infile:\n json_inventory = json.load(infile)\n except ValueError as e:\n # Unable to read json from web inventory\n # File may be corrupt. Create a new one. \n self.bot.logger.info('[x] Error while opening inventory file for read: %s' % e)\n json_inventory = []\n except:\n raise FileIOException(\"Unexpected error reading from {}\".web_inventory)\n \n return next((x[\"inventory_item_data\"][\"player_stats\"]\n for x in json_inventory\n if x.get(\"inventory_item_data\", {}).get(\"player_stats\", {})),\n None)\n", "path": "pokemongo_bot/cell_workers/telegram_task.py"}]} | 2,204 | 102 |
gh_patches_debug_6721 | rasdani/github-patches | git_diff | microsoft__Qcodes-5565 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot exit ipython with running qcodes.Monitor()
### Steps to reproduce
IPython version 8.10, qcodes version 0.42, Windows 10 Pro 22H2

1. Open IPython.
2. Run `from qcodes import Monitor` followed by `monitor = Monitor()`.
3. Exit IPython.

IPython hangs instead of exiting; the terminal must be terminated.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/qcodes/monitor/monitor.py`
Content:
```
1 #! /usr/bin/env python
2 # vim:fenc=utf-8
3 #
4 # Copyright © 2017 unga <[email protected]>
5 #
6 # Distributed under terms of the MIT license.
7 """
8 Monitor a set of parameters in a background thread
9 stream output over websocket
10
11 To start monitor, run this file, or if qcodes is installed as a module:
12
13 ``% python -m qcodes.monitor.monitor``
14
15 Add parameters to monitor in your measurement by creating a new monitor with a
16 list of parameters to monitor:
17
18 ``monitor = qcodes.Monitor(param1, param2, param3, ...)``
19 """
20 from __future__ import annotations
21
22 import asyncio
23 import json
24 import logging
25 import os
26 import socketserver
27 import time
28 import webbrowser
29 from asyncio import CancelledError
30 from collections import defaultdict
31 from collections.abc import Awaitable, Sequence
32 from contextlib import suppress
33 from importlib.resources import as_file, files
34 from threading import Event, Thread
35 from typing import Any, Callable
36
37 import websockets
38 import websockets.exceptions
39 import websockets.server
40
41 from qcodes.parameters import Parameter
42
43 WEBSOCKET_PORT = 5678
44 SERVER_PORT = 3000
45
46 log = logging.getLogger(__name__)
47
48
49 def _get_metadata(
50 *parameters: Parameter, use_root_instrument: bool = True
51 ) -> dict[str, Any]:
52 """
53 Return a dictionary that contains the parameter metadata grouped by the
54 instrument it belongs to.
55 """
56 metadata_timestamp = time.time()
57 # group metadata by instrument
58 metas: dict[Any, Any] = defaultdict(list)
59 for parameter in parameters:
60 # Get the latest value from the parameter,
61 # respecting the max_val_age parameter
62 meta: dict[str, float | str | None] = {}
63 meta["value"] = str(parameter.get_latest())
64 timestamp = parameter.get_latest.get_timestamp()
65 if timestamp is not None:
66 meta["ts"] = timestamp.timestamp()
67 else:
68 meta["ts"] = None
69 meta["name"] = parameter.label or parameter.name
70 meta["unit"] = parameter.unit
71
72 # find the base instrument that this parameter belongs to
73 if use_root_instrument:
74 baseinst = parameter.root_instrument
75 else:
76 baseinst = parameter.instrument
77 if baseinst is None:
78 metas["Unbound Parameter"].append(meta)
79 else:
80 metas[str(baseinst)].append(meta)
81
82 # Create list of parameters, grouped by instrument
83 parameters_out = []
84 for instrument in metas:
85 temp = {"instrument": instrument, "parameters": metas[instrument]}
86 parameters_out.append(temp)
87
88 state = {"ts": metadata_timestamp, "parameters": parameters_out}
89 return state
90
91
92 def _handler(
93 parameters: Sequence[Parameter], interval: float, use_root_instrument: bool = True
94 ) -> Callable[[websockets.server.WebSocketServerProtocol, str], Awaitable[None]]:
95 """
96 Return the websockets server handler.
97 """
98
99 async def server_func(
100 websocket: websockets.server.WebSocketServerProtocol, _: str
101 ) -> None:
102 """
103 Create a websockets handler that sends parameter values to a listener
104 every "interval" seconds.
105 """
106 while True:
107 try:
108 # Update the parameter values
109 try:
110 meta = _get_metadata(
111 *parameters, use_root_instrument=use_root_instrument
112 )
113 except ValueError:
114 log.exception("Error getting parameters")
115 break
116 log.debug("sending.. to %r", websocket)
117 await websocket.send(json.dumps(meta))
118 # Wait for interval seconds and then send again
119 await asyncio.sleep(interval)
120 except (CancelledError, websockets.exceptions.ConnectionClosed):
121 log.debug("Got CancelledError or ConnectionClosed",
122 exc_info=True)
123 break
124 log.debug("Closing websockets connection")
125
126 return server_func
127
128
129 class Monitor(Thread):
130 """
131 QCodes Monitor - WebSockets server to monitor qcodes parameters.
132 """
133 running = None
134
135 def __init__(
136 self,
137 *parameters: Parameter,
138 interval: float = 1,
139 use_root_instrument: bool = True,
140 ):
141 """
142 Monitor qcodes parameters.
143
144 Args:
145 *parameters: Parameters to monitor.
146 interval: How often one wants to refresh the values.
147 use_root_instrument: Defines if parameters are grouped according to
148 parameter.root_instrument or parameter.instrument
149 """
150 super().__init__()
151
152 # Check that all values are valid parameters
153 for parameter in parameters:
154 if not isinstance(parameter, Parameter):
155 raise TypeError(f"We can only monitor QCodes "
156 f"Parameters, not {type(parameter)}")
157
158 self.loop: asyncio.AbstractEventLoop | None = None
159 self._stop_loop_future: asyncio.Future | None = None
160 self._parameters = parameters
161 self.loop_is_closed = Event()
162 self.server_is_started = Event()
163 self.handler = _handler(
164 parameters, interval=interval, use_root_instrument=use_root_instrument
165 )
166 log.debug("Start monitoring thread")
167 if Monitor.running:
168 # stop the old server
169 log.debug("Stopping and restarting server")
170 Monitor.running.stop()
171 self.start()
172
173 # Wait until the loop is running
174 self.server_is_started.wait(timeout=5)
175 if not self.server_is_started.is_set():
176 raise RuntimeError("Failed to start server")
177 Monitor.running = self
178
179 def run(self) -> None:
180 """
181 Start the event loop and run forever.
182 """
183 log.debug("Running Websocket server")
184
185 async def run_loop() -> None:
186 self.loop = asyncio.get_running_loop()
187 self._stop_loop_future = self.loop.create_future()
188
189 async with websockets.server.serve(
190 self.handler, "127.0.0.1", WEBSOCKET_PORT, close_timeout=1
191 ):
192 self.server_is_started.set()
193 try:
194 await self._stop_loop_future
195 except asyncio.CancelledError:
196 log.debug("Websocket server thread shutting down")
197
198 try:
199 asyncio.run(run_loop())
200 finally:
201 self.loop_is_closed.set()
202
203 def update_all(self) -> None:
204 """
205 Update all parameters in the monitor.
206 """
207 for parameter in self._parameters:
208 # call get if it can be called without arguments
209 with suppress(TypeError):
210 parameter.get()
211
212 def stop(self) -> None:
213 """
214 Shutdown the server, close the event loop and join the thread.
215 Setting active Monitor to ``None``.
216 """
217 self.join()
218 Monitor.running = None
219
220 def join(self, timeout: float | None = None) -> None:
221 """
222 Overwrite ``Thread.join`` to make sure server is stopped before
223 joining avoiding a potential deadlock.
224 """
225 log.debug("Shutting down server")
226 if not self.is_alive():
227 # we run this check before trying to run to prevent a cryptic
228 # error message
229 log.debug("monitor is dead")
230 return
231 try:
232 if self.loop is not None and self._stop_loop_future is not None:
233 log.debug("Instructing server to stop event loop.")
234 self.loop.call_soon_threadsafe(self._stop_loop_future.cancel)
235 else:
236 log.debug("No event loop found. Cannot stop event loop.")
237 except RuntimeError:
238 # the above may throw a runtime error if the loop is already
239 # stopped in which case there is nothing more to do
240 log.exception("Could not close loop")
241 self.loop_is_closed.wait(timeout=5)
242 if not self.loop_is_closed.is_set():
243 raise RuntimeError("Failed to join loop")
244 log.debug("Loop reported closed")
245 super().join(timeout=timeout)
246 log.debug("Monitor Thread has joined")
247
248 @staticmethod
249 def show() -> None:
250 """
251 Overwrite this method to show/raise your monitor GUI
252 F.ex.
253
254 ::
255
256 import webbrowser
257 url = "localhost:3000"
258 # Open URL in new window, raising the window if possible.
259 webbrowser.open_new(url)
260
261 """
262 webbrowser.open(f"http://localhost:{SERVER_PORT}")
263
264
265 def main() -> None:
266 import http.server
267
268 # If this file is run, create a simple webserver that serves a simple
269 # website that can be used to view monitored parameters.
270 # # https://github.com/python/mypy/issues/4182
271 parent_module = ".".join(__loader__.name.split(".")[:-1]) # type: ignore[name-defined]
272
273 static_dir = files(parent_module).joinpath("dist")
274 try:
275 with as_file(static_dir) as extracted_dir:
276 os.chdir(extracted_dir)
277 log.info("Starting HTTP Server at http://localhost:%i", SERVER_PORT)
278 with socketserver.TCPServer(
279 ("", SERVER_PORT), http.server.SimpleHTTPRequestHandler
280 ) as httpd:
281 log.debug("serving directory %s", static_dir)
282 webbrowser.open(f"http://localhost:{SERVER_PORT}")
283 httpd.serve_forever()
284 except KeyboardInterrupt:
285 log.info("Shutting Down HTTP Server")
286
287
288 if __name__ == "__main__":
289 main()
290
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/qcodes/monitor/monitor.py b/src/qcodes/monitor/monitor.py
--- a/src/qcodes/monitor/monitor.py
+++ b/src/qcodes/monitor/monitor.py
@@ -147,7 +147,7 @@
use_root_instrument: Defines if parameters are grouped according to
parameter.root_instrument or parameter.instrument
"""
- super().__init__()
+ super().__init__(daemon=True)
# Check that all values are valid parameters
for parameter in parameters:
| {"golden_diff": "diff --git a/src/qcodes/monitor/monitor.py b/src/qcodes/monitor/monitor.py\n--- a/src/qcodes/monitor/monitor.py\n+++ b/src/qcodes/monitor/monitor.py\n@@ -147,7 +147,7 @@\n use_root_instrument: Defines if parameters are grouped according to\n parameter.root_instrument or parameter.instrument\n \"\"\"\n- super().__init__()\n+ super().__init__(daemon=True)\n \n # Check that all values are valid parameters\n for parameter in parameters:\n", "issue": "Cannot exit ipython with running qcodes.Monitor()\n### Steps to reproduce\r\niPython version 8.10, qcodes version 0.42, Windows 10 Pro 22H2\r\n\r\n1. open ipython\r\n2. from qcodes import Monitor\r\nmonitor = Monitor()\r\n3. exit ipython\r\n\r\nIt hangs instead of closing iPython. The terminal must be terminated.\r\n\n", "before_files": [{"content": "#! /usr/bin/env python\n# vim:fenc=utf-8\n#\n# Copyright \u00a9 2017 unga <[email protected]>\n#\n# Distributed under terms of the MIT license.\n\"\"\"\nMonitor a set of parameters in a background thread\nstream output over websocket\n\nTo start monitor, run this file, or if qcodes is installed as a module:\n\n``% python -m qcodes.monitor.monitor``\n\nAdd parameters to monitor in your measurement by creating a new monitor with a\nlist of parameters to monitor:\n\n``monitor = qcodes.Monitor(param1, param2, param3, ...)``\n\"\"\"\nfrom __future__ import annotations\n\nimport asyncio\nimport json\nimport logging\nimport os\nimport socketserver\nimport time\nimport webbrowser\nfrom asyncio import CancelledError\nfrom collections import defaultdict\nfrom collections.abc import Awaitable, Sequence\nfrom contextlib import suppress\nfrom importlib.resources import as_file, files\nfrom threading import Event, Thread\nfrom typing import Any, Callable\n\nimport websockets\nimport websockets.exceptions\nimport websockets.server\n\nfrom qcodes.parameters import Parameter\n\nWEBSOCKET_PORT = 5678\nSERVER_PORT = 3000\n\nlog = logging.getLogger(__name__)\n\n\ndef _get_metadata(\n *parameters: Parameter, use_root_instrument: bool = True\n) -> dict[str, Any]:\n \"\"\"\n Return a dictionary that contains the parameter metadata grouped by the\n instrument it belongs to.\n \"\"\"\n metadata_timestamp = time.time()\n # group metadata by instrument\n metas: dict[Any, Any] = defaultdict(list)\n for parameter in parameters:\n # Get the latest value from the parameter,\n # respecting the max_val_age parameter\n meta: dict[str, float | str | None] = {}\n meta[\"value\"] = str(parameter.get_latest())\n timestamp = parameter.get_latest.get_timestamp()\n if timestamp is not None:\n meta[\"ts\"] = timestamp.timestamp()\n else:\n meta[\"ts\"] = None\n meta[\"name\"] = parameter.label or parameter.name\n meta[\"unit\"] = parameter.unit\n\n # find the base instrument that this parameter belongs to\n if use_root_instrument:\n baseinst = parameter.root_instrument\n else:\n baseinst = parameter.instrument\n if baseinst is None:\n metas[\"Unbound Parameter\"].append(meta)\n else:\n metas[str(baseinst)].append(meta)\n\n # Create list of parameters, grouped by instrument\n parameters_out = []\n for instrument in metas:\n temp = {\"instrument\": instrument, \"parameters\": metas[instrument]}\n parameters_out.append(temp)\n\n state = {\"ts\": metadata_timestamp, \"parameters\": parameters_out}\n return state\n\n\ndef _handler(\n parameters: Sequence[Parameter], interval: float, use_root_instrument: bool = True\n) -> Callable[[websockets.server.WebSocketServerProtocol, str], Awaitable[None]]:\n \"\"\"\n Return the websockets 
server handler.\n \"\"\"\n\n async def server_func(\n websocket: websockets.server.WebSocketServerProtocol, _: str\n ) -> None:\n \"\"\"\n Create a websockets handler that sends parameter values to a listener\n every \"interval\" seconds.\n \"\"\"\n while True:\n try:\n # Update the parameter values\n try:\n meta = _get_metadata(\n *parameters, use_root_instrument=use_root_instrument\n )\n except ValueError:\n log.exception(\"Error getting parameters\")\n break\n log.debug(\"sending.. to %r\", websocket)\n await websocket.send(json.dumps(meta))\n # Wait for interval seconds and then send again\n await asyncio.sleep(interval)\n except (CancelledError, websockets.exceptions.ConnectionClosed):\n log.debug(\"Got CancelledError or ConnectionClosed\",\n exc_info=True)\n break\n log.debug(\"Closing websockets connection\")\n\n return server_func\n\n\nclass Monitor(Thread):\n \"\"\"\n QCodes Monitor - WebSockets server to monitor qcodes parameters.\n \"\"\"\n running = None\n\n def __init__(\n self,\n *parameters: Parameter,\n interval: float = 1,\n use_root_instrument: bool = True,\n ):\n \"\"\"\n Monitor qcodes parameters.\n\n Args:\n *parameters: Parameters to monitor.\n interval: How often one wants to refresh the values.\n use_root_instrument: Defines if parameters are grouped according to\n parameter.root_instrument or parameter.instrument\n \"\"\"\n super().__init__()\n\n # Check that all values are valid parameters\n for parameter in parameters:\n if not isinstance(parameter, Parameter):\n raise TypeError(f\"We can only monitor QCodes \"\n f\"Parameters, not {type(parameter)}\")\n\n self.loop: asyncio.AbstractEventLoop | None = None\n self._stop_loop_future: asyncio.Future | None = None\n self._parameters = parameters\n self.loop_is_closed = Event()\n self.server_is_started = Event()\n self.handler = _handler(\n parameters, interval=interval, use_root_instrument=use_root_instrument\n )\n log.debug(\"Start monitoring thread\")\n if Monitor.running:\n # stop the old server\n log.debug(\"Stopping and restarting server\")\n Monitor.running.stop()\n self.start()\n\n # Wait until the loop is running\n self.server_is_started.wait(timeout=5)\n if not self.server_is_started.is_set():\n raise RuntimeError(\"Failed to start server\")\n Monitor.running = self\n\n def run(self) -> None:\n \"\"\"\n Start the event loop and run forever.\n \"\"\"\n log.debug(\"Running Websocket server\")\n\n async def run_loop() -> None:\n self.loop = asyncio.get_running_loop()\n self._stop_loop_future = self.loop.create_future()\n\n async with websockets.server.serve(\n self.handler, \"127.0.0.1\", WEBSOCKET_PORT, close_timeout=1\n ):\n self.server_is_started.set()\n try:\n await self._stop_loop_future\n except asyncio.CancelledError:\n log.debug(\"Websocket server thread shutting down\")\n\n try:\n asyncio.run(run_loop())\n finally:\n self.loop_is_closed.set()\n\n def update_all(self) -> None:\n \"\"\"\n Update all parameters in the monitor.\n \"\"\"\n for parameter in self._parameters:\n # call get if it can be called without arguments\n with suppress(TypeError):\n parameter.get()\n\n def stop(self) -> None:\n \"\"\"\n Shutdown the server, close the event loop and join the thread.\n Setting active Monitor to ``None``.\n \"\"\"\n self.join()\n Monitor.running = None\n\n def join(self, timeout: float | None = None) -> None:\n \"\"\"\n Overwrite ``Thread.join`` to make sure server is stopped before\n joining avoiding a potential deadlock.\n \"\"\"\n log.debug(\"Shutting down server\")\n if not self.is_alive():\n # we run 
this check before trying to run to prevent a cryptic\n # error message\n log.debug(\"monitor is dead\")\n return\n try:\n if self.loop is not None and self._stop_loop_future is not None:\n log.debug(\"Instructing server to stop event loop.\")\n self.loop.call_soon_threadsafe(self._stop_loop_future.cancel)\n else:\n log.debug(\"No event loop found. Cannot stop event loop.\")\n except RuntimeError:\n # the above may throw a runtime error if the loop is already\n # stopped in which case there is nothing more to do\n log.exception(\"Could not close loop\")\n self.loop_is_closed.wait(timeout=5)\n if not self.loop_is_closed.is_set():\n raise RuntimeError(\"Failed to join loop\")\n log.debug(\"Loop reported closed\")\n super().join(timeout=timeout)\n log.debug(\"Monitor Thread has joined\")\n\n @staticmethod\n def show() -> None:\n \"\"\"\n Overwrite this method to show/raise your monitor GUI\n F.ex.\n\n ::\n\n import webbrowser\n url = \"localhost:3000\"\n # Open URL in new window, raising the window if possible.\n webbrowser.open_new(url)\n\n \"\"\"\n webbrowser.open(f\"http://localhost:{SERVER_PORT}\")\n\n\ndef main() -> None:\n import http.server\n\n # If this file is run, create a simple webserver that serves a simple\n # website that can be used to view monitored parameters.\n # # https://github.com/python/mypy/issues/4182\n parent_module = \".\".join(__loader__.name.split(\".\")[:-1]) # type: ignore[name-defined]\n\n static_dir = files(parent_module).joinpath(\"dist\")\n try:\n with as_file(static_dir) as extracted_dir:\n os.chdir(extracted_dir)\n log.info(\"Starting HTTP Server at http://localhost:%i\", SERVER_PORT)\n with socketserver.TCPServer(\n (\"\", SERVER_PORT), http.server.SimpleHTTPRequestHandler\n ) as httpd:\n log.debug(\"serving directory %s\", static_dir)\n webbrowser.open(f\"http://localhost:{SERVER_PORT}\")\n httpd.serve_forever()\n except KeyboardInterrupt:\n log.info(\"Shutting Down HTTP Server\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "src/qcodes/monitor/monitor.py"}], "after_files": [{"content": "#! 
/usr/bin/env python\n# vim:fenc=utf-8\n#\n# Copyright \u00a9 2017 unga <[email protected]>\n#\n# Distributed under terms of the MIT license.\n\"\"\"\nMonitor a set of parameters in a background thread\nstream output over websocket\n\nTo start monitor, run this file, or if qcodes is installed as a module:\n\n``% python -m qcodes.monitor.monitor``\n\nAdd parameters to monitor in your measurement by creating a new monitor with a\nlist of parameters to monitor:\n\n``monitor = qcodes.Monitor(param1, param2, param3, ...)``\n\"\"\"\nfrom __future__ import annotations\n\nimport asyncio\nimport json\nimport logging\nimport os\nimport socketserver\nimport time\nimport webbrowser\nfrom asyncio import CancelledError\nfrom collections import defaultdict\nfrom collections.abc import Awaitable, Sequence\nfrom contextlib import suppress\nfrom importlib.resources import as_file, files\nfrom threading import Event, Thread\nfrom typing import Any, Callable\n\nimport websockets\nimport websockets.exceptions\nimport websockets.server\n\nfrom qcodes.parameters import Parameter\n\nWEBSOCKET_PORT = 5678\nSERVER_PORT = 3000\n\nlog = logging.getLogger(__name__)\n\n\ndef _get_metadata(\n *parameters: Parameter, use_root_instrument: bool = True\n) -> dict[str, Any]:\n \"\"\"\n Return a dictionary that contains the parameter metadata grouped by the\n instrument it belongs to.\n \"\"\"\n metadata_timestamp = time.time()\n # group metadata by instrument\n metas: dict[Any, Any] = defaultdict(list)\n for parameter in parameters:\n # Get the latest value from the parameter,\n # respecting the max_val_age parameter\n meta: dict[str, float | str | None] = {}\n meta[\"value\"] = str(parameter.get_latest())\n timestamp = parameter.get_latest.get_timestamp()\n if timestamp is not None:\n meta[\"ts\"] = timestamp.timestamp()\n else:\n meta[\"ts\"] = None\n meta[\"name\"] = parameter.label or parameter.name\n meta[\"unit\"] = parameter.unit\n\n # find the base instrument that this parameter belongs to\n if use_root_instrument:\n baseinst = parameter.root_instrument\n else:\n baseinst = parameter.instrument\n if baseinst is None:\n metas[\"Unbound Parameter\"].append(meta)\n else:\n metas[str(baseinst)].append(meta)\n\n # Create list of parameters, grouped by instrument\n parameters_out = []\n for instrument in metas:\n temp = {\"instrument\": instrument, \"parameters\": metas[instrument]}\n parameters_out.append(temp)\n\n state = {\"ts\": metadata_timestamp, \"parameters\": parameters_out}\n return state\n\n\ndef _handler(\n parameters: Sequence[Parameter], interval: float, use_root_instrument: bool = True\n) -> Callable[[websockets.server.WebSocketServerProtocol, str], Awaitable[None]]:\n \"\"\"\n Return the websockets server handler.\n \"\"\"\n\n async def server_func(\n websocket: websockets.server.WebSocketServerProtocol, _: str\n ) -> None:\n \"\"\"\n Create a websockets handler that sends parameter values to a listener\n every \"interval\" seconds.\n \"\"\"\n while True:\n try:\n # Update the parameter values\n try:\n meta = _get_metadata(\n *parameters, use_root_instrument=use_root_instrument\n )\n except ValueError:\n log.exception(\"Error getting parameters\")\n break\n log.debug(\"sending.. 
to %r\", websocket)\n await websocket.send(json.dumps(meta))\n # Wait for interval seconds and then send again\n await asyncio.sleep(interval)\n except (CancelledError, websockets.exceptions.ConnectionClosed):\n log.debug(\"Got CancelledError or ConnectionClosed\",\n exc_info=True)\n break\n log.debug(\"Closing websockets connection\")\n\n return server_func\n\n\nclass Monitor(Thread):\n \"\"\"\n QCodes Monitor - WebSockets server to monitor qcodes parameters.\n \"\"\"\n running = None\n\n def __init__(\n self,\n *parameters: Parameter,\n interval: float = 1,\n use_root_instrument: bool = True,\n ):\n \"\"\"\n Monitor qcodes parameters.\n\n Args:\n *parameters: Parameters to monitor.\n interval: How often one wants to refresh the values.\n use_root_instrument: Defines if parameters are grouped according to\n parameter.root_instrument or parameter.instrument\n \"\"\"\n super().__init__(daemon=True)\n\n # Check that all values are valid parameters\n for parameter in parameters:\n if not isinstance(parameter, Parameter):\n raise TypeError(f\"We can only monitor QCodes \"\n f\"Parameters, not {type(parameter)}\")\n\n self.loop: asyncio.AbstractEventLoop | None = None\n self._stop_loop_future: asyncio.Future | None = None\n self._parameters = parameters\n self.loop_is_closed = Event()\n self.server_is_started = Event()\n self.handler = _handler(\n parameters, interval=interval, use_root_instrument=use_root_instrument\n )\n log.debug(\"Start monitoring thread\")\n if Monitor.running:\n # stop the old server\n log.debug(\"Stopping and restarting server\")\n Monitor.running.stop()\n self.start()\n\n # Wait until the loop is running\n self.server_is_started.wait(timeout=5)\n if not self.server_is_started.is_set():\n raise RuntimeError(\"Failed to start server\")\n Monitor.running = self\n\n def run(self) -> None:\n \"\"\"\n Start the event loop and run forever.\n \"\"\"\n log.debug(\"Running Websocket server\")\n\n async def run_loop() -> None:\n self.loop = asyncio.get_running_loop()\n self._stop_loop_future = self.loop.create_future()\n\n async with websockets.server.serve(\n self.handler, \"127.0.0.1\", WEBSOCKET_PORT, close_timeout=1\n ):\n self.server_is_started.set()\n try:\n await self._stop_loop_future\n except asyncio.CancelledError:\n log.debug(\"Websocket server thread shutting down\")\n\n try:\n asyncio.run(run_loop())\n finally:\n self.loop_is_closed.set()\n\n def update_all(self) -> None:\n \"\"\"\n Update all parameters in the monitor.\n \"\"\"\n for parameter in self._parameters:\n # call get if it can be called without arguments\n with suppress(TypeError):\n parameter.get()\n\n def stop(self) -> None:\n \"\"\"\n Shutdown the server, close the event loop and join the thread.\n Setting active Monitor to ``None``.\n \"\"\"\n self.join()\n Monitor.running = None\n\n def join(self, timeout: float | None = None) -> None:\n \"\"\"\n Overwrite ``Thread.join`` to make sure server is stopped before\n joining avoiding a potential deadlock.\n \"\"\"\n log.debug(\"Shutting down server\")\n if not self.is_alive():\n # we run this check before trying to run to prevent a cryptic\n # error message\n log.debug(\"monitor is dead\")\n return\n try:\n if self.loop is not None and self._stop_loop_future is not None:\n log.debug(\"Instructing server to stop event loop.\")\n self.loop.call_soon_threadsafe(self._stop_loop_future.cancel)\n else:\n log.debug(\"No event loop found. 
Cannot stop event loop.\")\n except RuntimeError:\n # the above may throw a runtime error if the loop is already\n # stopped in which case there is nothing more to do\n log.exception(\"Could not close loop\")\n self.loop_is_closed.wait(timeout=5)\n if not self.loop_is_closed.is_set():\n raise RuntimeError(\"Failed to join loop\")\n log.debug(\"Loop reported closed\")\n super().join(timeout=timeout)\n log.debug(\"Monitor Thread has joined\")\n\n @staticmethod\n def show() -> None:\n \"\"\"\n Overwrite this method to show/raise your monitor GUI\n F.ex.\n\n ::\n\n import webbrowser\n url = \"localhost:3000\"\n # Open URL in new window, raising the window if possible.\n webbrowser.open_new(url)\n\n \"\"\"\n webbrowser.open(f\"http://localhost:{SERVER_PORT}\")\n\n\ndef main() -> None:\n import http.server\n\n # If this file is run, create a simple webserver that serves a simple\n # website that can be used to view monitored parameters.\n # # https://github.com/python/mypy/issues/4182\n parent_module = \".\".join(__loader__.name.split(\".\")[:-1]) # type: ignore[name-defined]\n\n static_dir = files(parent_module).joinpath(\"dist\")\n try:\n with as_file(static_dir) as extracted_dir:\n os.chdir(extracted_dir)\n log.info(\"Starting HTTP Server at http://localhost:%i\", SERVER_PORT)\n with socketserver.TCPServer(\n (\"\", SERVER_PORT), http.server.SimpleHTTPRequestHandler\n ) as httpd:\n log.debug(\"serving directory %s\", static_dir)\n webbrowser.open(f\"http://localhost:{SERVER_PORT}\")\n httpd.serve_forever()\n except KeyboardInterrupt:\n log.info(\"Shutting Down HTTP Server\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "src/qcodes/monitor/monitor.py"}]} | 3,098 | 115 |
gh_patches_debug_21742 | rasdani/github-patches | git_diff | pypa__pip-7326 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Call for clarification: acceptable values for build-system.requires in pyproject.toml
**Environment**
* pip version: 19.0.3
* Python version: 3.7
* OS: GNU/Linux
**Description**
It's not clear, nor is it clearly specified by PEP 518, which values are acceptable here. I needed an in-tree build back-end,
so I hacked it by installing with a relative path.
*And it works!*
Yet @pradyunsg has pointed out that the form I used probably doesn't conform to PEP 508.
So I tried some other ways to point to the in-tree distribution, and those didn't work.
**How to Reproduce**
(this works)
```toml
[build-system]
requires = ["./build-aiohttp", ]
build-backend = "build_aiohttp.api"
# not yet supported, so it doesn't influence anything, it's for forward-compat:
backend-path = "./build-aiohttp"
```
But if instead of `"./build-aiohttp"` in `requires` I try any of `"file://build-aiohttp"`, `"file:///./build-aiohttp"`, `"build_aiohttp @ file://./build-aiohttp"`, `"build_aiohttp @ file:./build-aiohttp"` pip fails to recognize those as installables.
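For reference, a quick way to see which of these forms parse as PEP 508 requirements is the `packaging` library (the same parser pip vendors). This is an editor's sketch, not part of the original report, and the paths are only the reporter's examples:
```python
from packaging.requirements import InvalidRequirement, Requirement

candidates = [
    "./build-aiohttp",                         # bare relative path
    "file://build-aiohttp",                    # URL with no project name
    "build_aiohttp @ file://./build-aiohttp",  # PEP 508 direct-reference form
]

for candidate in candidates:
    try:
        Requirement(candidate)
        print(f"parses as a PEP 508 requirement: {candidate!r}")
    except InvalidRequirement as exc:
        print(f"rejected: {candidate!r} ({exc})")
```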
**Expected behavior**
I don't know what to expect. The method that works seems to be undefined in the PEPs, so I probably shouldn't rely on it.
Pip may either decide to improve the validation of the `requires` option or document that it is permissive...
P.S. Oh and, by the way, I was able to test my other PEP 517 backend outside of the project tree via
```toml
[build-system]
requires = ["../fortunate_pkg"]
```
so this relative path feature proves to be quite useful for development/debugging purposes.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pip/_internal/pyproject.py`
Content:
```
1 from __future__ import absolute_import
2
3 import io
4 import os
5 import sys
6
7 from pip._vendor import pytoml, six
8
9 from pip._internal.exceptions import InstallationError
10 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
11
12 if MYPY_CHECK_RUNNING:
13 from typing import Any, Tuple, Optional, List
14
15
16 def _is_list_of_str(obj):
17 # type: (Any) -> bool
18 return (
19 isinstance(obj, list) and
20 all(isinstance(item, six.string_types) for item in obj)
21 )
22
23
24 def make_pyproject_path(unpacked_source_directory):
25 # type: (str) -> str
26 path = os.path.join(unpacked_source_directory, 'pyproject.toml')
27
28 # Python2 __file__ should not be unicode
29 if six.PY2 and isinstance(path, six.text_type):
30 path = path.encode(sys.getfilesystemencoding())
31
32 return path
33
34
35 def load_pyproject_toml(
36 use_pep517, # type: Optional[bool]
37 pyproject_toml, # type: str
38 setup_py, # type: str
39 req_name # type: str
40 ):
41 # type: (...) -> Optional[Tuple[List[str], str, List[str]]]
42 """Load the pyproject.toml file.
43
44 Parameters:
45 use_pep517 - Has the user requested PEP 517 processing? None
46 means the user hasn't explicitly specified.
47 pyproject_toml - Location of the project's pyproject.toml file
48 setup_py - Location of the project's setup.py file
49 req_name - The name of the requirement we're processing (for
50 error reporting)
51
52 Returns:
53 None if we should use the legacy code path, otherwise a tuple
54 (
55 requirements from pyproject.toml,
56 name of PEP 517 backend,
57 requirements we should check are installed after setting
58 up the build environment
59 )
60 """
61 has_pyproject = os.path.isfile(pyproject_toml)
62 has_setup = os.path.isfile(setup_py)
63
64 if has_pyproject:
65 with io.open(pyproject_toml, encoding="utf-8") as f:
66 pp_toml = pytoml.load(f)
67 build_system = pp_toml.get("build-system")
68 else:
69 build_system = None
70
71 # The following cases must use PEP 517
72 # We check for use_pep517 being non-None and falsey because that means
73 # the user explicitly requested --no-use-pep517. The value 0 as
74 # opposed to False can occur when the value is provided via an
75 # environment variable or config file option (due to the quirk of
76 # strtobool() returning an integer in pip's configuration code).
77 if has_pyproject and not has_setup:
78 if use_pep517 is not None and not use_pep517:
79 raise InstallationError(
80 "Disabling PEP 517 processing is invalid: "
81 "project does not have a setup.py"
82 )
83 use_pep517 = True
84 elif build_system and "build-backend" in build_system:
85 if use_pep517 is not None and not use_pep517:
86 raise InstallationError(
87 "Disabling PEP 517 processing is invalid: "
88 "project specifies a build backend of {} "
89 "in pyproject.toml".format(
90 build_system["build-backend"]
91 )
92 )
93 use_pep517 = True
94
95 # If we haven't worked out whether to use PEP 517 yet,
96 # and the user hasn't explicitly stated a preference,
97 # we do so if the project has a pyproject.toml file.
98 elif use_pep517 is None:
99 use_pep517 = has_pyproject
100
101 # At this point, we know whether we're going to use PEP 517.
102 assert use_pep517 is not None
103
104 # If we're using the legacy code path, there is nothing further
105 # for us to do here.
106 if not use_pep517:
107 return None
108
109 if build_system is None:
110 # Either the user has a pyproject.toml with no build-system
111 # section, or the user has no pyproject.toml, but has opted in
112 # explicitly via --use-pep517.
113 # In the absence of any explicit backend specification, we
114 # assume the setuptools backend that most closely emulates the
115 # traditional direct setup.py execution, and require wheel and
116 # a version of setuptools that supports that backend.
117
118 build_system = {
119 "requires": ["setuptools>=40.8.0", "wheel"],
120 "build-backend": "setuptools.build_meta:__legacy__",
121 }
122
123 # If we're using PEP 517, we have build system information (either
124 # from pyproject.toml, or defaulted by the code above).
125 # Note that at this point, we do not know if the user has actually
126 # specified a backend, though.
127 assert build_system is not None
128
129 # Ensure that the build-system section in pyproject.toml conforms
130 # to PEP 518.
131 error_template = (
132 "{package} has a pyproject.toml file that does not comply "
133 "with PEP 518: {reason}"
134 )
135
136 # Specifying the build-system table but not the requires key is invalid
137 if "requires" not in build_system:
138 raise InstallationError(
139 error_template.format(package=req_name, reason=(
140 "it has a 'build-system' table but not "
141 "'build-system.requires' which is mandatory in the table"
142 ))
143 )
144
145 # Error out if requires is not a list of strings
146 requires = build_system["requires"]
147 if not _is_list_of_str(requires):
148 raise InstallationError(error_template.format(
149 package=req_name,
150 reason="'build-system.requires' is not a list of strings.",
151 ))
152
153 backend = build_system.get("build-backend")
154 check = [] # type: List[str]
155 if backend is None:
156 # If the user didn't specify a backend, we assume they want to use
157 # the setuptools backend. But we can't be sure they have included
158 # a version of setuptools which supplies the backend, or wheel
159 # (which is needed by the backend) in their requirements. So we
160 # make a note to check that those requirements are present once
161 # we have set up the environment.
162 # This is quite a lot of work to check for a very specific case. But
163 # the problem is, that case is potentially quite common - projects that
164 # adopted PEP 518 early for the ability to specify requirements to
165 # execute setup.py, but never considered needing to mention the build
166 # tools themselves. The original PEP 518 code had a similar check (but
167 # implemented in a different way).
168 backend = "setuptools.build_meta:__legacy__"
169 check = ["setuptools>=40.8.0", "wheel"]
170
171 return (requires, backend, check)
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pip/_internal/pyproject.py b/src/pip/_internal/pyproject.py
--- a/src/pip/_internal/pyproject.py
+++ b/src/pip/_internal/pyproject.py
@@ -5,6 +5,7 @@
import sys
from pip._vendor import pytoml, six
+from pip._vendor.packaging.requirements import InvalidRequirement, Requirement
from pip._internal.exceptions import InstallationError
from pip._internal.utils.typing import MYPY_CHECK_RUNNING
@@ -150,6 +151,21 @@
reason="'build-system.requires' is not a list of strings.",
))
+ # Each requirement must be valid as per PEP 508
+ for requirement in requires:
+ try:
+ Requirement(requirement)
+ except InvalidRequirement:
+ raise InstallationError(
+ error_template.format(
+ package=req_name,
+ reason=(
+ "'build-system.requires' contains an invalid "
+ "requirement: {!r}".format(requirement)
+ ),
+ )
+ )
+
backend = build_system.get("build-backend")
check = [] # type: List[str]
if backend is None:
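With this change, a non-PEP-508 entry such as the reporter's `"./build-aiohttp"` is rejected up front through the existing PEP 518 error template. A minimal sketch of the resulting behaviour (the project layout and name are assumptions):
```python
from pip._internal.exceptions import InstallationError
from pip._internal.pyproject import load_pyproject_toml

# Assume pkg/pyproject.toml contains:
#   [build-system]
#   requires = ["./build-aiohttp"]
#   build-backend = "build_aiohttp.api"
try:
    load_pyproject_toml(None, "pkg/pyproject.toml", "pkg/setup.py", "pkg")
except InstallationError as exc:
    # "pkg has a pyproject.toml file that does not comply with PEP 518:
    #  'build-system.requires' contains an invalid requirement: './build-aiohttp'"
    print(exc)
```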
| {"golden_diff": "diff --git a/src/pip/_internal/pyproject.py b/src/pip/_internal/pyproject.py\n--- a/src/pip/_internal/pyproject.py\n+++ b/src/pip/_internal/pyproject.py\n@@ -5,6 +5,7 @@\n import sys\n \n from pip._vendor import pytoml, six\n+from pip._vendor.packaging.requirements import InvalidRequirement, Requirement\n \n from pip._internal.exceptions import InstallationError\n from pip._internal.utils.typing import MYPY_CHECK_RUNNING\n@@ -150,6 +151,21 @@\n reason=\"'build-system.requires' is not a list of strings.\",\n ))\n \n+ # Each requirement must be valid as per PEP 508\n+ for requirement in requires:\n+ try:\n+ Requirement(requirement)\n+ except InvalidRequirement:\n+ raise InstallationError(\n+ error_template.format(\n+ package=req_name,\n+ reason=(\n+ \"'build-system.requires' contains an invalid \"\n+ \"requirement: {!r}\".format(requirement)\n+ ),\n+ )\n+ )\n+\n backend = build_system.get(\"build-backend\")\n check = [] # type: List[str]\n if backend is None:\n", "issue": "Call for clarification: acceptable values for build-system.requires in pyproject.toml\n**Environment**\r\n\r\n* pip version: 19.0.3\r\n* Python version: 3.7\r\n* OS: GNU/Linux\r\n\r\n<!-- Feel free to add more information about your environment here -->\r\n\r\n**Description**\r\n<!-- A clear and concise description of what the bug is. -->\r\nIt's not clear nor is clearly specified by PEP518. But I was having a need to have an in-tree build back-end.\r\nSo I've hacked it via installing with a relative path.\r\n*And it works!*\r\nYet, @pradyunsg has pointed out that the way I used probably doesn't conform to PEP 508.\r\nSo I tried some other ways to point to the in-tree distribution. And those didn't work.\r\n\r\n**How to Reproduce**\r\n<!-- Describe the steps to reproduce this bug. -->\r\n(this works)\r\n```toml\r\n[build-system]\r\nrequires = [\"./build-aiohttp\", ]\r\nbuild-backend = \"build_aiohttp.api\"\r\n\r\n# not yet supported, so it doesn't influence anything, it's for forward-compat:\r\nbackend-path = \"./build-aiohttp\"\r\n```\r\n\r\nBut if instead of `\"./build-aiohttp\"` in `requires` I try any of `\"file://build-aiohttp\"`, `\"file:///./build-aiohttp\"`, `\"build_aiohttp @ file://./build-aiohttp\"`, `\"build_aiohttp @ file:./build-aiohttp\"` pip fails to recognize those as installables.\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\nI don't know what to expect. The method which works seems to be undefined in PEPs so I probably shouldn't rely on it.\r\n\r\nPip may either decide to improve the filtering of `requires` option or document it being permissive...\r\n\r\nP.S. 
Oh and, by the way, I was able to test my other PEP517 backend outside of the project tree via\r\n```toml\r\n[build-system]\r\nrequires = [\"../fortunate_pkg\"]\r\n```\r\nso this relative path feature proves to be quite useful for development/debugging purposes.\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport io\nimport os\nimport sys\n\nfrom pip._vendor import pytoml, six\n\nfrom pip._internal.exceptions import InstallationError\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from typing import Any, Tuple, Optional, List\n\n\ndef _is_list_of_str(obj):\n # type: (Any) -> bool\n return (\n isinstance(obj, list) and\n all(isinstance(item, six.string_types) for item in obj)\n )\n\n\ndef make_pyproject_path(unpacked_source_directory):\n # type: (str) -> str\n path = os.path.join(unpacked_source_directory, 'pyproject.toml')\n\n # Python2 __file__ should not be unicode\n if six.PY2 and isinstance(path, six.text_type):\n path = path.encode(sys.getfilesystemencoding())\n\n return path\n\n\ndef load_pyproject_toml(\n use_pep517, # type: Optional[bool]\n pyproject_toml, # type: str\n setup_py, # type: str\n req_name # type: str\n):\n # type: (...) -> Optional[Tuple[List[str], str, List[str]]]\n \"\"\"Load the pyproject.toml file.\n\n Parameters:\n use_pep517 - Has the user requested PEP 517 processing? None\n means the user hasn't explicitly specified.\n pyproject_toml - Location of the project's pyproject.toml file\n setup_py - Location of the project's setup.py file\n req_name - The name of the requirement we're processing (for\n error reporting)\n\n Returns:\n None if we should use the legacy code path, otherwise a tuple\n (\n requirements from pyproject.toml,\n name of PEP 517 backend,\n requirements we should check are installed after setting\n up the build environment\n )\n \"\"\"\n has_pyproject = os.path.isfile(pyproject_toml)\n has_setup = os.path.isfile(setup_py)\n\n if has_pyproject:\n with io.open(pyproject_toml, encoding=\"utf-8\") as f:\n pp_toml = pytoml.load(f)\n build_system = pp_toml.get(\"build-system\")\n else:\n build_system = None\n\n # The following cases must use PEP 517\n # We check for use_pep517 being non-None and falsey because that means\n # the user explicitly requested --no-use-pep517. 
The value 0 as\n # opposed to False can occur when the value is provided via an\n # environment variable or config file option (due to the quirk of\n # strtobool() returning an integer in pip's configuration code).\n if has_pyproject and not has_setup:\n if use_pep517 is not None and not use_pep517:\n raise InstallationError(\n \"Disabling PEP 517 processing is invalid: \"\n \"project does not have a setup.py\"\n )\n use_pep517 = True\n elif build_system and \"build-backend\" in build_system:\n if use_pep517 is not None and not use_pep517:\n raise InstallationError(\n \"Disabling PEP 517 processing is invalid: \"\n \"project specifies a build backend of {} \"\n \"in pyproject.toml\".format(\n build_system[\"build-backend\"]\n )\n )\n use_pep517 = True\n\n # If we haven't worked out whether to use PEP 517 yet,\n # and the user hasn't explicitly stated a preference,\n # we do so if the project has a pyproject.toml file.\n elif use_pep517 is None:\n use_pep517 = has_pyproject\n\n # At this point, we know whether we're going to use PEP 517.\n assert use_pep517 is not None\n\n # If we're using the legacy code path, there is nothing further\n # for us to do here.\n if not use_pep517:\n return None\n\n if build_system is None:\n # Either the user has a pyproject.toml with no build-system\n # section, or the user has no pyproject.toml, but has opted in\n # explicitly via --use-pep517.\n # In the absence of any explicit backend specification, we\n # assume the setuptools backend that most closely emulates the\n # traditional direct setup.py execution, and require wheel and\n # a version of setuptools that supports that backend.\n\n build_system = {\n \"requires\": [\"setuptools>=40.8.0\", \"wheel\"],\n \"build-backend\": \"setuptools.build_meta:__legacy__\",\n }\n\n # If we're using PEP 517, we have build system information (either\n # from pyproject.toml, or defaulted by the code above).\n # Note that at this point, we do not know if the user has actually\n # specified a backend, though.\n assert build_system is not None\n\n # Ensure that the build-system section in pyproject.toml conforms\n # to PEP 518.\n error_template = (\n \"{package} has a pyproject.toml file that does not comply \"\n \"with PEP 518: {reason}\"\n )\n\n # Specifying the build-system table but not the requires key is invalid\n if \"requires\" not in build_system:\n raise InstallationError(\n error_template.format(package=req_name, reason=(\n \"it has a 'build-system' table but not \"\n \"'build-system.requires' which is mandatory in the table\"\n ))\n )\n\n # Error out if requires is not a list of strings\n requires = build_system[\"requires\"]\n if not _is_list_of_str(requires):\n raise InstallationError(error_template.format(\n package=req_name,\n reason=\"'build-system.requires' is not a list of strings.\",\n ))\n\n backend = build_system.get(\"build-backend\")\n check = [] # type: List[str]\n if backend is None:\n # If the user didn't specify a backend, we assume they want to use\n # the setuptools backend. But we can't be sure they have included\n # a version of setuptools which supplies the backend, or wheel\n # (which is needed by the backend) in their requirements. So we\n # make a note to check that those requirements are present once\n # we have set up the environment.\n # This is quite a lot of work to check for a very specific case. 
But\n # the problem is, that case is potentially quite common - projects that\n # adopted PEP 518 early for the ability to specify requirements to\n # execute setup.py, but never considered needing to mention the build\n # tools themselves. The original PEP 518 code had a similar check (but\n # implemented in a different way).\n backend = \"setuptools.build_meta:__legacy__\"\n check = [\"setuptools>=40.8.0\", \"wheel\"]\n\n return (requires, backend, check)\n", "path": "src/pip/_internal/pyproject.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport io\nimport os\nimport sys\n\nfrom pip._vendor import pytoml, six\nfrom pip._vendor.packaging.requirements import InvalidRequirement, Requirement\n\nfrom pip._internal.exceptions import InstallationError\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from typing import Any, Tuple, Optional, List\n\n\ndef _is_list_of_str(obj):\n # type: (Any) -> bool\n return (\n isinstance(obj, list) and\n all(isinstance(item, six.string_types) for item in obj)\n )\n\n\ndef make_pyproject_path(unpacked_source_directory):\n # type: (str) -> str\n path = os.path.join(unpacked_source_directory, 'pyproject.toml')\n\n # Python2 __file__ should not be unicode\n if six.PY2 and isinstance(path, six.text_type):\n path = path.encode(sys.getfilesystemencoding())\n\n return path\n\n\ndef load_pyproject_toml(\n use_pep517, # type: Optional[bool]\n pyproject_toml, # type: str\n setup_py, # type: str\n req_name # type: str\n):\n # type: (...) -> Optional[Tuple[List[str], str, List[str]]]\n \"\"\"Load the pyproject.toml file.\n\n Parameters:\n use_pep517 - Has the user requested PEP 517 processing? None\n means the user hasn't explicitly specified.\n pyproject_toml - Location of the project's pyproject.toml file\n setup_py - Location of the project's setup.py file\n req_name - The name of the requirement we're processing (for\n error reporting)\n\n Returns:\n None if we should use the legacy code path, otherwise a tuple\n (\n requirements from pyproject.toml,\n name of PEP 517 backend,\n requirements we should check are installed after setting\n up the build environment\n )\n \"\"\"\n has_pyproject = os.path.isfile(pyproject_toml)\n has_setup = os.path.isfile(setup_py)\n\n if has_pyproject:\n with io.open(pyproject_toml, encoding=\"utf-8\") as f:\n pp_toml = pytoml.load(f)\n build_system = pp_toml.get(\"build-system\")\n else:\n build_system = None\n\n # The following cases must use PEP 517\n # We check for use_pep517 being non-None and falsey because that means\n # the user explicitly requested --no-use-pep517. 
The value 0 as\n # opposed to False can occur when the value is provided via an\n # environment variable or config file option (due to the quirk of\n # strtobool() returning an integer in pip's configuration code).\n if has_pyproject and not has_setup:\n if use_pep517 is not None and not use_pep517:\n raise InstallationError(\n \"Disabling PEP 517 processing is invalid: \"\n \"project does not have a setup.py\"\n )\n use_pep517 = True\n elif build_system and \"build-backend\" in build_system:\n if use_pep517 is not None and not use_pep517:\n raise InstallationError(\n \"Disabling PEP 517 processing is invalid: \"\n \"project specifies a build backend of {} \"\n \"in pyproject.toml\".format(\n build_system[\"build-backend\"]\n )\n )\n use_pep517 = True\n\n # If we haven't worked out whether to use PEP 517 yet,\n # and the user hasn't explicitly stated a preference,\n # we do so if the project has a pyproject.toml file.\n elif use_pep517 is None:\n use_pep517 = has_pyproject\n\n # At this point, we know whether we're going to use PEP 517.\n assert use_pep517 is not None\n\n # If we're using the legacy code path, there is nothing further\n # for us to do here.\n if not use_pep517:\n return None\n\n if build_system is None:\n # Either the user has a pyproject.toml with no build-system\n # section, or the user has no pyproject.toml, but has opted in\n # explicitly via --use-pep517.\n # In the absence of any explicit backend specification, we\n # assume the setuptools backend that most closely emulates the\n # traditional direct setup.py execution, and require wheel and\n # a version of setuptools that supports that backend.\n\n build_system = {\n \"requires\": [\"setuptools>=40.8.0\", \"wheel\"],\n \"build-backend\": \"setuptools.build_meta:__legacy__\",\n }\n\n # If we're using PEP 517, we have build system information (either\n # from pyproject.toml, or defaulted by the code above).\n # Note that at this point, we do not know if the user has actually\n # specified a backend, though.\n assert build_system is not None\n\n # Ensure that the build-system section in pyproject.toml conforms\n # to PEP 518.\n error_template = (\n \"{package} has a pyproject.toml file that does not comply \"\n \"with PEP 518: {reason}\"\n )\n\n # Specifying the build-system table but not the requires key is invalid\n if \"requires\" not in build_system:\n raise InstallationError(\n error_template.format(package=req_name, reason=(\n \"it has a 'build-system' table but not \"\n \"'build-system.requires' which is mandatory in the table\"\n ))\n )\n\n # Error out if requires is not a list of strings\n requires = build_system[\"requires\"]\n if not _is_list_of_str(requires):\n raise InstallationError(error_template.format(\n package=req_name,\n reason=\"'build-system.requires' is not a list of strings.\",\n ))\n\n # Each requirement must be valid as per PEP 508\n for requirement in requires:\n try:\n Requirement(requirement)\n except InvalidRequirement:\n raise InstallationError(\n error_template.format(\n package=req_name,\n reason=(\n \"'build-system.requires' contains an invalid \"\n \"requirement: {!r}\".format(requirement)\n ),\n )\n )\n\n backend = build_system.get(\"build-backend\")\n check = [] # type: List[str]\n if backend is None:\n # If the user didn't specify a backend, we assume they want to use\n # the setuptools backend. But we can't be sure they have included\n # a version of setuptools which supplies the backend, or wheel\n # (which is needed by the backend) in their requirements. 
So we\n # make a note to check that those requirements are present once\n # we have set up the environment.\n # This is quite a lot of work to check for a very specific case. But\n # the problem is, that case is potentially quite common - projects that\n # adopted PEP 518 early for the ability to specify requirements to\n # execute setup.py, but never considered needing to mention the build\n # tools themselves. The original PEP 518 code had a similar check (but\n # implemented in a different way).\n backend = \"setuptools.build_meta:__legacy__\"\n check = [\"setuptools>=40.8.0\", \"wheel\"]\n\n return (requires, backend, check)\n", "path": "src/pip/_internal/pyproject.py"}]} | 2,725 | 261 |
gh_patches_debug_33161 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-952 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use tf.function-decorated functions to accelerate model compute
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/worker/worker.py`
Content:
```
1 import logging
2 import time
3 import traceback
4 from contextlib import closing
5
6 import recordio
7 import tensorflow as tf
8 from tensorflow.python.ops import math_ops
9
10 from elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc
11 from elasticdl.python.common.model_helper import (
12 load_model_from_module,
13 load_module,
14 )
15 from elasticdl.python.common.ndarray import (
16 ndarray_to_tensor,
17 tensor_to_ndarray,
18 )
19
20 # The default maximum number of a minibatch retry as its results
21 # (e.g. gradients) are not accepted by master.
22 DEFAULT_MAX_MINIBATCH_RETRY_NUM = 64
23
24
25 class Worker(object):
26 """ElasticDL worker"""
27
28 def __init__(
29 self,
30 worker_id,
31 model_file,
32 input_fn="input_fn",
33 loss="loss",
34 optimizer="optimizer",
35 eval_metrics_fn="eval_metrics_fn",
36 channel=None,
37 model_def=None,
38 model_params="",
39 max_minibatch_retry_num=DEFAULT_MAX_MINIBATCH_RETRY_NUM,
40 ):
41 """
42 Arguments:
43 model_file: A module to define the model
44 channel: grpc channel
45 max_minibatch_retry_num: The maximum number of a minibatch retry
46 as its results (e.g. gradients) are not accepted by master.
47 """
48 self._logger = logging.getLogger(__name__)
49 self._worker_id = worker_id
50 model_module = load_module(model_file).__dict__
51 self._model = load_model_from_module(
52 model_def, model_module, model_params
53 )
54 self._var_created = self._model.built
55 self._input_fn = model_module[input_fn]
56 self._opt_fn = model_module[optimizer]
57 self._loss = model_module[loss]
58 self._eval_metrics_fn = model_module[eval_metrics_fn]
59
60 if channel is None:
61 self._stub = None
62 else:
63 self._stub = elasticdl_pb2_grpc.MasterStub(channel)
64 self._max_minibatch_retry_num = max_minibatch_retry_num
65 self._model_version = -1
66
67 def get_task(self):
68 """
69 get task from master
70 """
71 req = elasticdl_pb2.GetTaskRequest()
72 req.worker_id = self._worker_id
73
74 return self._stub.GetTask(req)
75
76 def get_model(self, version, method):
77 """
78 get model from master, and update model_version
79 """
80 req = elasticdl_pb2.GetModelRequest()
81 req.version = version
82 req.method = method
83 model = self._stub.GetModel(req)
84
85 for var in self._model.trainable_variables:
86 # Assumes all trainable variables exist in model.param.
87 var.assign(tensor_to_ndarray(model.param[var.name]))
88 self._model_version = model.version
89
90 def report_task_result(self, task_id, err_msg):
91 """
92 report task result to master
93 """
94 report = elasticdl_pb2.ReportTaskResultRequest()
95 report.task_id = task_id
96 report.err_message = err_msg
97 return self._stub.ReportTaskResult(report)
98
99 def report_variable(self):
100 """
101 report variable to ps.
102 """
103 req = elasticdl_pb2.ReportVariableRequest()
104 for v in self._model.trainable_variables:
105 req.variable[v.name].CopyFrom(ndarray_to_tensor(v.numpy()))
106 self._stub.ReportVariable(req)
107
108 def report_gradient(self, grads):
109 """
110 report gradient to ps, return (accepted, model_version) from rpc call.
111 """
112 req = elasticdl_pb2.ReportGradientRequest()
113 for g, v in zip(grads, self._model.trainable_variables):
114 req.gradient[v.name].CopyFrom(ndarray_to_tensor(g.numpy()))
115 req.model_version = self._model_version
116 res = self._stub.ReportGradient(req)
117 return res.accepted, res.model_version
118
119 def report_evaluation_metrics(self, evaluation_metrics):
120 """
121 report evaluation metrics to ps, return (accepted, model_version)
122 from rpc call.
123 """
124 req = elasticdl_pb2.ReportEvaluationMetricsRequest()
125 for k, v in evaluation_metrics.items():
126 v_np = v.numpy()
127 # If scalar, convert to numpy 1D array with size 1
128 if not v_np.shape:
129 v_np = v_np.reshape(1)
130 req.evaluation_metrics[k].CopyFrom(ndarray_to_tensor(v_np))
131 req.model_version = self._model_version
132 res = self._stub.ReportEvaluationMetrics(req)
133 return res.accepted, res.model_version
134
135 def report_prediction_outputs(self, predictions):
136 self._logger.info("Predicted: %f" % predictions.numpy())
137 # TODO: Decide whether we need to send results to master first
138 # or write results to destination directly from workers.
139 # Also, need to think about how users configure where to
140 # write results.
141 return True
142
143 def _get_batch(self, reader, batch_size):
144 res = []
145 for i in range(batch_size):
146 record = reader.record()
147 if record is None:
148 break
149 res.append(record)
150 return res
151
152 def _create_variable_and_report(self, features):
153 # Use model.call to create variables, then report to ps
154 _ = self._model.call(features)
155 self.report_variable()
156 self._var_created = True
157
158 def _run_training_task(self, features, labels):
159 with tf.GradientTape() as tape:
160 outputs = self._model.call(features, training=True)
161 loss = self._loss(outputs, labels)
162 # Add regularization loss if any
163 if self._model.losses:
164 loss += math_ops.add_n(self._model.losses)
165 grads = tape.gradient(loss, self._model.trainable_variables)
166 accepted, min_model_version = self.report_gradient(grads)
167 return accepted, min_model_version, loss
168
169 def _run_evaluation_task(self, features, labels):
170 outputs = self._model.call(features, training=False)
171 evaluation_metrics = self._eval_metrics_fn(outputs, labels)
172 return self.report_evaluation_metrics(evaluation_metrics)
173
174 def _run_prediction_task(self, features):
175 predictions = self._model.call(features, training=False)
176 return self.report_prediction_outputs(predictions)
177
178 def _handle_task(self, task):
179 min_model_version = task.model_version
180 with closing(
181 recordio.Scanner(
182 task.shard_file_name, task.start, task.end - task.start
183 )
184 ) as reader:
185 while True:
186 record_buf = self._get_batch(reader, task.minibatch_size)
187 if not record_buf:
188 break
189 min_model_version = self._process_minibatch(
190 task, record_buf, min_model_version
191 )
192
193 def _process_minibatch(self, task, record_buf, min_model_version):
194 # TODO: Discuss how we separate input_fn for different tasks
195 features, labels = self._input_fn(record_buf)
196 if not self._var_created:
197 self._create_variable_and_report(features)
198 for _ in range(self._max_minibatch_retry_num):
199 if task.type == elasticdl_pb2.EVALUATION:
200 if min_model_version == -1:
201 if self._model_version < 0:
202 self.get_model(0, elasticdl_pb2.MINIMUM)
203 elif self._model_version != min_model_version:
204 self.get_model(min_model_version, elasticdl_pb2.FIXED)
205 accepted, _ = self._run_evaluation_task(features, labels)
206 if accepted:
207 break
208 elif task.type == elasticdl_pb2.TRAINING:
209 # TODO: optimize the logic to avoid unnecessary
210 # get_model call.
211 self.get_model(
212 max(self._model_version, min_model_version),
213 elasticdl_pb2.MINIMUM,
214 )
215 accepted, min_model_version, loss = self._run_training_task(
216 features, labels
217 )
218 if accepted:
219 self._logger.info("Loss is %f" % loss.numpy())
220 break
221 elif task.type == elasticdl_pb2.PREDICTION:
222 if self._model_version != min_model_version:
223 self.get_model(min_model_version, elasticdl_pb2.FIXED)
224 accepted = self._run_prediction_task(features)
225 if accepted:
226 break
227 else:
228 raise RuntimeError("Unrecognized task type, %s" % task.type)
229 else:
230 # Worker got stuck, fail the task.
231 # TODO: stop the worker if it fails to make any
232 # progress for some time.
233 raise RuntimeError("Worker got stuck")
234 return min_model_version
235
236 def run(self):
237 """
238 Fetches task from master and performs training or evaluation.
239 """
240 while True:
241 task = self.get_task()
242 if not task.shard_file_name:
243 if task.type == elasticdl_pb2.WAIT:
244 # Wait a few seconds then try to get_task again
245 time.sleep(5)
246 continue
247 else:
248 # No more task
249 self._logger.info("No more task, stopping")
250 break
251 self._logger.info("Receive a new task: %d", task.task_id)
252 err_msg = ""
253 try:
254 self._handle_task(task)
255 except RuntimeError as err:
256 err_msg = str(err)
257 traceback.print_exc()
258 except Exception as ex:
259 err_msg = str(ex)
260 traceback.print_exc()
261 raise ex
262 self.report_task_result(task.task_id, err_msg)
263
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/elasticdl/python/worker/worker.py b/elasticdl/python/worker/worker.py
--- a/elasticdl/python/worker/worker.py
+++ b/elasticdl/python/worker/worker.py
@@ -155,7 +155,8 @@
self.report_variable()
self._var_created = True
- def _run_training_task(self, features, labels):
+ @tf.function
+ def training_process(self, features, labels):
with tf.GradientTape() as tape:
outputs = self._model.call(features, training=True)
loss = self._loss(outputs, labels)
@@ -163,16 +164,30 @@
if self._model.losses:
loss += math_ops.add_n(self._model.losses)
grads = tape.gradient(loss, self._model.trainable_variables)
+ return loss, grads
+
+ @tf.function
+ def evaluation_process(self, features, labels):
+ outputs = self._model.call(features, training=False)
+ evaluation_metrics = self._eval_metrics_fn(outputs, labels)
+ return evaluation_metrics
+
+ @tf.function
+ def predict_process(self, features):
+ outputs = self._model.call(features, training=False)
+ return outputs
+
+ def _run_training_task(self, features, labels):
+ loss, grads = self.training_process(features, labels)
accepted, min_model_version = self.report_gradient(grads)
return accepted, min_model_version, loss
def _run_evaluation_task(self, features, labels):
- outputs = self._model.call(features, training=False)
- evaluation_metrics = self._eval_metrics_fn(outputs, labels)
+ evaluation_metrics = self.evaluation_process(features, labels)
return self.report_evaluation_metrics(evaluation_metrics)
def _run_prediction_task(self, features):
- predictions = self._model.call(features, training=False)
+ predictions = self.predict_process(features)
return self.report_prediction_outputs(predictions)
def _handle_task(self, task):
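For context, this is the standard TensorFlow 2 pattern: wrapping the compute-heavy step in `@tf.function` traces it into a graph on the first call so later calls skip the Python-level overhead, while the gRPC reporting stays outside the traced function. A minimal standalone sketch of the idea (the toy model, loss, and shapes are assumptions, not taken from the ElasticDL code):
```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function  # traced into a graph on first call, reused afterwards
def training_step(features, labels):
    with tf.GradientTape() as tape:
        outputs = model(features, training=True)
        loss = loss_fn(labels, outputs)
    grads = tape.gradient(loss, model.trainable_variables)
    return loss, grads

# Reporting gradients (e.g. over gRPC) happens outside the traced function.
loss, grads = training_step(tf.zeros([8, 4]), tf.zeros([8, 1]))
print(float(loss))
```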
| {"golden_diff": "diff --git a/elasticdl/python/worker/worker.py b/elasticdl/python/worker/worker.py\n--- a/elasticdl/python/worker/worker.py\n+++ b/elasticdl/python/worker/worker.py\n@@ -155,7 +155,8 @@\n self.report_variable()\n self._var_created = True\n \n- def _run_training_task(self, features, labels):\n+ @tf.function\n+ def training_process(self, features, labels):\n with tf.GradientTape() as tape:\n outputs = self._model.call(features, training=True)\n loss = self._loss(outputs, labels)\n@@ -163,16 +164,30 @@\n if self._model.losses:\n loss += math_ops.add_n(self._model.losses)\n grads = tape.gradient(loss, self._model.trainable_variables)\n+ return loss, grads\n+\n+ @tf.function\n+ def evaluation_process(self, features, labels):\n+ outputs = self._model.call(features, training=False)\n+ evaluation_metrics = self._eval_metrics_fn(outputs, labels)\n+ return evaluation_metrics\n+\n+ @tf.function\n+ def predict_process(self, features):\n+ outputs = self._model.call(features, training=False)\n+ return outputs\n+\n+ def _run_training_task(self, features, labels):\n+ loss, grads = self.training_process(features, labels)\n accepted, min_model_version = self.report_gradient(grads)\n return accepted, min_model_version, loss\n \n def _run_evaluation_task(self, features, labels):\n- outputs = self._model.call(features, training=False)\n- evaluation_metrics = self._eval_metrics_fn(outputs, labels)\n+ evaluation_metrics = self.evaluation_process(features, labels)\n return self.report_evaluation_metrics(evaluation_metrics)\n \n def _run_prediction_task(self, features):\n- predictions = self._model.call(features, training=False)\n+ predictions = self.predict_process(features)\n return self.report_prediction_outputs(predictions)\n \n def _handle_task(self, task):\n", "issue": "Use tf.function-decorated functions to accelerate model compute\n\n", "before_files": [{"content": "import logging\nimport time\nimport traceback\nfrom contextlib import closing\n\nimport recordio\nimport tensorflow as tf\nfrom tensorflow.python.ops import math_ops\n\nfrom elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc\nfrom elasticdl.python.common.model_helper import (\n load_model_from_module,\n load_module,\n)\nfrom elasticdl.python.common.ndarray import (\n ndarray_to_tensor,\n tensor_to_ndarray,\n)\n\n# The default maximum number of a minibatch retry as its results\n# (e.g. gradients) are not accepted by master.\nDEFAULT_MAX_MINIBATCH_RETRY_NUM = 64\n\n\nclass Worker(object):\n \"\"\"ElasticDL worker\"\"\"\n\n def __init__(\n self,\n worker_id,\n model_file,\n input_fn=\"input_fn\",\n loss=\"loss\",\n optimizer=\"optimizer\",\n eval_metrics_fn=\"eval_metrics_fn\",\n channel=None,\n model_def=None,\n model_params=\"\",\n max_minibatch_retry_num=DEFAULT_MAX_MINIBATCH_RETRY_NUM,\n ):\n \"\"\"\n Arguments:\n model_file: A module to define the model\n channel: grpc channel\n max_minibatch_retry_num: The maximum number of a minibatch retry\n as its results (e.g. 
gradients) are not accepted by master.\n \"\"\"\n self._logger = logging.getLogger(__name__)\n self._worker_id = worker_id\n model_module = load_module(model_file).__dict__\n self._model = load_model_from_module(\n model_def, model_module, model_params\n )\n self._var_created = self._model.built\n self._input_fn = model_module[input_fn]\n self._opt_fn = model_module[optimizer]\n self._loss = model_module[loss]\n self._eval_metrics_fn = model_module[eval_metrics_fn]\n\n if channel is None:\n self._stub = None\n else:\n self._stub = elasticdl_pb2_grpc.MasterStub(channel)\n self._max_minibatch_retry_num = max_minibatch_retry_num\n self._model_version = -1\n\n def get_task(self):\n \"\"\"\n get task from master\n \"\"\"\n req = elasticdl_pb2.GetTaskRequest()\n req.worker_id = self._worker_id\n\n return self._stub.GetTask(req)\n\n def get_model(self, version, method):\n \"\"\"\n get model from master, and update model_version\n \"\"\"\n req = elasticdl_pb2.GetModelRequest()\n req.version = version\n req.method = method\n model = self._stub.GetModel(req)\n\n for var in self._model.trainable_variables:\n # Assumes all trainable variables exist in model.param.\n var.assign(tensor_to_ndarray(model.param[var.name]))\n self._model_version = model.version\n\n def report_task_result(self, task_id, err_msg):\n \"\"\"\n report task result to master\n \"\"\"\n report = elasticdl_pb2.ReportTaskResultRequest()\n report.task_id = task_id\n report.err_message = err_msg\n return self._stub.ReportTaskResult(report)\n\n def report_variable(self):\n \"\"\"\n report variable to ps.\n \"\"\"\n req = elasticdl_pb2.ReportVariableRequest()\n for v in self._model.trainable_variables:\n req.variable[v.name].CopyFrom(ndarray_to_tensor(v.numpy()))\n self._stub.ReportVariable(req)\n\n def report_gradient(self, grads):\n \"\"\"\n report gradient to ps, return (accepted, model_version) from rpc call.\n \"\"\"\n req = elasticdl_pb2.ReportGradientRequest()\n for g, v in zip(grads, self._model.trainable_variables):\n req.gradient[v.name].CopyFrom(ndarray_to_tensor(g.numpy()))\n req.model_version = self._model_version\n res = self._stub.ReportGradient(req)\n return res.accepted, res.model_version\n\n def report_evaluation_metrics(self, evaluation_metrics):\n \"\"\"\n report evaluation metrics to ps, return (accepted, model_version)\n from rpc call.\n \"\"\"\n req = elasticdl_pb2.ReportEvaluationMetricsRequest()\n for k, v in evaluation_metrics.items():\n v_np = v.numpy()\n # If scalar, convert to numpy 1D array with size 1\n if not v_np.shape:\n v_np = v_np.reshape(1)\n req.evaluation_metrics[k].CopyFrom(ndarray_to_tensor(v_np))\n req.model_version = self._model_version\n res = self._stub.ReportEvaluationMetrics(req)\n return res.accepted, res.model_version\n\n def report_prediction_outputs(self, predictions):\n self._logger.info(\"Predicted: %f\" % predictions.numpy())\n # TODO: Decide whether we need to send results to master first\n # or write results to destination directly from workers.\n # Also, need to think about how users configure where to\n # write results.\n return True\n\n def _get_batch(self, reader, batch_size):\n res = []\n for i in range(batch_size):\n record = reader.record()\n if record is None:\n break\n res.append(record)\n return res\n\n def _create_variable_and_report(self, features):\n # Use model.call to create variables, then report to ps\n _ = self._model.call(features)\n self.report_variable()\n self._var_created = True\n\n def _run_training_task(self, features, labels):\n with tf.GradientTape() as 
tape:\n outputs = self._model.call(features, training=True)\n loss = self._loss(outputs, labels)\n # Add regularization loss if any\n if self._model.losses:\n loss += math_ops.add_n(self._model.losses)\n grads = tape.gradient(loss, self._model.trainable_variables)\n accepted, min_model_version = self.report_gradient(grads)\n return accepted, min_model_version, loss\n\n def _run_evaluation_task(self, features, labels):\n outputs = self._model.call(features, training=False)\n evaluation_metrics = self._eval_metrics_fn(outputs, labels)\n return self.report_evaluation_metrics(evaluation_metrics)\n\n def _run_prediction_task(self, features):\n predictions = self._model.call(features, training=False)\n return self.report_prediction_outputs(predictions)\n\n def _handle_task(self, task):\n min_model_version = task.model_version\n with closing(\n recordio.Scanner(\n task.shard_file_name, task.start, task.end - task.start\n )\n ) as reader:\n while True:\n record_buf = self._get_batch(reader, task.minibatch_size)\n if not record_buf:\n break\n min_model_version = self._process_minibatch(\n task, record_buf, min_model_version\n )\n\n def _process_minibatch(self, task, record_buf, min_model_version):\n # TODO: Discuss how we separate input_fn for different tasks\n features, labels = self._input_fn(record_buf)\n if not self._var_created:\n self._create_variable_and_report(features)\n for _ in range(self._max_minibatch_retry_num):\n if task.type == elasticdl_pb2.EVALUATION:\n if min_model_version == -1:\n if self._model_version < 0:\n self.get_model(0, elasticdl_pb2.MINIMUM)\n elif self._model_version != min_model_version:\n self.get_model(min_model_version, elasticdl_pb2.FIXED)\n accepted, _ = self._run_evaluation_task(features, labels)\n if accepted:\n break\n elif task.type == elasticdl_pb2.TRAINING:\n # TODO: optimize the logic to avoid unnecessary\n # get_model call.\n self.get_model(\n max(self._model_version, min_model_version),\n elasticdl_pb2.MINIMUM,\n )\n accepted, min_model_version, loss = self._run_training_task(\n features, labels\n )\n if accepted:\n self._logger.info(\"Loss is %f\" % loss.numpy())\n break\n elif task.type == elasticdl_pb2.PREDICTION:\n if self._model_version != min_model_version:\n self.get_model(min_model_version, elasticdl_pb2.FIXED)\n accepted = self._run_prediction_task(features)\n if accepted:\n break\n else:\n raise RuntimeError(\"Unrecognized task type, %s\" % task.type)\n else:\n # Worker got stuck, fail the task.\n # TODO: stop the worker if it fails to make any\n # progress for some time.\n raise RuntimeError(\"Worker got stuck\")\n return min_model_version\n\n def run(self):\n \"\"\"\n Fetches task from master and performs training or evaluation.\n \"\"\"\n while True:\n task = self.get_task()\n if not task.shard_file_name:\n if task.type == elasticdl_pb2.WAIT:\n # Wait a few seconds then try to get_task again\n time.sleep(5)\n continue\n else:\n # No more task\n self._logger.info(\"No more task, stopping\")\n break\n self._logger.info(\"Receive a new task: %d\", task.task_id)\n err_msg = \"\"\n try:\n self._handle_task(task)\n except RuntimeError as err:\n err_msg = str(err)\n traceback.print_exc()\n except Exception as ex:\n err_msg = str(ex)\n traceback.print_exc()\n raise ex\n self.report_task_result(task.task_id, err_msg)\n", "path": "elasticdl/python/worker/worker.py"}], "after_files": [{"content": "import logging\nimport time\nimport traceback\nfrom contextlib import closing\n\nimport recordio\nimport tensorflow as tf\nfrom tensorflow.python.ops import 
math_ops\n\nfrom elasticdl.proto import elasticdl_pb2, elasticdl_pb2_grpc\nfrom elasticdl.python.common.model_helper import (\n load_model_from_module,\n load_module,\n)\nfrom elasticdl.python.common.ndarray import (\n ndarray_to_tensor,\n tensor_to_ndarray,\n)\n\n# The default maximum number of a minibatch retry as its results\n# (e.g. gradients) are not accepted by master.\nDEFAULT_MAX_MINIBATCH_RETRY_NUM = 64\n\n\nclass Worker(object):\n \"\"\"ElasticDL worker\"\"\"\n\n def __init__(\n self,\n worker_id,\n model_file,\n input_fn=\"input_fn\",\n loss=\"loss\",\n optimizer=\"optimizer\",\n eval_metrics_fn=\"eval_metrics_fn\",\n channel=None,\n model_def=None,\n model_params=\"\",\n max_minibatch_retry_num=DEFAULT_MAX_MINIBATCH_RETRY_NUM,\n ):\n \"\"\"\n Arguments:\n model_file: A module to define the model\n channel: grpc channel\n max_minibatch_retry_num: The maximum number of a minibatch retry\n as its results (e.g. gradients) are not accepted by master.\n \"\"\"\n self._logger = logging.getLogger(__name__)\n self._worker_id = worker_id\n model_module = load_module(model_file).__dict__\n self._model = load_model_from_module(\n model_def, model_module, model_params\n )\n self._var_created = self._model.built\n self._input_fn = model_module[input_fn]\n self._opt_fn = model_module[optimizer]\n self._loss = model_module[loss]\n self._eval_metrics_fn = model_module[eval_metrics_fn]\n\n if channel is None:\n self._stub = None\n else:\n self._stub = elasticdl_pb2_grpc.MasterStub(channel)\n self._max_minibatch_retry_num = max_minibatch_retry_num\n self._model_version = -1\n\n def get_task(self):\n \"\"\"\n get task from master\n \"\"\"\n req = elasticdl_pb2.GetTaskRequest()\n req.worker_id = self._worker_id\n\n return self._stub.GetTask(req)\n\n def get_model(self, version, method):\n \"\"\"\n get model from master, and update model_version\n \"\"\"\n req = elasticdl_pb2.GetModelRequest()\n req.version = version\n req.method = method\n model = self._stub.GetModel(req)\n\n for var in self._model.trainable_variables:\n # Assumes all trainable variables exist in model.param.\n var.assign(tensor_to_ndarray(model.param[var.name]))\n self._model_version = model.version\n\n def report_task_result(self, task_id, err_msg):\n \"\"\"\n report task result to master\n \"\"\"\n report = elasticdl_pb2.ReportTaskResultRequest()\n report.task_id = task_id\n report.err_message = err_msg\n return self._stub.ReportTaskResult(report)\n\n def report_variable(self):\n \"\"\"\n report variable to ps.\n \"\"\"\n req = elasticdl_pb2.ReportVariableRequest()\n for v in self._model.trainable_variables:\n req.variable[v.name].CopyFrom(ndarray_to_tensor(v.numpy()))\n self._stub.ReportVariable(req)\n\n def report_gradient(self, grads):\n \"\"\"\n report gradient to ps, return (accepted, model_version) from rpc call.\n \"\"\"\n req = elasticdl_pb2.ReportGradientRequest()\n for g, v in zip(grads, self._model.trainable_variables):\n req.gradient[v.name].CopyFrom(ndarray_to_tensor(g.numpy()))\n req.model_version = self._model_version\n res = self._stub.ReportGradient(req)\n return res.accepted, res.model_version\n\n def report_evaluation_metrics(self, evaluation_metrics):\n \"\"\"\n report evaluation metrics to ps, return (accepted, model_version)\n from rpc call.\n \"\"\"\n req = elasticdl_pb2.ReportEvaluationMetricsRequest()\n for k, v in evaluation_metrics.items():\n v_np = v.numpy()\n # If scalar, convert to numpy 1D array with size 1\n if not v_np.shape:\n v_np = v_np.reshape(1)\n 
req.evaluation_metrics[k].CopyFrom(ndarray_to_tensor(v_np))\n req.model_version = self._model_version\n res = self._stub.ReportEvaluationMetrics(req)\n return res.accepted, res.model_version\n\n def report_prediction_outputs(self, predictions):\n self._logger.info(\"Predicted: %f\" % predictions.numpy())\n # TODO: Decide whether we need to send results to master first\n # or write results to destination directly from workers.\n # Also, need to think about how users configure where to\n # write results.\n return True\n\n def _get_batch(self, reader, batch_size):\n res = []\n for i in range(batch_size):\n record = reader.record()\n if record is None:\n break\n res.append(record)\n return res\n\n def _create_variable_and_report(self, features):\n # Use model.call to create variables, then report to ps\n _ = self._model.call(features)\n self.report_variable()\n self._var_created = True\n\n @tf.function\n def training_process(self, features, labels):\n with tf.GradientTape() as tape:\n outputs = self._model.call(features, training=True)\n loss = self._loss(outputs, labels)\n # Add regularization loss if any\n if self._model.losses:\n loss += math_ops.add_n(self._model.losses)\n grads = tape.gradient(loss, self._model.trainable_variables)\n return loss, grads\n\n @tf.function\n def evaluation_process(self, features, labels):\n outputs = self._model.call(features, training=False)\n evaluation_metrics = self._eval_metrics_fn(outputs, labels)\n return evaluation_metrics\n\n @tf.function\n def predict_process(self, features):\n outputs = self._model.call(features, training=False)\n return outputs\n\n def _run_training_task(self, features, labels):\n loss, grads = self.training_process(features, labels)\n accepted, min_model_version = self.report_gradient(grads)\n return accepted, min_model_version, loss\n\n def _run_evaluation_task(self, features, labels):\n evaluation_metrics = self.evaluation_process(features, labels)\n return self.report_evaluation_metrics(evaluation_metrics)\n\n def _run_prediction_task(self, features):\n predictions = self.predict_process(features)\n return self.report_prediction_outputs(predictions)\n\n def _handle_task(self, task):\n min_model_version = task.model_version\n with closing(\n recordio.Scanner(\n task.shard_file_name, task.start, task.end - task.start\n )\n ) as reader:\n while True:\n record_buf = self._get_batch(reader, task.minibatch_size)\n if not record_buf:\n break\n min_model_version = self._process_minibatch(\n task, record_buf, min_model_version\n )\n\n def _process_minibatch(self, task, record_buf, min_model_version):\n # TODO: Discuss how we separate input_fn for different tasks\n features, labels = self._input_fn(record_buf)\n if not self._var_created:\n self._create_variable_and_report(features)\n for _ in range(self._max_minibatch_retry_num):\n if task.type == elasticdl_pb2.EVALUATION:\n if min_model_version == -1:\n if self._model_version < 0:\n self.get_model(0, elasticdl_pb2.MINIMUM)\n elif self._model_version != min_model_version:\n self.get_model(min_model_version, elasticdl_pb2.FIXED)\n accepted, _ = self._run_evaluation_task(features, labels)\n if accepted:\n break\n elif task.type == elasticdl_pb2.TRAINING:\n # TODO: optimize the logic to avoid unnecessary\n # get_model call.\n self.get_model(\n max(self._model_version, min_model_version),\n elasticdl_pb2.MINIMUM,\n )\n accepted, min_model_version, loss = self._run_training_task(\n features, labels\n )\n if accepted:\n self._logger.info(\"Loss is %f\" % loss.numpy())\n break\n elif task.type 
== elasticdl_pb2.PREDICTION:\n if self._model_version != min_model_version:\n self.get_model(min_model_version, elasticdl_pb2.FIXED)\n accepted = self._run_prediction_task(features)\n if accepted:\n break\n else:\n raise RuntimeError(\"Unrecognized task type, %s\" % task.type)\n else:\n # Worker got stuck, fail the task.\n # TODO: stop the worker if it fails to make any\n # progress for some time.\n raise RuntimeError(\"Worker got stuck\")\n return min_model_version\n\n def run(self):\n \"\"\"\n Fetches task from master and performs training or evaluation.\n \"\"\"\n while True:\n task = self.get_task()\n if not task.shard_file_name:\n if task.type == elasticdl_pb2.WAIT:\n # Wait a few seconds then try to get_task again\n time.sleep(5)\n continue\n else:\n # No more task\n self._logger.info(\"No more task, stopping\")\n break\n self._logger.info(\"Receive a new task: %d\", task.task_id)\n err_msg = \"\"\n try:\n self._handle_task(task)\n except RuntimeError as err:\n err_msg = str(err)\n traceback.print_exc()\n except Exception as ex:\n err_msg = str(ex)\n traceback.print_exc()\n raise ex\n self.report_task_result(task.task_id, err_msg)\n", "path": "elasticdl/python/worker/worker.py"}]} | 2,971 | 443 |
gh_patches_debug_16288 | rasdani/github-patches | git_diff | pytorch__vision-7702 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
to_grayscale gives non-actionable deprecation warning
_Originally reported in the [user forum](https://discuss.pytorch.org/t/cannot-find-convert-color-space/182591) by `@function2`._
> When I use to_grayscale, there’s a deprecation warning:
> ```
> UserWarning: The function `to_grayscale(...)` is deprecated in will be removed in a future release. Instead, please use `convert_color_space(..., color_space=datapoints.ColorSpace.GRAY)`.
> ```
> However, I can’t find this function in the current code base
---
Note that this only applies to `torchvision.transforms.v2.functional`
https://github.com/pytorch/vision/blob/52eb5039bed1a23eee14014ff4cd6fd9cc9b2b08/torchvision/transforms/v2/functional/_deprecated.py#L12-L22
since the v1 version, i.e. `torchvision.transforms.functional` does not emit the warning
https://github.com/pytorch/vision/blob/52eb5039bed1a23eee14014ff4cd6fd9cc9b2b08/torchvision/transforms/functional.py#L1249-L1253
Fixing the v2 warning was forgotten in #7120.
cc @vfdev-5
--- END ISSUE ---
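For reference, the mismatch described above can be observed in isolation with a short sketch (a hypothetical standalone check, not one of the repository files below; it assumes `to_grayscale` is re-exported from `torchvision.transforms.v2.functional`, as the report implies):
```python
# Hedged sketch: capture and print the deprecation warning emitted by the v2
# functional, so its message can be checked against APIs that actually exist.
import warnings

import PIL.Image
from torchvision.transforms.v2 import functional as F_v2

img = PIL.Image.new("RGB", (4, 4))

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    F_v2.to_grayscale(img)

for w in caught:
    print(w.message)  # should point at a replacement that still exists
```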
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/transforms/v2/functional/_deprecated.py`
Content:
```
1 import warnings
2 from typing import Any, List, Union
3
4 import PIL.Image
5 import torch
6
7 from torchvision import datapoints
8 from torchvision.transforms import functional as _F
9
10
11 @torch.jit.unused
12 def to_grayscale(inpt: PIL.Image.Image, num_output_channels: int = 1) -> PIL.Image.Image:
13 call = ", num_output_channels=3" if num_output_channels == 3 else ""
14 replacement = "convert_color_space(..., color_space=datapoints.ColorSpace.GRAY)"
15 if num_output_channels == 3:
16 replacement = f"convert_color_space({replacement}, color_space=datapoints.ColorSpace.RGB)"
17 warnings.warn(
18 f"The function `to_grayscale(...{call})` is deprecated in will be removed in a future release. "
19 f"Instead, please use `{replacement}`.",
20 )
21
22 return _F.to_grayscale(inpt, num_output_channels=num_output_channels)
23
24
25 @torch.jit.unused
26 def to_tensor(inpt: Any) -> torch.Tensor:
27 warnings.warn(
28 "The function `to_tensor(...)` is deprecated and will be removed in a future release. "
29 "Instead, please use `to_image_tensor(...)` followed by `convert_image_dtype(...)`."
30 )
31 return _F.to_tensor(inpt)
32
33
34 def get_image_size(inpt: Union[datapoints._ImageTypeJIT, datapoints._VideoTypeJIT]) -> List[int]:
35 warnings.warn(
36 "The function `get_image_size(...)` is deprecated and will be removed in a future release. "
37 "Instead, please use `get_spatial_size(...)` which returns `[h, w]` instead of `[w, h]`."
38 )
39 return _F.get_image_size(inpt)
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchvision/transforms/v2/functional/_deprecated.py b/torchvision/transforms/v2/functional/_deprecated.py
--- a/torchvision/transforms/v2/functional/_deprecated.py
+++ b/torchvision/transforms/v2/functional/_deprecated.py
@@ -10,15 +10,10 @@
@torch.jit.unused
def to_grayscale(inpt: PIL.Image.Image, num_output_channels: int = 1) -> PIL.Image.Image:
- call = ", num_output_channels=3" if num_output_channels == 3 else ""
- replacement = "convert_color_space(..., color_space=datapoints.ColorSpace.GRAY)"
- if num_output_channels == 3:
- replacement = f"convert_color_space({replacement}, color_space=datapoints.ColorSpace.RGB)"
warnings.warn(
- f"The function `to_grayscale(...{call})` is deprecated in will be removed in a future release. "
- f"Instead, please use `{replacement}`.",
+ "The function `to_grayscale` is deprecated in will be removed in a future release. "
+ "Instead, please use `rgb_to_grayscale`.",
)
-
return _F.to_grayscale(inpt, num_output_channels=num_output_channels)
| {"golden_diff": "diff --git a/torchvision/transforms/v2/functional/_deprecated.py b/torchvision/transforms/v2/functional/_deprecated.py\n--- a/torchvision/transforms/v2/functional/_deprecated.py\n+++ b/torchvision/transforms/v2/functional/_deprecated.py\n@@ -10,15 +10,10 @@\n \n @torch.jit.unused\n def to_grayscale(inpt: PIL.Image.Image, num_output_channels: int = 1) -> PIL.Image.Image:\n- call = \", num_output_channels=3\" if num_output_channels == 3 else \"\"\n- replacement = \"convert_color_space(..., color_space=datapoints.ColorSpace.GRAY)\"\n- if num_output_channels == 3:\n- replacement = f\"convert_color_space({replacement}, color_space=datapoints.ColorSpace.RGB)\"\n warnings.warn(\n- f\"The function `to_grayscale(...{call})` is deprecated in will be removed in a future release. \"\n- f\"Instead, please use `{replacement}`.\",\n+ \"The function `to_grayscale` is deprecated in will be removed in a future release. \"\n+ \"Instead, please use `rgb_to_grayscale`.\",\n )\n-\n return _F.to_grayscale(inpt, num_output_channels=num_output_channels)\n", "issue": "to_grayscale gives non-actionable deprecation warning\n_Originally reported in the [user forum](https://discuss.pytorch.org/t/cannot-find-convert-color-space/182591) by `@function2`._\r\n\r\n> When I use to_grayscale, there\u2019s a deprecation warning:\r\n> ```\r\n> UserWarning: The function `to_grayscale(...)` is deprecated in will be removed in a future release. Instead, please use `convert_color_space(..., color_space=datapoints.ColorSpace.GRAY)`.\r\n> ```\r\n> However, I can\u2019t find this function in the current code base\r\n\r\n---\r\n\r\nNote that this only applies to `torchvision.transforms.v2.function`\r\n\r\nhttps://github.com/pytorch/vision/blob/52eb5039bed1a23eee14014ff4cd6fd9cc9b2b08/torchvision/transforms/v2/functional/_deprecated.py#L12-L22\r\n\r\nsince the v1 version, i.e. `torchvision.transforms.functional` does not emit the warning\r\n\r\nhttps://github.com/pytorch/vision/blob/52eb5039bed1a23eee14014ff4cd6fd9cc9b2b08/torchvision/transforms/functional.py#L1249-L1253\r\n\r\nFixing the v2 warning was forgotten in #7120.\r\n\n\ncc @vfdev-5\n", "before_files": [{"content": "import warnings\nfrom typing import Any, List, Union\n\nimport PIL.Image\nimport torch\n\nfrom torchvision import datapoints\nfrom torchvision.transforms import functional as _F\n\n\[email protected]\ndef to_grayscale(inpt: PIL.Image.Image, num_output_channels: int = 1) -> PIL.Image.Image:\n call = \", num_output_channels=3\" if num_output_channels == 3 else \"\"\n replacement = \"convert_color_space(..., color_space=datapoints.ColorSpace.GRAY)\"\n if num_output_channels == 3:\n replacement = f\"convert_color_space({replacement}, color_space=datapoints.ColorSpace.RGB)\"\n warnings.warn(\n f\"The function `to_grayscale(...{call})` is deprecated in will be removed in a future release. \"\n f\"Instead, please use `{replacement}`.\",\n )\n\n return _F.to_grayscale(inpt, num_output_channels=num_output_channels)\n\n\[email protected]\ndef to_tensor(inpt: Any) -> torch.Tensor:\n warnings.warn(\n \"The function `to_tensor(...)` is deprecated and will be removed in a future release. \"\n \"Instead, please use `to_image_tensor(...)` followed by `convert_image_dtype(...)`.\"\n )\n return _F.to_tensor(inpt)\n\n\ndef get_image_size(inpt: Union[datapoints._ImageTypeJIT, datapoints._VideoTypeJIT]) -> List[int]:\n warnings.warn(\n \"The function `get_image_size(...)` is deprecated and will be removed in a future release. 
\"\n \"Instead, please use `get_spatial_size(...)` which returns `[h, w]` instead of `[w, h]`.\"\n )\n return _F.get_image_size(inpt)\n", "path": "torchvision/transforms/v2/functional/_deprecated.py"}], "after_files": [{"content": "import warnings\nfrom typing import Any, List, Union\n\nimport PIL.Image\nimport torch\n\nfrom torchvision import datapoints\nfrom torchvision.transforms import functional as _F\n\n\[email protected]\ndef to_grayscale(inpt: PIL.Image.Image, num_output_channels: int = 1) -> PIL.Image.Image:\n warnings.warn(\n \"The function `to_grayscale` is deprecated in will be removed in a future release. \"\n \"Instead, please use `rgb_to_grayscale`.\",\n )\n return _F.to_grayscale(inpt, num_output_channels=num_output_channels)\n\n\[email protected]\ndef to_tensor(inpt: Any) -> torch.Tensor:\n warnings.warn(\n \"The function `to_tensor(...)` is deprecated and will be removed in a future release. \"\n \"Instead, please use `to_image_tensor(...)` followed by `convert_image_dtype(...)`.\"\n )\n return _F.to_tensor(inpt)\n\n\ndef get_image_size(inpt: Union[datapoints._ImageTypeJIT, datapoints._VideoTypeJIT]) -> List[int]:\n warnings.warn(\n \"The function `get_image_size(...)` is deprecated and will be removed in a future release. \"\n \"Instead, please use `get_spatial_size(...)` which returns `[h, w]` instead of `[w, h]`.\"\n )\n return _F.get_image_size(inpt)\n", "path": "torchvision/transforms/v2/functional/_deprecated.py"}]} | 1,025 | 276 |
gh_patches_debug_28038 | rasdani/github-patches | git_diff | TheAlgorithms__Python-796 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
math CheckPrime is wrong
The current implementation doesn't support:
* Negative values: -1 is not a prime number, and the current implementation raises a TypeError due to `math.sqrt` on negative values.
The current implementation returns the wrong value for:
* 0, which doesn't have any divisors; primes must have two.
* 1, which has just one divisor; primes must have two.
--- END ISSUE ---
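The wrong results described above come directly from the empty divisor loop; the following standalone sketch mirrors the buggy logic shown below (a reproduction, not a proposed fix):
```python
import math


def prime_check_buggy(number):
    # Same logic as maths/PrimeCheck.py: only even numbers above 2 are
    # rejected up front, then odd candidate divisors are tested.
    if number % 2 == 0 and number > 2:
        return False
    return all(number % i for i in range(3, int(math.sqrt(number)) + 1, 2))


# For 0 and 1, range(3, int(math.sqrt(n)) + 1, 2) is empty, so all() is
# vacuously True and both are wrongly reported as prime.
print(prime_check_buggy(0))  # True, but 0 is not prime
print(prime_check_buggy(1))  # True, but 1 is not prime

# Negative input never reaches the divisor test: math.sqrt rejects it first.
try:
    prime_check_buggy(-1)
except Exception as exc:
    print(type(exc).__name__, exc)
```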
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `maths/PrimeCheck.py`
Content:
```
1 import math
2 def primeCheck(number):
3 if number % 2 == 0 and number > 2:
4 return False
5 return all(number % i for i in range(3, int(math.sqrt(number)) + 1, 2))
6
7 def main():
8 print(primeCheck(37))
9 print(primeCheck(100))
10 print(primeCheck(77))
11
12 if __name__ == '__main__':
13 main()
14
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/maths/PrimeCheck.py b/maths/PrimeCheck.py
--- a/maths/PrimeCheck.py
+++ b/maths/PrimeCheck.py
@@ -1,13 +1,54 @@
import math
+import unittest
+
+
def primeCheck(number):
- if number % 2 == 0 and number > 2:
+ """
+ A number is prime if it has exactly two dividers: 1 and itself.
+ """
+ if number < 2:
+ # Negatives, 0 and 1 are not primes
return False
- return all(number % i for i in range(3, int(math.sqrt(number)) + 1, 2))
+ if number < 4:
+ # 2 and 3 are primes
+ return True
+ if number % 2 == 0:
+ # Even values are not primes
+ return False
+
+ # Except 2, all primes are odd. If any odd value divide
+ # the number, then that number is not prime.
+ odd_numbers = range(3, int(math.sqrt(number)) + 1, 2)
+ return not any(number % i == 0 for i in odd_numbers)
+
+
+class Test(unittest.TestCase):
+ def test_primes(self):
+ self.assertTrue(primeCheck(2))
+ self.assertTrue(primeCheck(3))
+ self.assertTrue(primeCheck(5))
+ self.assertTrue(primeCheck(7))
+ self.assertTrue(primeCheck(11))
+ self.assertTrue(primeCheck(13))
+ self.assertTrue(primeCheck(17))
+ self.assertTrue(primeCheck(19))
+ self.assertTrue(primeCheck(23))
+ self.assertTrue(primeCheck(29))
+
+ def test_not_primes(self):
+ self.assertFalse(primeCheck(-19),
+ "Negative numbers are not prime.")
+ self.assertFalse(primeCheck(0),
+ "Zero doesn't have any divider, primes must have two")
+ self.assertFalse(primeCheck(1),
+ "One just have 1 divider, primes must have two.")
+ self.assertFalse(primeCheck(2 * 2))
+ self.assertFalse(primeCheck(2 * 3))
+ self.assertFalse(primeCheck(3 * 3))
+ self.assertFalse(primeCheck(3 * 5))
+ self.assertFalse(primeCheck(3 * 5 * 7))
-def main():
- print(primeCheck(37))
- print(primeCheck(100))
- print(primeCheck(77))
if __name__ == '__main__':
- main()
+ unittest.main()
+
| {"golden_diff": "diff --git a/maths/PrimeCheck.py b/maths/PrimeCheck.py\n--- a/maths/PrimeCheck.py\n+++ b/maths/PrimeCheck.py\n@@ -1,13 +1,54 @@\n import math\n+import unittest\n+\n+\n def primeCheck(number):\n- if number % 2 == 0 and number > 2: \n+ \"\"\"\n+ A number is prime if it has exactly two dividers: 1 and itself.\n+ \"\"\"\n+ if number < 2:\n+ # Negatives, 0 and 1 are not primes\n return False\n- return all(number % i for i in range(3, int(math.sqrt(number)) + 1, 2))\n+ if number < 4:\n+ # 2 and 3 are primes\n+ return True\n+ if number % 2 == 0:\n+ # Even values are not primes\n+ return False\n+\n+ # Except 2, all primes are odd. If any odd value divide\n+ # the number, then that number is not prime.\n+ odd_numbers = range(3, int(math.sqrt(number)) + 1, 2)\n+ return not any(number % i == 0 for i in odd_numbers)\n+\n+\n+class Test(unittest.TestCase):\n+ def test_primes(self):\n+ self.assertTrue(primeCheck(2))\n+ self.assertTrue(primeCheck(3))\n+ self.assertTrue(primeCheck(5))\n+ self.assertTrue(primeCheck(7))\n+ self.assertTrue(primeCheck(11))\n+ self.assertTrue(primeCheck(13))\n+ self.assertTrue(primeCheck(17))\n+ self.assertTrue(primeCheck(19))\n+ self.assertTrue(primeCheck(23))\n+ self.assertTrue(primeCheck(29))\n+\n+ def test_not_primes(self):\n+ self.assertFalse(primeCheck(-19),\n+ \"Negative numbers are not prime.\")\n+ self.assertFalse(primeCheck(0),\n+ \"Zero doesn't have any divider, primes must have two\")\n+ self.assertFalse(primeCheck(1),\n+ \"One just have 1 divider, primes must have two.\")\n+ self.assertFalse(primeCheck(2 * 2))\n+ self.assertFalse(primeCheck(2 * 3))\n+ self.assertFalse(primeCheck(3 * 3))\n+ self.assertFalse(primeCheck(3 * 5))\n+ self.assertFalse(primeCheck(3 * 5 * 7))\n \n-def main():\n- print(primeCheck(37))\n- print(primeCheck(100))\n- print(primeCheck(77))\n \n if __name__ == '__main__':\n-\tmain()\n+ unittest.main()\n+\n", "issue": "math CheckPrime is wrong\nThe current implementation doesn't support:\r\n\r\n* Negative values, -1 is not a prime number. Current implementation raise a TypeError due to `math.sqrt` on negative values.\r\n\r\nThe current implementation return the wrong value for:\r\n\r\n* 0, 0 doesn't have any divider, primes must have two.\r\n* 1, 1 just have one divider, primes must have two.\n", "before_files": [{"content": "import math\ndef primeCheck(number):\n if number % 2 == 0 and number > 2: \n return False\n return all(number % i for i in range(3, int(math.sqrt(number)) + 1, 2))\n\ndef main():\n print(primeCheck(37))\n print(primeCheck(100))\n print(primeCheck(77))\n\nif __name__ == '__main__':\n\tmain()\n", "path": "maths/PrimeCheck.py"}], "after_files": [{"content": "import math\nimport unittest\n\n\ndef primeCheck(number):\n \"\"\"\n A number is prime if it has exactly two dividers: 1 and itself.\n \"\"\"\n if number < 2:\n # Negatives, 0 and 1 are not primes\n return False\n if number < 4:\n # 2 and 3 are primes\n return True\n if number % 2 == 0:\n # Even values are not primes\n return False\n\n # Except 2, all primes are odd. 
If any odd value divide\n # the number, then that number is not prime.\n odd_numbers = range(3, int(math.sqrt(number)) + 1, 2)\n return not any(number % i == 0 for i in odd_numbers)\n\n\nclass Test(unittest.TestCase):\n def test_primes(self):\n self.assertTrue(primeCheck(2))\n self.assertTrue(primeCheck(3))\n self.assertTrue(primeCheck(5))\n self.assertTrue(primeCheck(7))\n self.assertTrue(primeCheck(11))\n self.assertTrue(primeCheck(13))\n self.assertTrue(primeCheck(17))\n self.assertTrue(primeCheck(19))\n self.assertTrue(primeCheck(23))\n self.assertTrue(primeCheck(29))\n\n def test_not_primes(self):\n self.assertFalse(primeCheck(-19),\n \"Negative numbers are not prime.\")\n self.assertFalse(primeCheck(0),\n \"Zero doesn't have any divider, primes must have two\")\n self.assertFalse(primeCheck(1),\n \"One just have 1 divider, primes must have two.\")\n self.assertFalse(primeCheck(2 * 2))\n self.assertFalse(primeCheck(2 * 3))\n self.assertFalse(primeCheck(3 * 3))\n self.assertFalse(primeCheck(3 * 5))\n self.assertFalse(primeCheck(3 * 5 * 7))\n\n\nif __name__ == '__main__':\n unittest.main()\n\n", "path": "maths/PrimeCheck.py"}]} | 458 | 599 |
gh_patches_debug_38846 | rasdani/github-patches | git_diff | python-discord__bot-971 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove periodic ping in #verification
This creates "accept bombs" that may trigger our anti-raid and burst spam filters, and is not really necessary anymore. We have more than enough people joining without having to periodically nag at them.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bot/cogs/verification.py`
Content:
```
1 import logging
2 from contextlib import suppress
3 from datetime import datetime
4
5 from discord import Colour, Forbidden, Message, NotFound, Object
6 from discord.ext import tasks
7 from discord.ext.commands import Cog, Context, command
8
9 from bot import constants
10 from bot.bot import Bot
11 from bot.cogs.moderation import ModLog
12 from bot.decorators import in_whitelist, without_role
13 from bot.utils.checks import InWhitelistCheckFailure, without_role_check
14
15 log = logging.getLogger(__name__)
16
17 WELCOME_MESSAGE = f"""
18 Hello! Welcome to the server, and thanks for verifying yourself!
19
20 For your records, these are the documents you accepted:
21
22 `1)` Our rules, here: <https://pythondiscord.com/pages/rules>
23 `2)` Our privacy policy, here: <https://pythondiscord.com/pages/privacy> - you can find information on how to have \
24 your information removed here as well.
25
26 Feel free to review them at any point!
27
28 Additionally, if you'd like to receive notifications for the announcements \
29 we post in <#{constants.Channels.announcements}>
30 from time to time, you can send `!subscribe` to <#{constants.Channels.bot_commands}> at any time \
31 to assign yourself the **Announcements** role. We'll mention this role every time we make an announcement.
32
33 If you'd like to unsubscribe from the announcement notifications, simply send `!unsubscribe` to \
34 <#{constants.Channels.bot_commands}>.
35 """
36
37 if constants.DEBUG_MODE:
38 PERIODIC_PING = "Periodic checkpoint message successfully sent."
39 else:
40 PERIODIC_PING = (
41 f"@everyone To verify that you have read our rules, please type `{constants.Bot.prefix}accept`."
42 " If you encounter any problems during the verification process, "
43 f"send a direct message to a staff member."
44 )
45 BOT_MESSAGE_DELETE_DELAY = 10
46
47
48 class Verification(Cog):
49 """User verification and role self-management."""
50
51 def __init__(self, bot: Bot):
52 self.bot = bot
53 self.periodic_ping.start()
54
55 @property
56 def mod_log(self) -> ModLog:
57 """Get currently loaded ModLog cog instance."""
58 return self.bot.get_cog("ModLog")
59
60 @Cog.listener()
61 async def on_message(self, message: Message) -> None:
62 """Check new message event for messages to the checkpoint channel & process."""
63 if message.channel.id != constants.Channels.verification:
64 return # Only listen for #checkpoint messages
65
66 if message.author.bot:
67 # They're a bot, delete their message after the delay.
68 # But not the periodic ping; we like that one.
69 if message.content != PERIODIC_PING:
70 await message.delete(delay=BOT_MESSAGE_DELETE_DELAY)
71 return
72
73 # if a user mentions a role or guild member
74 # alert the mods in mod-alerts channel
75 if message.mentions or message.role_mentions:
76 log.debug(
77 f"{message.author} mentioned one or more users "
78 f"and/or roles in {message.channel.name}"
79 )
80
81 embed_text = (
82 f"{message.author.mention} sent a message in "
83 f"{message.channel.mention} that contained user and/or role mentions."
84 f"\n\n**Original message:**\n>>> {message.content}"
85 )
86
87 # Send pretty mod log embed to mod-alerts
88 await self.mod_log.send_log_message(
89 icon_url=constants.Icons.filtering,
90 colour=Colour(constants.Colours.soft_red),
91 title=f"User/Role mentioned in {message.channel.name}",
92 text=embed_text,
93 thumbnail=message.author.avatar_url_as(static_format="png"),
94 channel_id=constants.Channels.mod_alerts,
95 )
96
97 ctx: Context = await self.bot.get_context(message)
98 if ctx.command is not None and ctx.command.name == "accept":
99 return
100
101 if any(r.id == constants.Roles.verified for r in ctx.author.roles):
102 log.info(
103 f"{ctx.author} posted '{ctx.message.content}' "
104 "in the verification channel, but is already verified."
105 )
106 return
107
108 log.debug(
109 f"{ctx.author} posted '{ctx.message.content}' in the verification "
110 "channel. We are providing instructions how to verify."
111 )
112 await ctx.send(
113 f"{ctx.author.mention} Please type `!accept` to verify that you accept our rules, "
114 f"and gain access to the rest of the server.",
115 delete_after=20
116 )
117
118 log.trace(f"Deleting the message posted by {ctx.author}")
119 with suppress(NotFound):
120 await ctx.message.delete()
121
122 @command(name='accept', aliases=('verify', 'verified', 'accepted'), hidden=True)
123 @without_role(constants.Roles.verified)
124 @in_whitelist(channels=(constants.Channels.verification,))
125 async def accept_command(self, ctx: Context, *_) -> None: # We don't actually care about the args
126 """Accept our rules and gain access to the rest of the server."""
127 log.debug(f"{ctx.author} called !accept. Assigning the 'Developer' role.")
128 await ctx.author.add_roles(Object(constants.Roles.verified), reason="Accepted the rules")
129 try:
130 await ctx.author.send(WELCOME_MESSAGE)
131 except Forbidden:
132 log.info(f"Sending welcome message failed for {ctx.author}.")
133 finally:
134 log.trace(f"Deleting accept message by {ctx.author}.")
135 with suppress(NotFound):
136 self.mod_log.ignore(constants.Event.message_delete, ctx.message.id)
137 await ctx.message.delete()
138
139 @command(name='subscribe')
140 @in_whitelist(channels=(constants.Channels.bot_commands,))
141 async def subscribe_command(self, ctx: Context, *_) -> None: # We don't actually care about the args
142 """Subscribe to announcement notifications by assigning yourself the role."""
143 has_role = False
144
145 for role in ctx.author.roles:
146 if role.id == constants.Roles.announcements:
147 has_role = True
148 break
149
150 if has_role:
151 await ctx.send(f"{ctx.author.mention} You're already subscribed!")
152 return
153
154 log.debug(f"{ctx.author} called !subscribe. Assigning the 'Announcements' role.")
155 await ctx.author.add_roles(Object(constants.Roles.announcements), reason="Subscribed to announcements")
156
157 log.trace(f"Deleting the message posted by {ctx.author}.")
158
159 await ctx.send(
160 f"{ctx.author.mention} Subscribed to <#{constants.Channels.announcements}> notifications.",
161 )
162
163 @command(name='unsubscribe')
164 @in_whitelist(channels=(constants.Channels.bot_commands,))
165 async def unsubscribe_command(self, ctx: Context, *_) -> None: # We don't actually care about the args
166 """Unsubscribe from announcement notifications by removing the role from yourself."""
167 has_role = False
168
169 for role in ctx.author.roles:
170 if role.id == constants.Roles.announcements:
171 has_role = True
172 break
173
174 if not has_role:
175 await ctx.send(f"{ctx.author.mention} You're already unsubscribed!")
176 return
177
178 log.debug(f"{ctx.author} called !unsubscribe. Removing the 'Announcements' role.")
179 await ctx.author.remove_roles(Object(constants.Roles.announcements), reason="Unsubscribed from announcements")
180
181 log.trace(f"Deleting the message posted by {ctx.author}.")
182
183 await ctx.send(
184 f"{ctx.author.mention} Unsubscribed from <#{constants.Channels.announcements}> notifications."
185 )
186
187 # This cannot be static (must have a __func__ attribute).
188 async def cog_command_error(self, ctx: Context, error: Exception) -> None:
189 """Check for & ignore any InWhitelistCheckFailure."""
190 if isinstance(error, InWhitelistCheckFailure):
191 error.handled = True
192
193 @staticmethod
194 def bot_check(ctx: Context) -> bool:
195 """Block any command within the verification channel that is not !accept."""
196 if ctx.channel.id == constants.Channels.verification and without_role_check(ctx, *constants.MODERATION_ROLES):
197 return ctx.command.name == "accept"
198 else:
199 return True
200
201 @tasks.loop(hours=12)
202 async def periodic_ping(self) -> None:
203 """Every week, mention @everyone to remind them to verify."""
204 messages = self.bot.get_channel(constants.Channels.verification).history(limit=10)
205 need_to_post = True # True if a new message needs to be sent.
206
207 async for message in messages:
208 if message.author == self.bot.user and message.content == PERIODIC_PING:
209 delta = datetime.utcnow() - message.created_at # Time since last message.
210 if delta.days >= 7: # Message is older than a week.
211 await message.delete()
212 else:
213 need_to_post = False
214
215 break
216
217 if need_to_post:
218 await self.bot.get_channel(constants.Channels.verification).send(PERIODIC_PING)
219
220 @periodic_ping.before_loop
221 async def before_ping(self) -> None:
222 """Only start the loop when the bot is ready."""
223 await self.bot.wait_until_guild_available()
224
225 def cog_unload(self) -> None:
226 """Cancel the periodic ping task when the cog is unloaded."""
227 self.periodic_ping.cancel()
228
229
230 def setup(bot: Bot) -> None:
231 """Load the Verification cog."""
232 bot.add_cog(Verification(bot))
233
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bot/cogs/verification.py b/bot/cogs/verification.py
--- a/bot/cogs/verification.py
+++ b/bot/cogs/verification.py
@@ -1,9 +1,7 @@
import logging
from contextlib import suppress
-from datetime import datetime
from discord import Colour, Forbidden, Message, NotFound, Object
-from discord.ext import tasks
from discord.ext.commands import Cog, Context, command
from bot import constants
@@ -34,14 +32,6 @@
<#{constants.Channels.bot_commands}>.
"""
-if constants.DEBUG_MODE:
- PERIODIC_PING = "Periodic checkpoint message successfully sent."
-else:
- PERIODIC_PING = (
- f"@everyone To verify that you have read our rules, please type `{constants.Bot.prefix}accept`."
- " If you encounter any problems during the verification process, "
- f"send a direct message to a staff member."
- )
BOT_MESSAGE_DELETE_DELAY = 10
@@ -50,7 +40,6 @@
def __init__(self, bot: Bot):
self.bot = bot
- self.periodic_ping.start()
@property
def mod_log(self) -> ModLog:
@@ -65,9 +54,7 @@
if message.author.bot:
# They're a bot, delete their message after the delay.
- # But not the periodic ping; we like that one.
- if message.content != PERIODIC_PING:
- await message.delete(delay=BOT_MESSAGE_DELETE_DELAY)
+ await message.delete(delay=BOT_MESSAGE_DELETE_DELAY)
return
# if a user mentions a role or guild member
@@ -198,34 +185,6 @@
else:
return True
- @tasks.loop(hours=12)
- async def periodic_ping(self) -> None:
- """Every week, mention @everyone to remind them to verify."""
- messages = self.bot.get_channel(constants.Channels.verification).history(limit=10)
- need_to_post = True # True if a new message needs to be sent.
-
- async for message in messages:
- if message.author == self.bot.user and message.content == PERIODIC_PING:
- delta = datetime.utcnow() - message.created_at # Time since last message.
- if delta.days >= 7: # Message is older than a week.
- await message.delete()
- else:
- need_to_post = False
-
- break
-
- if need_to_post:
- await self.bot.get_channel(constants.Channels.verification).send(PERIODIC_PING)
-
- @periodic_ping.before_loop
- async def before_ping(self) -> None:
- """Only start the loop when the bot is ready."""
- await self.bot.wait_until_guild_available()
-
- def cog_unload(self) -> None:
- """Cancel the periodic ping task when the cog is unloaded."""
- self.periodic_ping.cancel()
-
def setup(bot: Bot) -> None:
"""Load the Verification cog."""
| {"golden_diff": "diff --git a/bot/cogs/verification.py b/bot/cogs/verification.py\n--- a/bot/cogs/verification.py\n+++ b/bot/cogs/verification.py\n@@ -1,9 +1,7 @@\n import logging\n from contextlib import suppress\n-from datetime import datetime\n \n from discord import Colour, Forbidden, Message, NotFound, Object\n-from discord.ext import tasks\n from discord.ext.commands import Cog, Context, command\n \n from bot import constants\n@@ -34,14 +32,6 @@\n <#{constants.Channels.bot_commands}>.\n \"\"\"\n \n-if constants.DEBUG_MODE:\n- PERIODIC_PING = \"Periodic checkpoint message successfully sent.\"\n-else:\n- PERIODIC_PING = (\n- f\"@everyone To verify that you have read our rules, please type `{constants.Bot.prefix}accept`.\"\n- \" If you encounter any problems during the verification process, \"\n- f\"send a direct message to a staff member.\"\n- )\n BOT_MESSAGE_DELETE_DELAY = 10\n \n \n@@ -50,7 +40,6 @@\n \n def __init__(self, bot: Bot):\n self.bot = bot\n- self.periodic_ping.start()\n \n @property\n def mod_log(self) -> ModLog:\n@@ -65,9 +54,7 @@\n \n if message.author.bot:\n # They're a bot, delete their message after the delay.\n- # But not the periodic ping; we like that one.\n- if message.content != PERIODIC_PING:\n- await message.delete(delay=BOT_MESSAGE_DELETE_DELAY)\n+ await message.delete(delay=BOT_MESSAGE_DELETE_DELAY)\n return\n \n # if a user mentions a role or guild member\n@@ -198,34 +185,6 @@\n else:\n return True\n \n- @tasks.loop(hours=12)\n- async def periodic_ping(self) -> None:\n- \"\"\"Every week, mention @everyone to remind them to verify.\"\"\"\n- messages = self.bot.get_channel(constants.Channels.verification).history(limit=10)\n- need_to_post = True # True if a new message needs to be sent.\n-\n- async for message in messages:\n- if message.author == self.bot.user and message.content == PERIODIC_PING:\n- delta = datetime.utcnow() - message.created_at # Time since last message.\n- if delta.days >= 7: # Message is older than a week.\n- await message.delete()\n- else:\n- need_to_post = False\n-\n- break\n-\n- if need_to_post:\n- await self.bot.get_channel(constants.Channels.verification).send(PERIODIC_PING)\n-\n- @periodic_ping.before_loop\n- async def before_ping(self) -> None:\n- \"\"\"Only start the loop when the bot is ready.\"\"\"\n- await self.bot.wait_until_guild_available()\n-\n- def cog_unload(self) -> None:\n- \"\"\"Cancel the periodic ping task when the cog is unloaded.\"\"\"\n- self.periodic_ping.cancel()\n-\n \n def setup(bot: Bot) -> None:\n \"\"\"Load the Verification cog.\"\"\"\n", "issue": "Remove periodic ping in #verification\nThis creates \"accept bombs\" that may trigger our anti-raid and burst spam filters, and is not really necessary anymore. We have more than enough people joining without having to periodically nag at them.\n", "before_files": [{"content": "import logging\nfrom contextlib import suppress\nfrom datetime import datetime\n\nfrom discord import Colour, Forbidden, Message, NotFound, Object\nfrom discord.ext import tasks\nfrom discord.ext.commands import Cog, Context, command\n\nfrom bot import constants\nfrom bot.bot import Bot\nfrom bot.cogs.moderation import ModLog\nfrom bot.decorators import in_whitelist, without_role\nfrom bot.utils.checks import InWhitelistCheckFailure, without_role_check\n\nlog = logging.getLogger(__name__)\n\nWELCOME_MESSAGE = f\"\"\"\nHello! 
Welcome to the server, and thanks for verifying yourself!\n\nFor your records, these are the documents you accepted:\n\n`1)` Our rules, here: <https://pythondiscord.com/pages/rules>\n`2)` Our privacy policy, here: <https://pythondiscord.com/pages/privacy> - you can find information on how to have \\\nyour information removed here as well.\n\nFeel free to review them at any point!\n\nAdditionally, if you'd like to receive notifications for the announcements \\\nwe post in <#{constants.Channels.announcements}>\nfrom time to time, you can send `!subscribe` to <#{constants.Channels.bot_commands}> at any time \\\nto assign yourself the **Announcements** role. We'll mention this role every time we make an announcement.\n\nIf you'd like to unsubscribe from the announcement notifications, simply send `!unsubscribe` to \\\n<#{constants.Channels.bot_commands}>.\n\"\"\"\n\nif constants.DEBUG_MODE:\n PERIODIC_PING = \"Periodic checkpoint message successfully sent.\"\nelse:\n PERIODIC_PING = (\n f\"@everyone To verify that you have read our rules, please type `{constants.Bot.prefix}accept`.\"\n \" If you encounter any problems during the verification process, \"\n f\"send a direct message to a staff member.\"\n )\nBOT_MESSAGE_DELETE_DELAY = 10\n\n\nclass Verification(Cog):\n \"\"\"User verification and role self-management.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.periodic_ping.start()\n\n @property\n def mod_log(self) -> ModLog:\n \"\"\"Get currently loaded ModLog cog instance.\"\"\"\n return self.bot.get_cog(\"ModLog\")\n\n @Cog.listener()\n async def on_message(self, message: Message) -> None:\n \"\"\"Check new message event for messages to the checkpoint channel & process.\"\"\"\n if message.channel.id != constants.Channels.verification:\n return # Only listen for #checkpoint messages\n\n if message.author.bot:\n # They're a bot, delete their message after the delay.\n # But not the periodic ping; we like that one.\n if message.content != PERIODIC_PING:\n await message.delete(delay=BOT_MESSAGE_DELETE_DELAY)\n return\n\n # if a user mentions a role or guild member\n # alert the mods in mod-alerts channel\n if message.mentions or message.role_mentions:\n log.debug(\n f\"{message.author} mentioned one or more users \"\n f\"and/or roles in {message.channel.name}\"\n )\n\n embed_text = (\n f\"{message.author.mention} sent a message in \"\n f\"{message.channel.mention} that contained user and/or role mentions.\"\n f\"\\n\\n**Original message:**\\n>>> {message.content}\"\n )\n\n # Send pretty mod log embed to mod-alerts\n await self.mod_log.send_log_message(\n icon_url=constants.Icons.filtering,\n colour=Colour(constants.Colours.soft_red),\n title=f\"User/Role mentioned in {message.channel.name}\",\n text=embed_text,\n thumbnail=message.author.avatar_url_as(static_format=\"png\"),\n channel_id=constants.Channels.mod_alerts,\n )\n\n ctx: Context = await self.bot.get_context(message)\n if ctx.command is not None and ctx.command.name == \"accept\":\n return\n\n if any(r.id == constants.Roles.verified for r in ctx.author.roles):\n log.info(\n f\"{ctx.author} posted '{ctx.message.content}' \"\n \"in the verification channel, but is already verified.\"\n )\n return\n\n log.debug(\n f\"{ctx.author} posted '{ctx.message.content}' in the verification \"\n \"channel. 
We are providing instructions how to verify.\"\n )\n await ctx.send(\n f\"{ctx.author.mention} Please type `!accept` to verify that you accept our rules, \"\n f\"and gain access to the rest of the server.\",\n delete_after=20\n )\n\n log.trace(f\"Deleting the message posted by {ctx.author}\")\n with suppress(NotFound):\n await ctx.message.delete()\n\n @command(name='accept', aliases=('verify', 'verified', 'accepted'), hidden=True)\n @without_role(constants.Roles.verified)\n @in_whitelist(channels=(constants.Channels.verification,))\n async def accept_command(self, ctx: Context, *_) -> None: # We don't actually care about the args\n \"\"\"Accept our rules and gain access to the rest of the server.\"\"\"\n log.debug(f\"{ctx.author} called !accept. Assigning the 'Developer' role.\")\n await ctx.author.add_roles(Object(constants.Roles.verified), reason=\"Accepted the rules\")\n try:\n await ctx.author.send(WELCOME_MESSAGE)\n except Forbidden:\n log.info(f\"Sending welcome message failed for {ctx.author}.\")\n finally:\n log.trace(f\"Deleting accept message by {ctx.author}.\")\n with suppress(NotFound):\n self.mod_log.ignore(constants.Event.message_delete, ctx.message.id)\n await ctx.message.delete()\n\n @command(name='subscribe')\n @in_whitelist(channels=(constants.Channels.bot_commands,))\n async def subscribe_command(self, ctx: Context, *_) -> None: # We don't actually care about the args\n \"\"\"Subscribe to announcement notifications by assigning yourself the role.\"\"\"\n has_role = False\n\n for role in ctx.author.roles:\n if role.id == constants.Roles.announcements:\n has_role = True\n break\n\n if has_role:\n await ctx.send(f\"{ctx.author.mention} You're already subscribed!\")\n return\n\n log.debug(f\"{ctx.author} called !subscribe. Assigning the 'Announcements' role.\")\n await ctx.author.add_roles(Object(constants.Roles.announcements), reason=\"Subscribed to announcements\")\n\n log.trace(f\"Deleting the message posted by {ctx.author}.\")\n\n await ctx.send(\n f\"{ctx.author.mention} Subscribed to <#{constants.Channels.announcements}> notifications.\",\n )\n\n @command(name='unsubscribe')\n @in_whitelist(channels=(constants.Channels.bot_commands,))\n async def unsubscribe_command(self, ctx: Context, *_) -> None: # We don't actually care about the args\n \"\"\"Unsubscribe from announcement notifications by removing the role from yourself.\"\"\"\n has_role = False\n\n for role in ctx.author.roles:\n if role.id == constants.Roles.announcements:\n has_role = True\n break\n\n if not has_role:\n await ctx.send(f\"{ctx.author.mention} You're already unsubscribed!\")\n return\n\n log.debug(f\"{ctx.author} called !unsubscribe. 
Removing the 'Announcements' role.\")\n await ctx.author.remove_roles(Object(constants.Roles.announcements), reason=\"Unsubscribed from announcements\")\n\n log.trace(f\"Deleting the message posted by {ctx.author}.\")\n\n await ctx.send(\n f\"{ctx.author.mention} Unsubscribed from <#{constants.Channels.announcements}> notifications.\"\n )\n\n # This cannot be static (must have a __func__ attribute).\n async def cog_command_error(self, ctx: Context, error: Exception) -> None:\n \"\"\"Check for & ignore any InWhitelistCheckFailure.\"\"\"\n if isinstance(error, InWhitelistCheckFailure):\n error.handled = True\n\n @staticmethod\n def bot_check(ctx: Context) -> bool:\n \"\"\"Block any command within the verification channel that is not !accept.\"\"\"\n if ctx.channel.id == constants.Channels.verification and without_role_check(ctx, *constants.MODERATION_ROLES):\n return ctx.command.name == \"accept\"\n else:\n return True\n\n @tasks.loop(hours=12)\n async def periodic_ping(self) -> None:\n \"\"\"Every week, mention @everyone to remind them to verify.\"\"\"\n messages = self.bot.get_channel(constants.Channels.verification).history(limit=10)\n need_to_post = True # True if a new message needs to be sent.\n\n async for message in messages:\n if message.author == self.bot.user and message.content == PERIODIC_PING:\n delta = datetime.utcnow() - message.created_at # Time since last message.\n if delta.days >= 7: # Message is older than a week.\n await message.delete()\n else:\n need_to_post = False\n\n break\n\n if need_to_post:\n await self.bot.get_channel(constants.Channels.verification).send(PERIODIC_PING)\n\n @periodic_ping.before_loop\n async def before_ping(self) -> None:\n \"\"\"Only start the loop when the bot is ready.\"\"\"\n await self.bot.wait_until_guild_available()\n\n def cog_unload(self) -> None:\n \"\"\"Cancel the periodic ping task when the cog is unloaded.\"\"\"\n self.periodic_ping.cancel()\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Verification cog.\"\"\"\n bot.add_cog(Verification(bot))\n", "path": "bot/cogs/verification.py"}], "after_files": [{"content": "import logging\nfrom contextlib import suppress\n\nfrom discord import Colour, Forbidden, Message, NotFound, Object\nfrom discord.ext.commands import Cog, Context, command\n\nfrom bot import constants\nfrom bot.bot import Bot\nfrom bot.cogs.moderation import ModLog\nfrom bot.decorators import in_whitelist, without_role\nfrom bot.utils.checks import InWhitelistCheckFailure, without_role_check\n\nlog = logging.getLogger(__name__)\n\nWELCOME_MESSAGE = f\"\"\"\nHello! Welcome to the server, and thanks for verifying yourself!\n\nFor your records, these are the documents you accepted:\n\n`1)` Our rules, here: <https://pythondiscord.com/pages/rules>\n`2)` Our privacy policy, here: <https://pythondiscord.com/pages/privacy> - you can find information on how to have \\\nyour information removed here as well.\n\nFeel free to review them at any point!\n\nAdditionally, if you'd like to receive notifications for the announcements \\\nwe post in <#{constants.Channels.announcements}>\nfrom time to time, you can send `!subscribe` to <#{constants.Channels.bot_commands}> at any time \\\nto assign yourself the **Announcements** role. 
We'll mention this role every time we make an announcement.\n\nIf you'd like to unsubscribe from the announcement notifications, simply send `!unsubscribe` to \\\n<#{constants.Channels.bot_commands}>.\n\"\"\"\n\nBOT_MESSAGE_DELETE_DELAY = 10\n\n\nclass Verification(Cog):\n \"\"\"User verification and role self-management.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n\n @property\n def mod_log(self) -> ModLog:\n \"\"\"Get currently loaded ModLog cog instance.\"\"\"\n return self.bot.get_cog(\"ModLog\")\n\n @Cog.listener()\n async def on_message(self, message: Message) -> None:\n \"\"\"Check new message event for messages to the checkpoint channel & process.\"\"\"\n if message.channel.id != constants.Channels.verification:\n return # Only listen for #checkpoint messages\n\n if message.author.bot:\n # They're a bot, delete their message after the delay.\n await message.delete(delay=BOT_MESSAGE_DELETE_DELAY)\n return\n\n # if a user mentions a role or guild member\n # alert the mods in mod-alerts channel\n if message.mentions or message.role_mentions:\n log.debug(\n f\"{message.author} mentioned one or more users \"\n f\"and/or roles in {message.channel.name}\"\n )\n\n embed_text = (\n f\"{message.author.mention} sent a message in \"\n f\"{message.channel.mention} that contained user and/or role mentions.\"\n f\"\\n\\n**Original message:**\\n>>> {message.content}\"\n )\n\n # Send pretty mod log embed to mod-alerts\n await self.mod_log.send_log_message(\n icon_url=constants.Icons.filtering,\n colour=Colour(constants.Colours.soft_red),\n title=f\"User/Role mentioned in {message.channel.name}\",\n text=embed_text,\n thumbnail=message.author.avatar_url_as(static_format=\"png\"),\n channel_id=constants.Channels.mod_alerts,\n )\n\n ctx: Context = await self.bot.get_context(message)\n if ctx.command is not None and ctx.command.name == \"accept\":\n return\n\n if any(r.id == constants.Roles.verified for r in ctx.author.roles):\n log.info(\n f\"{ctx.author} posted '{ctx.message.content}' \"\n \"in the verification channel, but is already verified.\"\n )\n return\n\n log.debug(\n f\"{ctx.author} posted '{ctx.message.content}' in the verification \"\n \"channel. We are providing instructions how to verify.\"\n )\n await ctx.send(\n f\"{ctx.author.mention} Please type `!accept` to verify that you accept our rules, \"\n f\"and gain access to the rest of the server.\",\n delete_after=20\n )\n\n log.trace(f\"Deleting the message posted by {ctx.author}\")\n with suppress(NotFound):\n await ctx.message.delete()\n\n @command(name='accept', aliases=('verify', 'verified', 'accepted'), hidden=True)\n @without_role(constants.Roles.verified)\n @in_whitelist(channels=(constants.Channels.verification,))\n async def accept_command(self, ctx: Context, *_) -> None: # We don't actually care about the args\n \"\"\"Accept our rules and gain access to the rest of the server.\"\"\"\n log.debug(f\"{ctx.author} called !accept. 
Assigning the 'Developer' role.\")\n await ctx.author.add_roles(Object(constants.Roles.verified), reason=\"Accepted the rules\")\n try:\n await ctx.author.send(WELCOME_MESSAGE)\n except Forbidden:\n log.info(f\"Sending welcome message failed for {ctx.author}.\")\n finally:\n log.trace(f\"Deleting accept message by {ctx.author}.\")\n with suppress(NotFound):\n self.mod_log.ignore(constants.Event.message_delete, ctx.message.id)\n await ctx.message.delete()\n\n @command(name='subscribe')\n @in_whitelist(channels=(constants.Channels.bot_commands,))\n async def subscribe_command(self, ctx: Context, *_) -> None: # We don't actually care about the args\n \"\"\"Subscribe to announcement notifications by assigning yourself the role.\"\"\"\n has_role = False\n\n for role in ctx.author.roles:\n if role.id == constants.Roles.announcements:\n has_role = True\n break\n\n if has_role:\n await ctx.send(f\"{ctx.author.mention} You're already subscribed!\")\n return\n\n log.debug(f\"{ctx.author} called !subscribe. Assigning the 'Announcements' role.\")\n await ctx.author.add_roles(Object(constants.Roles.announcements), reason=\"Subscribed to announcements\")\n\n log.trace(f\"Deleting the message posted by {ctx.author}.\")\n\n await ctx.send(\n f\"{ctx.author.mention} Subscribed to <#{constants.Channels.announcements}> notifications.\",\n )\n\n @command(name='unsubscribe')\n @in_whitelist(channels=(constants.Channels.bot_commands,))\n async def unsubscribe_command(self, ctx: Context, *_) -> None: # We don't actually care about the args\n \"\"\"Unsubscribe from announcement notifications by removing the role from yourself.\"\"\"\n has_role = False\n\n for role in ctx.author.roles:\n if role.id == constants.Roles.announcements:\n has_role = True\n break\n\n if not has_role:\n await ctx.send(f\"{ctx.author.mention} You're already unsubscribed!\")\n return\n\n log.debug(f\"{ctx.author} called !unsubscribe. Removing the 'Announcements' role.\")\n await ctx.author.remove_roles(Object(constants.Roles.announcements), reason=\"Unsubscribed from announcements\")\n\n log.trace(f\"Deleting the message posted by {ctx.author}.\")\n\n await ctx.send(\n f\"{ctx.author.mention} Unsubscribed from <#{constants.Channels.announcements}> notifications.\"\n )\n\n # This cannot be static (must have a __func__ attribute).\n async def cog_command_error(self, ctx: Context, error: Exception) -> None:\n \"\"\"Check for & ignore any InWhitelistCheckFailure.\"\"\"\n if isinstance(error, InWhitelistCheckFailure):\n error.handled = True\n\n @staticmethod\n def bot_check(ctx: Context) -> bool:\n \"\"\"Block any command within the verification channel that is not !accept.\"\"\"\n if ctx.channel.id == constants.Channels.verification and without_role_check(ctx, *constants.MODERATION_ROLES):\n return ctx.command.name == \"accept\"\n else:\n return True\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Verification cog.\"\"\"\n bot.add_cog(Verification(bot))\n", "path": "bot/cogs/verification.py"}]} | 2,925 | 682 |
gh_patches_debug_655 | rasdani/github-patches | git_diff | pex-tool__pex-2104 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Release 2.1.130
On the docket:
+ [x] Pex fails to lock - missing artifact #2098
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pex/version.py`
Content:
```
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.129"
5
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.129"
+__version__ = "2.1.130"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.129\"\n+__version__ = \"2.1.130\"\n", "issue": "Release 2.1.130\nOn the docket:\r\n+ [x] Pex fails to lock - missing artifact #2098 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.129\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.130\"\n", "path": "pex/version.py"}]} | 341 | 98 |
gh_patches_debug_18520 | rasdani/github-patches | git_diff | pydantic__pydantic-7346 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Failure to override an inherited field with a computed field
### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
It seems that overriding a regular field with a computed field doesn't behave as expected. Instead of a value being returned when the property is accessed, the property object itself is returned. Using `@cached_property` instead causes an exception within `pydantic/_internal/_utils.py:smart_deepcopy`.
### Example Code
```Python
from pydantic import BaseModel, computed_field
from functools import cached_property
class Foo(BaseModel):
name: str = "Foo"
class Bar(Foo):
@computed_field
@property
def name(self) -> str:
return "Bar"
print(Foo().name) # Foo
print(Bar().name) # <property object at 0x...>
# Upon definition:
# > TypeError: cannot pickle '_thread.RLock' object
class Baz(Foo):
@computed_field
@cached_property
def name(self) -> str:
return "Baz"
```
Edit: After some further playing around with the default `name = "Foo"` removed from `Foo`, it's clear that `name` within `Bar` is in some kind of weird hybrid state.
```Python
class Foo(BaseModel):
name: str
class Bar(Foo):
@computed_field
@property
def name(self) -> str:
print("from property")
return "Bar"
# Pyright complains:
# 1. Pyright: Argument missing for parameter "name" [reportGeneralTypeIssues]
# But runs and I get the same result
print(Bar().name) # <property object at 0x...>
# Prints name as inherited from Foo, and without the `"from property"`
print(Bar(name="Bar")) # Bar
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.1.1
pydantic-core version: 2.4.0
pydantic-core build: profile=release pgo=false mimalloc=true
install path: /Users/.../.pyenv/versions/3.11.1/lib/python3.11/site-packages/pydantic
python version: 3.11.1 (main, Jan 14 2023, 16:44:31) [Clang 13.1.6 (clang-1316.0.21.2.5)]
platform: macOS-13.4-arm64-arm-64bit
optional deps. installed: ['email-validator', 'typing-extensions']
```
Selected Assignee: @hramezani
--- END ISSUE ---
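To make the hybrid state described above easier to see, the class-level field bookkeeping can be inspected directly; here is a minimal diagnostic sketch (hypothetical, relying on the public `model_fields` / `model_computed_fields` mappings of Pydantic v2; exact contents depend on the installed version):
```python
from pydantic import BaseModel, computed_field


class Foo(BaseModel):
    name: str = "Foo"


class Bar(Foo):
    @computed_field
    @property
    def name(self) -> str:
        return "Bar"


# Compare how `name` is tracked on each class. On an affected version, the
# inherited regular field and the new computed field can coexist, which
# matches the hybrid behaviour reported above.
print("Foo fields:   ", list(Foo.model_fields))
print("Bar fields:   ", list(Bar.model_fields))
print("Bar computed: ", list(Bar.model_computed_fields))
print("Bar().name:   ", Bar().name)  # per the report, a property object
```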
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pydantic/_internal/_fields.py`
Content:
```
1 """Private logic related to fields (the `Field()` function and `FieldInfo` class), and arguments to `Annotated`."""
2 from __future__ import annotations as _annotations
3
4 import dataclasses
5 import sys
6 import warnings
7 from copy import copy
8 from typing import TYPE_CHECKING, Any
9
10 from annotated_types import BaseMetadata
11 from pydantic_core import PydanticUndefined
12
13 from . import _typing_extra
14 from ._config import ConfigWrapper
15 from ._repr import Representation
16 from ._typing_extra import get_cls_type_hints_lenient, get_type_hints, is_classvar, is_finalvar
17
18 if TYPE_CHECKING:
19 from ..fields import FieldInfo
20 from ..main import BaseModel
21 from ._dataclasses import StandardDataclass
22
23
24 def get_type_hints_infer_globalns(
25 obj: Any,
26 localns: dict[str, Any] | None = None,
27 include_extras: bool = False,
28 ) -> dict[str, Any]:
29 """Gets type hints for an object by inferring the global namespace.
30
31 It uses the `typing.get_type_hints`, The only thing that we do here is fetching
32 global namespace from `obj.__module__` if it is not `None`.
33
34 Args:
35 obj: The object to get its type hints.
36 localns: The local namespaces.
37 include_extras: Whether to recursively include annotation metadata.
38
39 Returns:
40 The object type hints.
41 """
42 module_name = getattr(obj, '__module__', None)
43 globalns: dict[str, Any] | None = None
44 if module_name:
45 try:
46 globalns = sys.modules[module_name].__dict__
47 except KeyError:
48 # happens occasionally, see https://github.com/pydantic/pydantic/issues/2363
49 pass
50 return get_type_hints(obj, globalns=globalns, localns=localns, include_extras=include_extras)
51
52
53 class PydanticMetadata(Representation):
54 """Base class for annotation markers like `Strict`."""
55
56 __slots__ = ()
57
58
59 class PydanticGeneralMetadata(PydanticMetadata, BaseMetadata):
60 """Pydantic general metada like `max_digits`."""
61
62 def __init__(self, **metadata: Any):
63 self.__dict__ = metadata
64
65
66 def collect_model_fields( # noqa: C901
67 cls: type[BaseModel],
68 bases: tuple[type[Any], ...],
69 config_wrapper: ConfigWrapper,
70 types_namespace: dict[str, Any] | None,
71 *,
72 typevars_map: dict[Any, Any] | None = None,
73 ) -> tuple[dict[str, FieldInfo], set[str]]:
74 """Collect the fields of a nascent pydantic model.
75
76 Also collect the names of any ClassVars present in the type hints.
77
78 The returned value is a tuple of two items: the fields dict, and the set of ClassVar names.
79
80 Args:
81 cls: BaseModel or dataclass.
82 bases: Parents of the class, generally `cls.__bases__`.
83 config_wrapper: The config wrapper instance.
84 types_namespace: Optional extra namespace to look for types in.
85 typevars_map: A dictionary mapping type variables to their concrete types.
86
87 Returns:
88 A tuple contains fields and class variables.
89
90 Raises:
91 NameError:
92 - If there is a conflict between a field name and protected namespaces.
93 - If there is a field other than `root` in `RootModel`.
94 - If a field shadows an attribute in the parent model.
95 """
96 from ..fields import FieldInfo
97
98 type_hints = get_cls_type_hints_lenient(cls, types_namespace)
99
100 # https://docs.python.org/3/howto/annotations.html#accessing-the-annotations-dict-of-an-object-in-python-3-9-and-older
101 # annotations is only used for finding fields in parent classes
102 annotations = cls.__dict__.get('__annotations__', {})
103 fields: dict[str, FieldInfo] = {}
104
105 class_vars: set[str] = set()
106 for ann_name, ann_type in type_hints.items():
107 if ann_name == 'model_config':
108 # We never want to treat `model_config` as a field
109 # Note: we may need to change this logic if/when we introduce a `BareModel` class with no
110 # protected namespaces (where `model_config` might be allowed as a field name)
111 continue
112 for protected_namespace in config_wrapper.protected_namespaces:
113 if ann_name.startswith(protected_namespace):
114 for b in bases:
115 if hasattr(b, ann_name):
116 from ..main import BaseModel
117
118 if not (issubclass(b, BaseModel) and ann_name in b.model_fields):
119 raise NameError(
120 f'Field "{ann_name}" conflicts with member {getattr(b, ann_name)}'
121 f' of protected namespace "{protected_namespace}".'
122 )
123 else:
124 valid_namespaces = tuple(
125 x for x in config_wrapper.protected_namespaces if not ann_name.startswith(x)
126 )
127 warnings.warn(
128 f'Field "{ann_name}" has conflict with protected namespace "{protected_namespace}".'
129 '\n\nYou may be able to resolve this warning by setting'
130 f" `model_config['protected_namespaces'] = {valid_namespaces}`.",
131 UserWarning,
132 )
133 if is_classvar(ann_type):
134 class_vars.add(ann_name)
135 continue
136 if _is_finalvar_with_default_val(ann_type, getattr(cls, ann_name, PydanticUndefined)):
137 class_vars.add(ann_name)
138 continue
139 if not is_valid_field_name(ann_name):
140 continue
141 if cls.__pydantic_root_model__ and ann_name != 'root':
142 raise NameError(
143 f"Unexpected field with name {ann_name!r}; only 'root' is allowed as a field of a `RootModel`"
144 )
145
146 # when building a generic model with `MyModel[int]`, the generic_origin check makes sure we don't get
147 # "... shadows an attribute" errors
148 generic_origin = getattr(cls, '__pydantic_generic_metadata__', {}).get('origin')
149 for base in bases:
150 dataclass_fields = {
151 field.name for field in (dataclasses.fields(base) if dataclasses.is_dataclass(base) else ())
152 }
153 if hasattr(base, ann_name):
154 if base is generic_origin:
155 # Don't error when "shadowing" of attributes in parametrized generics
156 continue
157
158 if ann_name in dataclass_fields:
159 # Don't error when inheriting stdlib dataclasses whose fields are "shadowed" by defaults being set
160 # on the class instance.
161 continue
162 warnings.warn(
163 f'Field name "{ann_name}" shadows an attribute in parent "{base.__qualname__}"; ',
164 UserWarning,
165 )
166
167 try:
168 default = getattr(cls, ann_name, PydanticUndefined)
169 if default is PydanticUndefined:
170 raise AttributeError
171 except AttributeError:
172 if ann_name in annotations:
173 field_info = FieldInfo.from_annotation(ann_type)
174 else:
175 # if field has no default value and is not in __annotations__ this means that it is
176 # defined in a base class and we can take it from there
177 model_fields_lookup: dict[str, FieldInfo] = {}
178 for x in cls.__bases__[::-1]:
179 model_fields_lookup.update(getattr(x, 'model_fields', {}))
180 if ann_name in model_fields_lookup:
181 # The field was present on one of the (possibly multiple) base classes
182 # copy the field to make sure typevar substitutions don't cause issues with the base classes
183 field_info = copy(model_fields_lookup[ann_name])
184 else:
185 # The field was not found on any base classes; this seems to be caused by fields not getting
186 # generated thanks to models not being fully defined while initializing recursive models.
187 # Nothing stops us from just creating a new FieldInfo for this type hint, so we do this.
188 field_info = FieldInfo.from_annotation(ann_type)
189 else:
190 field_info = FieldInfo.from_annotated_attribute(ann_type, default)
191 # attributes which are fields are removed from the class namespace:
192 # 1. To match the behaviour of annotation-only fields
193 # 2. To avoid false positives in the NameError check above
194 try:
195 delattr(cls, ann_name)
196 except AttributeError:
197 pass # indicates the attribute was on a parent class
198
199 fields[ann_name] = field_info
200
201 if typevars_map:
202 for field in fields.values():
203 field.apply_typevars_map(typevars_map, types_namespace)
204
205 return fields, class_vars
206
207
208 def _is_finalvar_with_default_val(type_: type[Any], val: Any) -> bool:
209 from ..fields import FieldInfo
210
211 if not is_finalvar(type_):
212 return False
213 elif val is PydanticUndefined:
214 return False
215 elif isinstance(val, FieldInfo) and (val.default is PydanticUndefined and val.default_factory is None):
216 return False
217 else:
218 return True
219
220
221 def collect_dataclass_fields(
222 cls: type[StandardDataclass], types_namespace: dict[str, Any] | None, *, typevars_map: dict[Any, Any] | None = None
223 ) -> dict[str, FieldInfo]:
224 """Collect the fields of a dataclass.
225
226 Args:
227 cls: dataclass.
228 types_namespace: Optional extra namespace to look for types in.
229 typevars_map: A dictionary mapping type variables to their concrete types.
230
231 Returns:
232 The dataclass fields.
233 """
234 from ..fields import FieldInfo
235
236 fields: dict[str, FieldInfo] = {}
237 dataclass_fields: dict[str, dataclasses.Field] = cls.__dataclass_fields__
238 cls_localns = dict(vars(cls)) # this matches get_cls_type_hints_lenient, but all tests pass with `= None` instead
239
240 for ann_name, dataclass_field in dataclass_fields.items():
241 ann_type = _typing_extra.eval_type_lenient(dataclass_field.type, types_namespace, cls_localns)
242 if is_classvar(ann_type):
243 continue
244
245 if not dataclass_field.init and dataclass_field.default_factory == dataclasses.MISSING:
246 # TODO: We should probably do something with this so that validate_assignment behaves properly
247 # Issue: https://github.com/pydantic/pydantic/issues/5470
248 continue
249
250 if isinstance(dataclass_field.default, FieldInfo):
251 if dataclass_field.default.init_var:
252 # TODO: same note as above
253 continue
254 field_info = FieldInfo.from_annotated_attribute(ann_type, dataclass_field.default)
255 else:
256 field_info = FieldInfo.from_annotated_attribute(ann_type, dataclass_field)
257 fields[ann_name] = field_info
258
259 if field_info.default is not PydanticUndefined and isinstance(getattr(cls, ann_name, field_info), FieldInfo):
260 # We need this to fix the default when the "default" from __dataclass_fields__ is a pydantic.FieldInfo
261 setattr(cls, ann_name, field_info.default)
262
263 if typevars_map:
264 for field in fields.values():
265 field.apply_typevars_map(typevars_map, types_namespace)
266
267 return fields
268
269
270 def is_valid_field_name(name: str) -> bool:
271 return not name.startswith('_')
272
273
274 def is_valid_privateattr_name(name: str) -> bool:
275 return name.startswith('_') and not name.startswith('__')
276
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pydantic/_internal/_fields.py b/pydantic/_internal/_fields.py
--- a/pydantic/_internal/_fields.py
+++ b/pydantic/_internal/_fields.py
@@ -19,6 +19,7 @@
from ..fields import FieldInfo
from ..main import BaseModel
from ._dataclasses import StandardDataclass
+ from ._decorators import DecoratorInfos
def get_type_hints_infer_globalns(
@@ -196,6 +197,11 @@
except AttributeError:
pass # indicates the attribute was on a parent class
+ # Use cls.__dict__['__pydantic_decorators__'] instead of cls.__pydantic_decorators__
+ # to make sure the decorators have already been built for this exact class
+ decorators: DecoratorInfos = cls.__dict__['__pydantic_decorators__']
+ if ann_name in decorators.computed_fields:
+ raise ValueError("you can't override a field with a computed field")
fields[ann_name] = field_info
if typevars_map:
| {"golden_diff": "diff --git a/pydantic/_internal/_fields.py b/pydantic/_internal/_fields.py\n--- a/pydantic/_internal/_fields.py\n+++ b/pydantic/_internal/_fields.py\n@@ -19,6 +19,7 @@\n from ..fields import FieldInfo\n from ..main import BaseModel\n from ._dataclasses import StandardDataclass\n+ from ._decorators import DecoratorInfos\n \n \n def get_type_hints_infer_globalns(\n@@ -196,6 +197,11 @@\n except AttributeError:\n pass # indicates the attribute was on a parent class\n \n+ # Use cls.__dict__['__pydantic_decorators__'] instead of cls.__pydantic_decorators__\n+ # to make sure the decorators have already been built for this exact class\n+ decorators: DecoratorInfos = cls.__dict__['__pydantic_decorators__']\n+ if ann_name in decorators.computed_fields:\n+ raise ValueError(\"you can't override a field with a computed field\")\n fields[ann_name] = field_info\n \n if typevars_map:\n", "issue": "Failure to override an inherited field with a computed field\n### Initial Checks\r\n\r\n- [X] I confirm that I'm using Pydantic V2\r\n\r\n### Description\r\n\r\nIt seems that overriding a regular field with a computed field doesn't behave as expected. Instead of a value being returned by accessing the property the property object itself is returned. And as for `@cached_property` causes an exception within `pydantic/_internal/_utils.py:smart_deepcopy`.\r\n\r\n### Example Code\r\n\r\n```Python\r\nfrom pydantic import BaseModel, computed_field\r\nfrom functools import cached_property\r\n\r\nclass Foo(BaseModel):\r\n name: str = \"Foo\"\r\n\r\nclass Bar(Foo):\r\n @computed_field\r\n @property\r\n def name(self) -> str:\r\n return \"Bar\"\r\n\r\nprint(Foo().name) # Foo\r\nprint(Bar().name) # <property object at 0x...>\r\n\r\n# Upon definition:\r\n# > TypeError: cannot pickle '_thread.RLock' object\r\nclass Baz(Foo):\r\n @computed_field\r\n @cached_property\r\n def name(self) -> str:\r\n return \"Baz\"\r\n```\r\n\r\nEdit: After some further playing around, removing the default `name = \"Foo\"` from `Foo` it's clear that `name` within `Bar` is in some kind of weird hybrid state.\r\n\r\n```Python\r\nclass Foo(BaseModel):\r\n name: str\r\n\r\nclass Bar(Foo):\r\n @computed_field\r\n @property\r\n def name(self) -> str:\r\n print(\"from property\")\r\n return \"Bar\"\r\n\r\n# Pyright complains:\r\n# 1. Pyright: Argument missing for parameter \"name\" [reportGeneralTypeIssues]\r\n# But runs and I get the same result\r\nprint(Bar().name) # <property object at 0x...>\r\n\r\n# Prints name as inherited from Foo, and without the `\"from property\"`\r\nprint(Bar(name=\"Bar\")) # Bar\r\n```\r\n\r\n\r\n### Python, Pydantic & OS Version\r\n\r\n```Text\r\npydantic version: 2.1.1\r\n pydantic-core version: 2.4.0\r\n pydantic-core build: profile=release pgo=false mimalloc=true\r\n install path: /Users/.../.pyenv/versions/3.11.1/lib/python3.11/site-packages/pydantic\r\n python version: 3.11.1 (main, Jan 14 2023, 16:44:31) [Clang 13.1.6 (clang-1316.0.21.2.5)]\r\n platform: macOS-13.4-arm64-arm-64bit\r\n optional deps. 
installed: ['email-validator', 'typing-extensions']\r\n```\r\n\r\n\r\nSelected Assignee: @hramezani\n", "before_files": [{"content": "\"\"\"Private logic related to fields (the `Field()` function and `FieldInfo` class), and arguments to `Annotated`.\"\"\"\nfrom __future__ import annotations as _annotations\n\nimport dataclasses\nimport sys\nimport warnings\nfrom copy import copy\nfrom typing import TYPE_CHECKING, Any\n\nfrom annotated_types import BaseMetadata\nfrom pydantic_core import PydanticUndefined\n\nfrom . import _typing_extra\nfrom ._config import ConfigWrapper\nfrom ._repr import Representation\nfrom ._typing_extra import get_cls_type_hints_lenient, get_type_hints, is_classvar, is_finalvar\n\nif TYPE_CHECKING:\n from ..fields import FieldInfo\n from ..main import BaseModel\n from ._dataclasses import StandardDataclass\n\n\ndef get_type_hints_infer_globalns(\n obj: Any,\n localns: dict[str, Any] | None = None,\n include_extras: bool = False,\n) -> dict[str, Any]:\n \"\"\"Gets type hints for an object by inferring the global namespace.\n\n It uses the `typing.get_type_hints`, The only thing that we do here is fetching\n global namespace from `obj.__module__` if it is not `None`.\n\n Args:\n obj: The object to get its type hints.\n localns: The local namespaces.\n include_extras: Whether to recursively include annotation metadata.\n\n Returns:\n The object type hints.\n \"\"\"\n module_name = getattr(obj, '__module__', None)\n globalns: dict[str, Any] | None = None\n if module_name:\n try:\n globalns = sys.modules[module_name].__dict__\n except KeyError:\n # happens occasionally, see https://github.com/pydantic/pydantic/issues/2363\n pass\n return get_type_hints(obj, globalns=globalns, localns=localns, include_extras=include_extras)\n\n\nclass PydanticMetadata(Representation):\n \"\"\"Base class for annotation markers like `Strict`.\"\"\"\n\n __slots__ = ()\n\n\nclass PydanticGeneralMetadata(PydanticMetadata, BaseMetadata):\n \"\"\"Pydantic general metada like `max_digits`.\"\"\"\n\n def __init__(self, **metadata: Any):\n self.__dict__ = metadata\n\n\ndef collect_model_fields( # noqa: C901\n cls: type[BaseModel],\n bases: tuple[type[Any], ...],\n config_wrapper: ConfigWrapper,\n types_namespace: dict[str, Any] | None,\n *,\n typevars_map: dict[Any, Any] | None = None,\n) -> tuple[dict[str, FieldInfo], set[str]]:\n \"\"\"Collect the fields of a nascent pydantic model.\n\n Also collect the names of any ClassVars present in the type hints.\n\n The returned value is a tuple of two items: the fields dict, and the set of ClassVar names.\n\n Args:\n cls: BaseModel or dataclass.\n bases: Parents of the class, generally `cls.__bases__`.\n config_wrapper: The config wrapper instance.\n types_namespace: Optional extra namespace to look for types in.\n typevars_map: A dictionary mapping type variables to their concrete types.\n\n Returns:\n A tuple contains fields and class variables.\n\n Raises:\n NameError:\n - If there is a conflict between a field name and protected namespaces.\n - If there is a field other than `root` in `RootModel`.\n - If a field shadows an attribute in the parent model.\n \"\"\"\n from ..fields import FieldInfo\n\n type_hints = get_cls_type_hints_lenient(cls, types_namespace)\n\n # https://docs.python.org/3/howto/annotations.html#accessing-the-annotations-dict-of-an-object-in-python-3-9-and-older\n # annotations is only used for finding fields in parent classes\n annotations = cls.__dict__.get('__annotations__', {})\n fields: dict[str, FieldInfo] = {}\n\n class_vars: 
set[str] = set()\n for ann_name, ann_type in type_hints.items():\n if ann_name == 'model_config':\n # We never want to treat `model_config` as a field\n # Note: we may need to change this logic if/when we introduce a `BareModel` class with no\n # protected namespaces (where `model_config` might be allowed as a field name)\n continue\n for protected_namespace in config_wrapper.protected_namespaces:\n if ann_name.startswith(protected_namespace):\n for b in bases:\n if hasattr(b, ann_name):\n from ..main import BaseModel\n\n if not (issubclass(b, BaseModel) and ann_name in b.model_fields):\n raise NameError(\n f'Field \"{ann_name}\" conflicts with member {getattr(b, ann_name)}'\n f' of protected namespace \"{protected_namespace}\".'\n )\n else:\n valid_namespaces = tuple(\n x for x in config_wrapper.protected_namespaces if not ann_name.startswith(x)\n )\n warnings.warn(\n f'Field \"{ann_name}\" has conflict with protected namespace \"{protected_namespace}\".'\n '\\n\\nYou may be able to resolve this warning by setting'\n f\" `model_config['protected_namespaces'] = {valid_namespaces}`.\",\n UserWarning,\n )\n if is_classvar(ann_type):\n class_vars.add(ann_name)\n continue\n if _is_finalvar_with_default_val(ann_type, getattr(cls, ann_name, PydanticUndefined)):\n class_vars.add(ann_name)\n continue\n if not is_valid_field_name(ann_name):\n continue\n if cls.__pydantic_root_model__ and ann_name != 'root':\n raise NameError(\n f\"Unexpected field with name {ann_name!r}; only 'root' is allowed as a field of a `RootModel`\"\n )\n\n # when building a generic model with `MyModel[int]`, the generic_origin check makes sure we don't get\n # \"... shadows an attribute\" errors\n generic_origin = getattr(cls, '__pydantic_generic_metadata__', {}).get('origin')\n for base in bases:\n dataclass_fields = {\n field.name for field in (dataclasses.fields(base) if dataclasses.is_dataclass(base) else ())\n }\n if hasattr(base, ann_name):\n if base is generic_origin:\n # Don't error when \"shadowing\" of attributes in parametrized generics\n continue\n\n if ann_name in dataclass_fields:\n # Don't error when inheriting stdlib dataclasses whose fields are \"shadowed\" by defaults being set\n # on the class instance.\n continue\n warnings.warn(\n f'Field name \"{ann_name}\" shadows an attribute in parent \"{base.__qualname__}\"; ',\n UserWarning,\n )\n\n try:\n default = getattr(cls, ann_name, PydanticUndefined)\n if default is PydanticUndefined:\n raise AttributeError\n except AttributeError:\n if ann_name in annotations:\n field_info = FieldInfo.from_annotation(ann_type)\n else:\n # if field has no default value and is not in __annotations__ this means that it is\n # defined in a base class and we can take it from there\n model_fields_lookup: dict[str, FieldInfo] = {}\n for x in cls.__bases__[::-1]:\n model_fields_lookup.update(getattr(x, 'model_fields', {}))\n if ann_name in model_fields_lookup:\n # The field was present on one of the (possibly multiple) base classes\n # copy the field to make sure typevar substitutions don't cause issues with the base classes\n field_info = copy(model_fields_lookup[ann_name])\n else:\n # The field was not found on any base classes; this seems to be caused by fields not getting\n # generated thanks to models not being fully defined while initializing recursive models.\n # Nothing stops us from just creating a new FieldInfo for this type hint, so we do this.\n field_info = FieldInfo.from_annotation(ann_type)\n else:\n field_info = FieldInfo.from_annotated_attribute(ann_type, 
default)\n # attributes which are fields are removed from the class namespace:\n # 1. To match the behaviour of annotation-only fields\n # 2. To avoid false positives in the NameError check above\n try:\n delattr(cls, ann_name)\n except AttributeError:\n pass # indicates the attribute was on a parent class\n\n fields[ann_name] = field_info\n\n if typevars_map:\n for field in fields.values():\n field.apply_typevars_map(typevars_map, types_namespace)\n\n return fields, class_vars\n\n\ndef _is_finalvar_with_default_val(type_: type[Any], val: Any) -> bool:\n from ..fields import FieldInfo\n\n if not is_finalvar(type_):\n return False\n elif val is PydanticUndefined:\n return False\n elif isinstance(val, FieldInfo) and (val.default is PydanticUndefined and val.default_factory is None):\n return False\n else:\n return True\n\n\ndef collect_dataclass_fields(\n cls: type[StandardDataclass], types_namespace: dict[str, Any] | None, *, typevars_map: dict[Any, Any] | None = None\n) -> dict[str, FieldInfo]:\n \"\"\"Collect the fields of a dataclass.\n\n Args:\n cls: dataclass.\n types_namespace: Optional extra namespace to look for types in.\n typevars_map: A dictionary mapping type variables to their concrete types.\n\n Returns:\n The dataclass fields.\n \"\"\"\n from ..fields import FieldInfo\n\n fields: dict[str, FieldInfo] = {}\n dataclass_fields: dict[str, dataclasses.Field] = cls.__dataclass_fields__\n cls_localns = dict(vars(cls)) # this matches get_cls_type_hints_lenient, but all tests pass with `= None` instead\n\n for ann_name, dataclass_field in dataclass_fields.items():\n ann_type = _typing_extra.eval_type_lenient(dataclass_field.type, types_namespace, cls_localns)\n if is_classvar(ann_type):\n continue\n\n if not dataclass_field.init and dataclass_field.default_factory == dataclasses.MISSING:\n # TODO: We should probably do something with this so that validate_assignment behaves properly\n # Issue: https://github.com/pydantic/pydantic/issues/5470\n continue\n\n if isinstance(dataclass_field.default, FieldInfo):\n if dataclass_field.default.init_var:\n # TODO: same note as above\n continue\n field_info = FieldInfo.from_annotated_attribute(ann_type, dataclass_field.default)\n else:\n field_info = FieldInfo.from_annotated_attribute(ann_type, dataclass_field)\n fields[ann_name] = field_info\n\n if field_info.default is not PydanticUndefined and isinstance(getattr(cls, ann_name, field_info), FieldInfo):\n # We need this to fix the default when the \"default\" from __dataclass_fields__ is a pydantic.FieldInfo\n setattr(cls, ann_name, field_info.default)\n\n if typevars_map:\n for field in fields.values():\n field.apply_typevars_map(typevars_map, types_namespace)\n\n return fields\n\n\ndef is_valid_field_name(name: str) -> bool:\n return not name.startswith('_')\n\n\ndef is_valid_privateattr_name(name: str) -> bool:\n return name.startswith('_') and not name.startswith('__')\n", "path": "pydantic/_internal/_fields.py"}], "after_files": [{"content": "\"\"\"Private logic related to fields (the `Field()` function and `FieldInfo` class), and arguments to `Annotated`.\"\"\"\nfrom __future__ import annotations as _annotations\n\nimport dataclasses\nimport sys\nimport warnings\nfrom copy import copy\nfrom typing import TYPE_CHECKING, Any\n\nfrom annotated_types import BaseMetadata\nfrom pydantic_core import PydanticUndefined\n\nfrom . 
import _typing_extra\nfrom ._config import ConfigWrapper\nfrom ._repr import Representation\nfrom ._typing_extra import get_cls_type_hints_lenient, get_type_hints, is_classvar, is_finalvar\n\nif TYPE_CHECKING:\n from ..fields import FieldInfo\n from ..main import BaseModel\n from ._dataclasses import StandardDataclass\n from ._decorators import DecoratorInfos\n\n\ndef get_type_hints_infer_globalns(\n obj: Any,\n localns: dict[str, Any] | None = None,\n include_extras: bool = False,\n) -> dict[str, Any]:\n \"\"\"Gets type hints for an object by inferring the global namespace.\n\n It uses the `typing.get_type_hints`, The only thing that we do here is fetching\n global namespace from `obj.__module__` if it is not `None`.\n\n Args:\n obj: The object to get its type hints.\n localns: The local namespaces.\n include_extras: Whether to recursively include annotation metadata.\n\n Returns:\n The object type hints.\n \"\"\"\n module_name = getattr(obj, '__module__', None)\n globalns: dict[str, Any] | None = None\n if module_name:\n try:\n globalns = sys.modules[module_name].__dict__\n except KeyError:\n # happens occasionally, see https://github.com/pydantic/pydantic/issues/2363\n pass\n return get_type_hints(obj, globalns=globalns, localns=localns, include_extras=include_extras)\n\n\nclass PydanticMetadata(Representation):\n \"\"\"Base class for annotation markers like `Strict`.\"\"\"\n\n __slots__ = ()\n\n\nclass PydanticGeneralMetadata(PydanticMetadata, BaseMetadata):\n \"\"\"Pydantic general metada like `max_digits`.\"\"\"\n\n def __init__(self, **metadata: Any):\n self.__dict__ = metadata\n\n\ndef collect_model_fields( # noqa: C901\n cls: type[BaseModel],\n bases: tuple[type[Any], ...],\n config_wrapper: ConfigWrapper,\n types_namespace: dict[str, Any] | None,\n *,\n typevars_map: dict[Any, Any] | None = None,\n) -> tuple[dict[str, FieldInfo], set[str]]:\n \"\"\"Collect the fields of a nascent pydantic model.\n\n Also collect the names of any ClassVars present in the type hints.\n\n The returned value is a tuple of two items: the fields dict, and the set of ClassVar names.\n\n Args:\n cls: BaseModel or dataclass.\n bases: Parents of the class, generally `cls.__bases__`.\n config_wrapper: The config wrapper instance.\n types_namespace: Optional extra namespace to look for types in.\n typevars_map: A dictionary mapping type variables to their concrete types.\n\n Returns:\n A tuple contains fields and class variables.\n\n Raises:\n NameError:\n - If there is a conflict between a field name and protected namespaces.\n - If there is a field other than `root` in `RootModel`.\n - If a field shadows an attribute in the parent model.\n \"\"\"\n from ..fields import FieldInfo\n\n type_hints = get_cls_type_hints_lenient(cls, types_namespace)\n\n # https://docs.python.org/3/howto/annotations.html#accessing-the-annotations-dict-of-an-object-in-python-3-9-and-older\n # annotations is only used for finding fields in parent classes\n annotations = cls.__dict__.get('__annotations__', {})\n fields: dict[str, FieldInfo] = {}\n\n class_vars: set[str] = set()\n for ann_name, ann_type in type_hints.items():\n if ann_name == 'model_config':\n # We never want to treat `model_config` as a field\n # Note: we may need to change this logic if/when we introduce a `BareModel` class with no\n # protected namespaces (where `model_config` might be allowed as a field name)\n continue\n for protected_namespace in config_wrapper.protected_namespaces:\n if ann_name.startswith(protected_namespace):\n for b in bases:\n if 
hasattr(b, ann_name):\n from ..main import BaseModel\n\n if not (issubclass(b, BaseModel) and ann_name in b.model_fields):\n raise NameError(\n f'Field \"{ann_name}\" conflicts with member {getattr(b, ann_name)}'\n f' of protected namespace \"{protected_namespace}\".'\n )\n else:\n valid_namespaces = tuple(\n x for x in config_wrapper.protected_namespaces if not ann_name.startswith(x)\n )\n warnings.warn(\n f'Field \"{ann_name}\" has conflict with protected namespace \"{protected_namespace}\".'\n '\\n\\nYou may be able to resolve this warning by setting'\n f\" `model_config['protected_namespaces'] = {valid_namespaces}`.\",\n UserWarning,\n )\n if is_classvar(ann_type):\n class_vars.add(ann_name)\n continue\n if _is_finalvar_with_default_val(ann_type, getattr(cls, ann_name, PydanticUndefined)):\n class_vars.add(ann_name)\n continue\n if not is_valid_field_name(ann_name):\n continue\n if cls.__pydantic_root_model__ and ann_name != 'root':\n raise NameError(\n f\"Unexpected field with name {ann_name!r}; only 'root' is allowed as a field of a `RootModel`\"\n )\n\n # when building a generic model with `MyModel[int]`, the generic_origin check makes sure we don't get\n # \"... shadows an attribute\" errors\n generic_origin = getattr(cls, '__pydantic_generic_metadata__', {}).get('origin')\n for base in bases:\n dataclass_fields = {\n field.name for field in (dataclasses.fields(base) if dataclasses.is_dataclass(base) else ())\n }\n if hasattr(base, ann_name):\n if base is generic_origin:\n # Don't error when \"shadowing\" of attributes in parametrized generics\n continue\n\n if ann_name in dataclass_fields:\n # Don't error when inheriting stdlib dataclasses whose fields are \"shadowed\" by defaults being set\n # on the class instance.\n continue\n warnings.warn(\n f'Field name \"{ann_name}\" shadows an attribute in parent \"{base.__qualname__}\"; ',\n UserWarning,\n )\n\n try:\n default = getattr(cls, ann_name, PydanticUndefined)\n if default is PydanticUndefined:\n raise AttributeError\n except AttributeError:\n if ann_name in annotations:\n field_info = FieldInfo.from_annotation(ann_type)\n else:\n # if field has no default value and is not in __annotations__ this means that it is\n # defined in a base class and we can take it from there\n model_fields_lookup: dict[str, FieldInfo] = {}\n for x in cls.__bases__[::-1]:\n model_fields_lookup.update(getattr(x, 'model_fields', {}))\n if ann_name in model_fields_lookup:\n # The field was present on one of the (possibly multiple) base classes\n # copy the field to make sure typevar substitutions don't cause issues with the base classes\n field_info = copy(model_fields_lookup[ann_name])\n else:\n # The field was not found on any base classes; this seems to be caused by fields not getting\n # generated thanks to models not being fully defined while initializing recursive models.\n # Nothing stops us from just creating a new FieldInfo for this type hint, so we do this.\n field_info = FieldInfo.from_annotation(ann_type)\n else:\n field_info = FieldInfo.from_annotated_attribute(ann_type, default)\n # attributes which are fields are removed from the class namespace:\n # 1. To match the behaviour of annotation-only fields\n # 2. 
To avoid false positives in the NameError check above\n try:\n delattr(cls, ann_name)\n except AttributeError:\n pass # indicates the attribute was on a parent class\n\n # Use cls.__dict__['__pydantic_decorators__'] instead of cls.__pydantic_decorators__\n # to make sure the decorators have already been built for this exact class\n decorators: DecoratorInfos = cls.__dict__['__pydantic_decorators__']\n if ann_name in decorators.computed_fields:\n raise ValueError(\"you can't override a field with a computed field\")\n fields[ann_name] = field_info\n\n if typevars_map:\n for field in fields.values():\n field.apply_typevars_map(typevars_map, types_namespace)\n\n return fields, class_vars\n\n\ndef _is_finalvar_with_default_val(type_: type[Any], val: Any) -> bool:\n from ..fields import FieldInfo\n\n if not is_finalvar(type_):\n return False\n elif val is PydanticUndefined:\n return False\n elif isinstance(val, FieldInfo) and (val.default is PydanticUndefined and val.default_factory is None):\n return False\n else:\n return True\n\n\ndef collect_dataclass_fields(\n cls: type[StandardDataclass], types_namespace: dict[str, Any] | None, *, typevars_map: dict[Any, Any] | None = None\n) -> dict[str, FieldInfo]:\n \"\"\"Collect the fields of a dataclass.\n\n Args:\n cls: dataclass.\n types_namespace: Optional extra namespace to look for types in.\n typevars_map: A dictionary mapping type variables to their concrete types.\n\n Returns:\n The dataclass fields.\n \"\"\"\n from ..fields import FieldInfo\n\n fields: dict[str, FieldInfo] = {}\n dataclass_fields: dict[str, dataclasses.Field] = cls.__dataclass_fields__\n cls_localns = dict(vars(cls)) # this matches get_cls_type_hints_lenient, but all tests pass with `= None` instead\n\n for ann_name, dataclass_field in dataclass_fields.items():\n ann_type = _typing_extra.eval_type_lenient(dataclass_field.type, types_namespace, cls_localns)\n if is_classvar(ann_type):\n continue\n\n if not dataclass_field.init and dataclass_field.default_factory == dataclasses.MISSING:\n # TODO: We should probably do something with this so that validate_assignment behaves properly\n # Issue: https://github.com/pydantic/pydantic/issues/5470\n continue\n\n if isinstance(dataclass_field.default, FieldInfo):\n if dataclass_field.default.init_var:\n # TODO: same note as above\n continue\n field_info = FieldInfo.from_annotated_attribute(ann_type, dataclass_field.default)\n else:\n field_info = FieldInfo.from_annotated_attribute(ann_type, dataclass_field)\n fields[ann_name] = field_info\n\n if field_info.default is not PydanticUndefined and isinstance(getattr(cls, ann_name, field_info), FieldInfo):\n # We need this to fix the default when the \"default\" from __dataclass_fields__ is a pydantic.FieldInfo\n setattr(cls, ann_name, field_info.default)\n\n if typevars_map:\n for field in fields.values():\n field.apply_typevars_map(typevars_map, types_namespace)\n\n return fields\n\n\ndef is_valid_field_name(name: str) -> bool:\n return not name.startswith('_')\n\n\ndef is_valid_privateattr_name(name: str) -> bool:\n return name.startswith('_') and not name.startswith('__')\n", "path": "pydantic/_internal/_fields.py"}]} | 4,066 | 240 |
gh_patches_debug_22746 | rasdani/github-patches | git_diff | pre-commit__pre-commit-346 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Windows: Terminal width support
We detect terminal width on unix-likes by running `tput cols`. This works fine for those platforms but doesn't work well on Windows. Maybe find a package which does this logic for us and depend on that.
--- END ISSUE ---
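For context, the standard library already offers a portable way to query the terminal width without shelling out to `tput`: `shutil.get_terminal_size()` on Python 3.3+, with the `backports.shutil_get_terminal_size` package providing the same API on Python 2. A minimal sketch, illustrative only; the fallback of 80 columns mirrors the existing default:
```python
# Portable terminal-width detection without spawning `tput`.
try:
    from shutil import get_terminal_size  # Python 3.3+
except ImportError:  # pragma: no cover (Python 2)
    from backports.shutil_get_terminal_size import get_terminal_size

# Some environments (and backport versions) can report 0 columns,
# so keep the old default of 80.
COLS = get_terminal_size().columns or 80
print(COLS)
```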
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/output.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import os
4 import subprocess
5 import sys
6
7 from pre_commit import color
8 from pre_commit import five
9
10
11 # TODO: smell: import side-effects
12 try:
13 if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)
14 raise OSError('Cannot determine width without TERM')
15 else: # pragma no cover (windows)
16 COLS = int(
17 subprocess.Popen(
18 ('tput', 'cols'), stdout=subprocess.PIPE,
19 ).communicate()[0] or
20 # Default in the case of no terminal
21 80
22 )
23 except OSError: # pragma: no cover (windows)
24 COLS = 80
25
26
27 def get_hook_message(
28 start,
29 postfix='',
30 end_msg=None,
31 end_len=0,
32 end_color=None,
33 use_color=None,
34 cols=COLS,
35 ):
36 """Prints a message for running a hook.
37
38 This currently supports three approaches:
39
40 # Print `start` followed by dots, leaving 6 characters at the end
41 >>> print_hook_message('start', end_len=6)
42 start...............................................................
43
44 # Print `start` followed by dots with the end message colored if coloring
45 # is specified and a newline afterwards
46 >>> print_hook_message(
47 'start',
48 end_msg='end',
49 end_color=color.RED,
50 use_color=True,
51 )
52 start...................................................................end
53
54 # Print `start` followed by dots, followed by the `postfix` message
55 # uncolored, followed by the `end_msg` colored if specified and a newline
56 # afterwards
57 >>> print_hook_message(
58 'start',
59 postfix='postfix ',
60 end_msg='end',
61 end_color=color.RED,
62 use_color=True,
63 )
64 start...........................................................postfix end
65 """
66 if bool(end_msg) == bool(end_len):
67 raise ValueError('Expected one of (`end_msg`, `end_len`)')
68 if end_msg is not None and (end_color is None or use_color is None):
69 raise ValueError(
70 '`end_color` and `use_color` are required with `end_msg`'
71 )
72
73 if end_len:
74 return start + '.' * (cols - len(start) - end_len - 1)
75 else:
76 return '{0}{1}{2}{3}\n'.format(
77 start,
78 '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),
79 postfix,
80 color.format_color(end_msg, end_color, use_color),
81 )
82
83
84 stdout_byte_stream = getattr(sys.stdout, 'buffer', sys.stdout)
85
86
87 def sys_stdout_write_wrapper(s, stream=stdout_byte_stream):
88 stream.write(five.to_bytes(s))
89
```
Path: `setup.py`
Content:
```
1 from setuptools import find_packages
2 from setuptools import setup
3
4
5 setup(
6 name='pre_commit',
7 description=(
8 'A framework for managing and maintaining multi-language pre-commit '
9 'hooks.'
10 ),
11 url='https://github.com/pre-commit/pre-commit',
12 version='0.7.6',
13
14 author='Anthony Sottile',
15 author_email='[email protected]',
16
17 platforms='linux',
18 classifiers=[
19 'License :: OSI Approved :: MIT License',
20 'Programming Language :: Python :: 2',
21 'Programming Language :: Python :: 2.6',
22 'Programming Language :: Python :: 2.7',
23 'Programming Language :: Python :: 3',
24 'Programming Language :: Python :: 3.3',
25 'Programming Language :: Python :: 3.4',
26 'Programming Language :: Python :: 3.5',
27 'Programming Language :: Python :: Implementation :: CPython',
28 'Programming Language :: Python :: Implementation :: PyPy',
29 ],
30
31 packages=find_packages('.', exclude=('tests*', 'testing*')),
32 package_data={
33 'pre_commit': [
34 'resources/hook-tmpl',
35 'resources/pre-push-tmpl',
36 'resources/rbenv.tar.gz',
37 'resources/ruby-build.tar.gz',
38 'resources/ruby-download.tar.gz',
39 ]
40 },
41 install_requires=[
42 'argparse',
43 'aspy.yaml',
44 'cached-property',
45 'jsonschema',
46 'nodeenv>=0.11.1',
47 'ordereddict',
48 'pyyaml',
49 'virtualenv',
50 ],
51 entry_points={
52 'console_scripts': [
53 'pre-commit = pre_commit.main:main',
54 'pre-commit-validate-config = pre_commit.clientlib.validate_config:run', # noqa
55 'pre-commit-validate-manifest = pre_commit.clientlib.validate_manifest:run', # noqa
56 ],
57 },
58 )
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pre_commit/output.py b/pre_commit/output.py
--- a/pre_commit/output.py
+++ b/pre_commit/output.py
@@ -1,27 +1,15 @@
from __future__ import unicode_literals
-import os
-import subprocess
import sys
+from backports.shutil_get_terminal_size import get_terminal_size
+
from pre_commit import color
from pre_commit import five
-
# TODO: smell: import side-effects
-try:
- if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)
- raise OSError('Cannot determine width without TERM')
- else: # pragma no cover (windows)
- COLS = int(
- subprocess.Popen(
- ('tput', 'cols'), stdout=subprocess.PIPE,
- ).communicate()[0] or
- # Default in the case of no terminal
- 80
- )
-except OSError: # pragma: no cover (windows)
- COLS = 80
+# TODO: https://github.com/chrippa/backports.shutil_get_terminal_size/issues/4
+COLS = get_terminal_size().columns or 80
def get_hook_message(
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,6 +41,7 @@
install_requires=[
'argparse',
'aspy.yaml',
+ 'backports.shutil_get_terminal_size',
'cached-property',
'jsonschema',
'nodeenv>=0.11.1',
| {"golden_diff": "diff --git a/pre_commit/output.py b/pre_commit/output.py\n--- a/pre_commit/output.py\n+++ b/pre_commit/output.py\n@@ -1,27 +1,15 @@\n from __future__ import unicode_literals\n \n-import os\n-import subprocess\n import sys\n \n+from backports.shutil_get_terminal_size import get_terminal_size\n+\n from pre_commit import color\n from pre_commit import five\n \n-\n # TODO: smell: import side-effects\n-try:\n- if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)\n- raise OSError('Cannot determine width without TERM')\n- else: # pragma no cover (windows)\n- COLS = int(\n- subprocess.Popen(\n- ('tput', 'cols'), stdout=subprocess.PIPE,\n- ).communicate()[0] or\n- # Default in the case of no terminal\n- 80\n- )\n-except OSError: # pragma: no cover (windows)\n- COLS = 80\n+# TODO: https://github.com/chrippa/backports.shutil_get_terminal_size/issues/4\n+COLS = get_terminal_size().columns or 80\n \n \n def get_hook_message(\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -41,6 +41,7 @@\n install_requires=[\n 'argparse',\n 'aspy.yaml',\n+ 'backports.shutil_get_terminal_size',\n 'cached-property',\n 'jsonschema',\n 'nodeenv>=0.11.1',\n", "issue": "Windows: Terminal width support\nWe detect terminal width in unixlikes by running `tput cols`. This works fine for those platforms but doesn't work well for windows. Maybe find a package which does this logic for us and depend on that.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport os\nimport subprocess\nimport sys\n\nfrom pre_commit import color\nfrom pre_commit import five\n\n\n# TODO: smell: import side-effects\ntry:\n if not os.environ.get('TERM'): # pragma: no cover (dumb terminal)\n raise OSError('Cannot determine width without TERM')\n else: # pragma no cover (windows)\n COLS = int(\n subprocess.Popen(\n ('tput', 'cols'), stdout=subprocess.PIPE,\n ).communicate()[0] or\n # Default in the case of no terminal\n 80\n )\nexcept OSError: # pragma: no cover (windows)\n COLS = 80\n\n\ndef get_hook_message(\n start,\n postfix='',\n end_msg=None,\n end_len=0,\n end_color=None,\n use_color=None,\n cols=COLS,\n):\n \"\"\"Prints a message for running a hook.\n\n This currently supports three approaches:\n\n # Print `start` followed by dots, leaving 6 characters at the end\n >>> print_hook_message('start', end_len=6)\n start...............................................................\n\n # Print `start` followed by dots with the end message colored if coloring\n # is specified and a newline afterwards\n >>> print_hook_message(\n 'start',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...................................................................end\n\n # Print `start` followed by dots, followed by the `postfix` message\n # uncolored, followed by the `end_msg` colored if specified and a newline\n # afterwards\n >>> print_hook_message(\n 'start',\n postfix='postfix ',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...........................................................postfix end\n \"\"\"\n if bool(end_msg) == bool(end_len):\n raise ValueError('Expected one of (`end_msg`, `end_len`)')\n if end_msg is not None and (end_color is None or use_color is None):\n raise ValueError(\n '`end_color` and `use_color` are required with `end_msg`'\n )\n\n if end_len:\n return start + '.' * (cols - len(start) - end_len - 1)\n else:\n return '{0}{1}{2}{3}\\n'.format(\n start,\n '.' 
* (cols - len(start) - len(postfix) - len(end_msg) - 1),\n postfix,\n color.format_color(end_msg, end_color, use_color),\n )\n\n\nstdout_byte_stream = getattr(sys.stdout, 'buffer', sys.stdout)\n\n\ndef sys_stdout_write_wrapper(s, stream=stdout_byte_stream):\n stream.write(five.to_bytes(s))\n", "path": "pre_commit/output.py"}, {"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nsetup(\n name='pre_commit',\n description=(\n 'A framework for managing and maintaining multi-language pre-commit '\n 'hooks.'\n ),\n url='https://github.com/pre-commit/pre-commit',\n version='0.7.6',\n\n author='Anthony Sottile',\n author_email='[email protected]',\n\n platforms='linux',\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n\n packages=find_packages('.', exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n 'resources/hook-tmpl',\n 'resources/pre-push-tmpl',\n 'resources/rbenv.tar.gz',\n 'resources/ruby-build.tar.gz',\n 'resources/ruby-download.tar.gz',\n ]\n },\n install_requires=[\n 'argparse',\n 'aspy.yaml',\n 'cached-property',\n 'jsonschema',\n 'nodeenv>=0.11.1',\n 'ordereddict',\n 'pyyaml',\n 'virtualenv',\n ],\n entry_points={\n 'console_scripts': [\n 'pre-commit = pre_commit.main:main',\n 'pre-commit-validate-config = pre_commit.clientlib.validate_config:run', # noqa\n 'pre-commit-validate-manifest = pre_commit.clientlib.validate_manifest:run', # noqa\n ],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport sys\n\nfrom backports.shutil_get_terminal_size import get_terminal_size\n\nfrom pre_commit import color\nfrom pre_commit import five\n\n# TODO: smell: import side-effects\n# TODO: https://github.com/chrippa/backports.shutil_get_terminal_size/issues/4\nCOLS = get_terminal_size().columns or 80\n\n\ndef get_hook_message(\n start,\n postfix='',\n end_msg=None,\n end_len=0,\n end_color=None,\n use_color=None,\n cols=COLS,\n):\n \"\"\"Prints a message for running a hook.\n\n This currently supports three approaches:\n\n # Print `start` followed by dots, leaving 6 characters at the end\n >>> print_hook_message('start', end_len=6)\n start...............................................................\n\n # Print `start` followed by dots with the end message colored if coloring\n # is specified and a newline afterwards\n >>> print_hook_message(\n 'start',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...................................................................end\n\n # Print `start` followed by dots, followed by the `postfix` message\n # uncolored, followed by the `end_msg` colored if specified and a newline\n # afterwards\n >>> print_hook_message(\n 'start',\n postfix='postfix ',\n end_msg='end',\n end_color=color.RED,\n use_color=True,\n )\n start...........................................................postfix end\n \"\"\"\n if bool(end_msg) == bool(end_len):\n raise ValueError('Expected one of (`end_msg`, `end_len`)')\n if end_msg is not None and (end_color is None or use_color is None):\n raise ValueError(\n '`end_color` and 
`use_color` are required with `end_msg`'\n )\n\n if end_len:\n return start + '.' * (cols - len(start) - end_len - 1)\n else:\n return '{0}{1}{2}{3}\\n'.format(\n start,\n '.' * (cols - len(start) - len(postfix) - len(end_msg) - 1),\n postfix,\n color.format_color(end_msg, end_color, use_color),\n )\n\n\nstdout_byte_stream = getattr(sys.stdout, 'buffer', sys.stdout)\n\n\ndef sys_stdout_write_wrapper(s, stream=stdout_byte_stream):\n stream.write(five.to_bytes(s))\n", "path": "pre_commit/output.py"}, {"content": "from setuptools import find_packages\nfrom setuptools import setup\n\n\nsetup(\n name='pre_commit',\n description=(\n 'A framework for managing and maintaining multi-language pre-commit '\n 'hooks.'\n ),\n url='https://github.com/pre-commit/pre-commit',\n version='0.7.6',\n\n author='Anthony Sottile',\n author_email='[email protected]',\n\n platforms='linux',\n classifiers=[\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n ],\n\n packages=find_packages('.', exclude=('tests*', 'testing*')),\n package_data={\n 'pre_commit': [\n 'resources/hook-tmpl',\n 'resources/pre-push-tmpl',\n 'resources/rbenv.tar.gz',\n 'resources/ruby-build.tar.gz',\n 'resources/ruby-download.tar.gz',\n ]\n },\n install_requires=[\n 'argparse',\n 'aspy.yaml',\n 'backports.shutil_get_terminal_size',\n 'cached-property',\n 'jsonschema',\n 'nodeenv>=0.11.1',\n 'ordereddict',\n 'pyyaml',\n 'virtualenv',\n ],\n entry_points={\n 'console_scripts': [\n 'pre-commit = pre_commit.main:main',\n 'pre-commit-validate-config = pre_commit.clientlib.validate_config:run', # noqa\n 'pre-commit-validate-manifest = pre_commit.clientlib.validate_manifest:run', # noqa\n ],\n },\n)\n", "path": "setup.py"}]} | 1,602 | 339 |
gh_patches_debug_60612 | rasdani/github-patches | git_diff | cloudtools__troposphere-2037 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for additional Flink runtimes in Kinesis Data Analytics.
Kinesis supports additional Flink runtimes (FLINK-1_13, ZEPPELIN-FLINK-1_0, ZEPPELIN-FLINK-2_0), see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kinesisanalyticsv2-application.html.
--- END ISSUE ---
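Since the validator keeps a plain whitelist of runtime strings, the change amounts to extending that tuple. A quick, hypothetical sanity check of the desired behaviour (it assumes the whitelist has already been extended):
```python
# Hypothetical usage once the newer runtimes are whitelisted.
from troposphere.validators.kinesisanalyticsv2 import validate_runtime_environment

for runtime in ("FLINK-1_13", "ZEPPELIN-FLINK-1_0", "ZEPPELIN-FLINK-2_0"):
    assert validate_runtime_environment(runtime) == runtime

# Values outside the whitelist should still be rejected.
try:
    validate_runtime_environment("FLINK-0_9")
except ValueError as exc:
    print(exc)
```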
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `troposphere/validators/kinesisanalyticsv2.py`
Content:
```
1 # Copyright (c) 2012-2022, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6
7 def validate_runtime_environment(runtime_environment):
8 """
9 Validate RuntimeEnvironment for Application
10 Property: Application.RuntimeEnvironment
11 """
12
13 VALID_RUNTIME_ENVIRONMENTS = ("SQL-1_0", "FLINK-1_6", "FLINK-1_8", "FLINK-1_11")
14
15 if runtime_environment not in VALID_RUNTIME_ENVIRONMENTS:
16 raise ValueError(
17 "Application RuntimeEnvironment must be one of: %s"
18 % ", ".join(VALID_RUNTIME_ENVIRONMENTS)
19 )
20 return runtime_environment
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/troposphere/validators/kinesisanalyticsv2.py b/troposphere/validators/kinesisanalyticsv2.py
--- a/troposphere/validators/kinesisanalyticsv2.py
+++ b/troposphere/validators/kinesisanalyticsv2.py
@@ -10,7 +10,15 @@
Property: Application.RuntimeEnvironment
"""
- VALID_RUNTIME_ENVIRONMENTS = ("SQL-1_0", "FLINK-1_6", "FLINK-1_8", "FLINK-1_11")
+ VALID_RUNTIME_ENVIRONMENTS = (
+ "FLINK-1_6",
+ "FLINK-1_8",
+ "FLINK-1_11",
+ "FLINK-1_13",
+ "SQL-1_0",
+ "ZEPPELIN-FLINK-1_0",
+ "ZEPPELIN-FLINK-2_0",
+ )
if runtime_environment not in VALID_RUNTIME_ENVIRONMENTS:
raise ValueError(
| {"golden_diff": "diff --git a/troposphere/validators/kinesisanalyticsv2.py b/troposphere/validators/kinesisanalyticsv2.py\n--- a/troposphere/validators/kinesisanalyticsv2.py\n+++ b/troposphere/validators/kinesisanalyticsv2.py\n@@ -10,7 +10,15 @@\n Property: Application.RuntimeEnvironment\n \"\"\"\n \n- VALID_RUNTIME_ENVIRONMENTS = (\"SQL-1_0\", \"FLINK-1_6\", \"FLINK-1_8\", \"FLINK-1_11\")\n+ VALID_RUNTIME_ENVIRONMENTS = (\n+ \"FLINK-1_6\",\n+ \"FLINK-1_8\",\n+ \"FLINK-1_11\",\n+ \"FLINK-1_13\",\n+ \"SQL-1_0\",\n+ \"ZEPPELIN-FLINK-1_0\",\n+ \"ZEPPELIN-FLINK-2_0\",\n+ )\n \n if runtime_environment not in VALID_RUNTIME_ENVIRONMENTS:\n raise ValueError(\n", "issue": "Add support for additional Flink runtimes in Kinesis Data Analytics.\nKinesis supports additional Flink runtimes (FLINK-1_13, ZEPPELIN-FLINK-1_0, ZEPPELIN-FLINK-2_0), see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-kinesisanalyticsv2-application.html.\n", "before_files": [{"content": "# Copyright (c) 2012-2022, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\n\ndef validate_runtime_environment(runtime_environment):\n \"\"\"\n Validate RuntimeEnvironment for Application\n Property: Application.RuntimeEnvironment\n \"\"\"\n\n VALID_RUNTIME_ENVIRONMENTS = (\"SQL-1_0\", \"FLINK-1_6\", \"FLINK-1_8\", \"FLINK-1_11\")\n\n if runtime_environment not in VALID_RUNTIME_ENVIRONMENTS:\n raise ValueError(\n \"Application RuntimeEnvironment must be one of: %s\"\n % \", \".join(VALID_RUNTIME_ENVIRONMENTS)\n )\n return runtime_environment\n", "path": "troposphere/validators/kinesisanalyticsv2.py"}], "after_files": [{"content": "# Copyright (c) 2012-2022, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\n\ndef validate_runtime_environment(runtime_environment):\n \"\"\"\n Validate RuntimeEnvironment for Application\n Property: Application.RuntimeEnvironment\n \"\"\"\n\n VALID_RUNTIME_ENVIRONMENTS = (\n \"FLINK-1_6\",\n \"FLINK-1_8\",\n \"FLINK-1_11\",\n \"FLINK-1_13\",\n \"SQL-1_0\",\n \"ZEPPELIN-FLINK-1_0\",\n \"ZEPPELIN-FLINK-2_0\",\n )\n\n if runtime_environment not in VALID_RUNTIME_ENVIRONMENTS:\n raise ValueError(\n \"Application RuntimeEnvironment must be one of: %s\"\n % \", \".join(VALID_RUNTIME_ENVIRONMENTS)\n )\n return runtime_environment\n", "path": "troposphere/validators/kinesisanalyticsv2.py"}]} | 534 | 233 |
gh_patches_debug_979 | rasdani/github-patches | git_diff | acl-org__acl-anthology-3022 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Paper Metadata: 2023.findings-emnlp.1054
### Confirm that this is a metadata correction
- [X] I want to file corrections to make the metadata match the PDF file hosted on the ACL Anthology.
### Anthology ID
2023.findings-emnlp.1054
### Type of Paper Metadata Correction
- [X] Paper Title
- [ ] Paper Abstract
- [ ] Author Name(s)
### Correction to Paper Title
Please change the paper title as it appears in Cite (ACL) and Cite (Informal) to "Measuring Pointwise 𝒱-Usable Information In-Context-ly"
### Correction to Paper Abstract
_No response_
### Correction to Author Name(s)
_No response_
--- END ISSUE ---
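As background on why this title is easy to get subtly wrong: the script letter 𝒱 is the kind of glyph the Anthology typically derives from TeX math markup, and the converter involved is the `TexMath` class in the file below, which resolves commands such as `\mathcal{V}` through the bundled `unimathsymbols.txt` table. A purely illustrative probe of that mapping follows; it assumes `bin/` is on the import path and that the symbols file carries an entry for `\mathcal{V}`, and the actual metadata fix may only touch the XML:
```python
# Illustrative only: see how the symbols table resolves \mathcal{V}.
from anthology.texmath import TexMath  # assumes bin/ is on the import path

tm = TexMath()
print(tm.cmd_map.get("mathcal{V}"))  # expected: '𝒱' (U+1D4B1) if an entry exists
```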
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bin/anthology/texmath.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2019 Marcel Bollmann <[email protected]>
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from lxml import etree
18 import csv
19 import logging as log
20 import os
21 from TexSoup import TexSoup
22 from TexSoup.data import TexCmd, TexText, TexGroup
23
24 FUNCTION_NAMES = ("lim", "log")
25 TEX_TO_HTML = {
26 "mathrm": ("span", {"class": "font-weight-normal"}),
27 "textrm": ("span", {"class": "font-weight-normal"}),
28 "text": ("span", {"class": "font-weight-normal"}),
29 "mathbf": ("strong", {}),
30 "textbf": ("strong", {}),
31 "boldsymbol": ("strong", {}),
32 "mathit": ("em", {}),
33 "textit": ("em", {}),
34 "emph": ("em", {}),
35 "textsc": ("span", {"style": "font-variant: small-caps;"}),
36 "texttt": ("span", {"class": "text-monospace"}),
37 "textsubscript": ("sub", {}),
38 "textsuperscript": ("sup", {}),
39 }
40 REMOVED_COMMANDS = ("bf", "rm", "it", "sc")
41
42
43 def _append_text(text, trg):
44 if not text:
45 return
46 if len(trg):
47 if trg[-1].tail is not None:
48 trg[-1].tail += text
49 else:
50 trg[-1].tail = text
51 else:
52 if trg.text is not None:
53 trg.text += text
54 else:
55 trg.text = text
56
57
58 class TexMath:
59 """Interpreter and converter for TeX inline math expressions.
60
61 This class uses TexSoup (https://github.com/alvinwan/TexSoup) to parse a TeX
62 expression and converts it to valid HTML. The conversion combines a small
63 number of handwritten rules with a mapping of LaTeX math mode commands to
64 Unicode symbols (http://milde.users.sourceforge.net/LUCR/Math/). Parts that
65 cannot be interpreted using this simple method are preserved as raw LaTeX.
66 """
67
68 def __init__(self, symbolsfile=None):
69 self.cmd_map = {}
70 if symbolsfile is None:
71 symbolsfile = os.path.join(
72 os.path.dirname(os.path.abspath(__file__)), "unimathsymbols.txt"
73 )
74 self.load_symbols(symbolsfile)
75
76 def load_symbols(self, filename):
77 with open(filename, "r", encoding="utf8") as f:
78 reader = csv.reader(f, delimiter="^")
79 for row in reader:
80 if row[0].startswith("#"): # comment
81 continue
82 assert len(row) == 8, "Expect eight-column format"
83 char, cmd = row[1], row[2]
84 if cmd.startswith("\\"):
85 self.cmd_map[cmd[1:]] = char
86 if row[-1].startswith("= ") and ", " in row[-1]:
87 # last column sometimes contains alternative command
88 cmd = row[-1][2:].split(", ")[0]
89 if cmd.startswith("\\"):
90 self.cmd_map[cmd[1:]] = char
91
92 def _parse(self, everything, trg):
93 """Parses a list of TeX constituents into an lxml.etree._Element.
94
95 Arguments:
96 everything: An iterator over TeX constituents as provided by TexSoup
97 trg: The lxml.etree._Element to parse the expression into
98
99 The approach of iterating over the TeX constituents roughly follows
100 <https://github.com/alvinwan/TexSoup/blob/master/examples/list_everything.py>.
101 """
102 sxscript = False # Tracks whether we're in a subscript/superscript
103 for code in everything:
104 if isinstance(code, TexCmd):
105 # code is a TeX command
106 self._parse_command(code, trg)
107 elif isinstance(code, TexText) or isinstance(code, str):
108 # code is text
109 sxscript = self._parse_text(code, trg)
110 elif isinstance(code, TexGroup):
111 # If in subscript/superscript, wrap the entire element in respective tag
112 if sxscript:
113 my_trg = etree.Element(sxscript)
114 self._parse(code.contents, my_trg)
115 trg.append(my_trg)
116 sxscript = False
117 # Otherwise, just parse it normally
118 else:
119 self._parse(code.contents, trg)
120 else:
121 log.error(f"TeX-math parser got unhandled element: {type(code)}")
122
123 def _parse_command(self, code, trg):
124 args = list(code.args)
125 name = str(code.name)
126 # Check if the command is in the list of known Unicode mappings
127 if name in self.cmd_map:
128 _append_text(self.cmd_map[name], trg)
129 self._parse(args, trg)
130 # Check if command + arguments is in the list of known Unicode mappings
131 # (this covers commands like "\mathcal{A}", which have their own entries)
132 elif str(code)[1:] in self.cmd_map:
133 _append_text(self.cmd_map[str(code)[1:]], trg)
134 # Check if command is a known function name (e.g. "log")
135 elif name in FUNCTION_NAMES:
136 sx = etree.Element("span")
137 sx.attrib["class"] = "tex-math-function"
138 sx.text = str(name)
139 trg.append(sx)
140 self._parse(args, trg)
141 # Handle fractions
142 elif name == "frac":
143 self._parse_fraction(args, trg)
144 # Handle commands with simple HTML tag substitutions
145 elif name in TEX_TO_HTML:
146 elem_name, elem_attrib = TEX_TO_HTML[name]
147 sx = etree.Element(elem_name, attrib=elem_attrib)
148 self._parse(args, sx)
149 trg.append(sx)
150 # Known, but unsupported formatting tags that will just be removed
151 elif name in REMOVED_COMMANDS and not args:
152 pass
153 # Give up, but preserve element
154 else:
155 log.warn(f"Unknown TeX-math command: {code}")
156 self._append_unparsed(code, trg)
157
158 def _parse_fraction(self, args, trg):
159 if len(args) != 2:
160 log.warn(f"Couldn't parse \\frac: got {len(args)} arguments, expected 2")
161 self._append_unparsed({'name': 'frac', 'args': args}, trg)
162 else:
163 # Represent numerator of fraction as superscript
164 sx = etree.Element("sup")
165 self._parse([args[0]], sx)
166 trg.append(sx)
167 # Unicode symbol for fraction slash
168 _append_text("\u2044", trg)
169 # Represent denominator of fraction as subscript
170 sx = etree.Element("sub")
171 self._parse([args[1]], sx)
172 trg.append(sx)
173
174 def _parse_text(self, code, trg):
175 text = str(code)
176 # TexSoup doesn't parse any non-alpha command as a command. Ex: \$
177 # However it does seperate them into their own text part. Ex: 'r\\&dd' -> ['r', '\\&', 'dd']
178 # Therefore try to do command mapping replacement of all text beginning with \ and of length 2
179 if len(text) == 2 and text[0] == '\\':
180 text = self.cmd_map.get(text[1], text)
181 _append_text(text, trg)
182 return
183 # parse ^ and _ (won't get recognized as separate nodes by TexSoup)
184 sxscript = False
185 if "^" in text or "_" in text:
186 buf = ""
187 for char in text:
188 if char == "^" or char == "_":
189 _append_text(buf, trg)
190 buf = ""
191 sxscript = "sup" if char == "^" else "sub"
192 elif sxscript:
193 sx = etree.Element(sxscript)
194 sx.text = char
195 trg.append(sx)
196 sxscript = False
197 else:
198 buf += char
199 text = buf
200 # Append as text
201 _append_text(text, trg)
202 return sxscript
203
204 def _append_unparsed(self, code, trg):
205 pre = etree.Element("span")
206 pre.attrib["class"] = "tex-math-unparsed"
207 pre.text = f"\\{code.name}{code.args}"
208 trg.append(pre)
209
210 def etree_to_html(self, element):
211 result = etree.Element("span")
212 result.attrib["class"] = "tex-math"
213 result.tail = element.tail # Preserve tail
214 self._parse(TexSoup(element.text).expr.all, result)
215 return result
216
217 def to_html(self, element):
218 """Converts a TeX math expression to HTML markup."""
219 if isinstance(element, etree._Element):
220 return self.etree_to_html(element)
221 elif isinstance(element, str):
222 value = self.etree_to_html(etree.fromstring(f"<span>{element}</span>"))
223 return etree.tostring(value)
224 raise NotImplementedError(f"Cannot convert elements of type {type(element)}")
225
226 def to_unicode(self, element):
227 """Converts a TeX math expression to a Unicode string.
228
229 This will perform the same conversions as `to_html()`, but strip out the
230 HTML tags afterwards.
231 """
232 element = self.to_html(element)
233 return etree.tostring(element, encoding="unicode", method="text")
234
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bin/anthology/texmath.py b/bin/anthology/texmath.py
--- a/bin/anthology/texmath.py
+++ b/bin/anthology/texmath.py
@@ -230,4 +230,4 @@
HTML tags afterwards.
"""
element = self.to_html(element)
- return etree.tostring(element, encoding="unicode", method="text")
+ return etree.tostring(element, encoding="unicode", method="text", with_tail=False)
| {"golden_diff": "diff --git a/bin/anthology/texmath.py b/bin/anthology/texmath.py\n--- a/bin/anthology/texmath.py\n+++ b/bin/anthology/texmath.py\n@@ -230,4 +230,4 @@\n HTML tags afterwards.\n \"\"\"\n element = self.to_html(element)\n- return etree.tostring(element, encoding=\"unicode\", method=\"text\")\n+ return etree.tostring(element, encoding=\"unicode\", method=\"text\", with_tail=False)\n", "issue": "Paper Metadata: 2023.findings-emnlp.1054\n### Confirm that this is a metadata correction\n\n- [X] I want to file corrections to make the metadata match the PDF file hosted on the ACL Anthology.\n\n### Anthology ID\n\n2023.findings-emnlp.1054\n\n### Type of Paper Metadata Correction\n\n- [X] Paper Title\n- [ ] Paper Abstract\n- [ ] Author Name(s)\n\n### Correction to Paper Title\n\nPlease change the paper title appeared in Cite (ACL) and Cite (Informal) to \"Measuring Pointwise \ud835\udcb1-Usable Information In-Context-ly\"\n\n### Correction to Paper Abstract\n\n_No response_\n\n### Correction to Author Name(s)\n\n_No response_\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2019 Marcel Bollmann <[email protected]>\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom lxml import etree\nimport csv\nimport logging as log\nimport os\nfrom TexSoup import TexSoup\nfrom TexSoup.data import TexCmd, TexText, TexGroup\n\nFUNCTION_NAMES = (\"lim\", \"log\")\nTEX_TO_HTML = {\n \"mathrm\": (\"span\", {\"class\": \"font-weight-normal\"}),\n \"textrm\": (\"span\", {\"class\": \"font-weight-normal\"}),\n \"text\": (\"span\", {\"class\": \"font-weight-normal\"}),\n \"mathbf\": (\"strong\", {}),\n \"textbf\": (\"strong\", {}),\n \"boldsymbol\": (\"strong\", {}),\n \"mathit\": (\"em\", {}),\n \"textit\": (\"em\", {}),\n \"emph\": (\"em\", {}),\n \"textsc\": (\"span\", {\"style\": \"font-variant: small-caps;\"}),\n \"texttt\": (\"span\", {\"class\": \"text-monospace\"}),\n \"textsubscript\": (\"sub\", {}),\n \"textsuperscript\": (\"sup\", {}),\n}\nREMOVED_COMMANDS = (\"bf\", \"rm\", \"it\", \"sc\")\n\n\ndef _append_text(text, trg):\n if not text:\n return\n if len(trg):\n if trg[-1].tail is not None:\n trg[-1].tail += text\n else:\n trg[-1].tail = text\n else:\n if trg.text is not None:\n trg.text += text\n else:\n trg.text = text\n\n\nclass TexMath:\n \"\"\"Interpreter and converter for TeX inline math expressions.\n\n This class uses TexSoup (https://github.com/alvinwan/TexSoup) to parse a TeX\n expression and converts it to valid HTML. The conversion combines a small\n number of handwritten rules with a mapping of LaTeX math mode commands to\n Unicode symbols (http://milde.users.sourceforge.net/LUCR/Math/). 
Parts that\n cannot be interpreted using this simple method are preserved as raw LaTeX.\n \"\"\"\n\n def __init__(self, symbolsfile=None):\n self.cmd_map = {}\n if symbolsfile is None:\n symbolsfile = os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"unimathsymbols.txt\"\n )\n self.load_symbols(symbolsfile)\n\n def load_symbols(self, filename):\n with open(filename, \"r\", encoding=\"utf8\") as f:\n reader = csv.reader(f, delimiter=\"^\")\n for row in reader:\n if row[0].startswith(\"#\"): # comment\n continue\n assert len(row) == 8, \"Expect eight-column format\"\n char, cmd = row[1], row[2]\n if cmd.startswith(\"\\\\\"):\n self.cmd_map[cmd[1:]] = char\n if row[-1].startswith(\"= \") and \", \" in row[-1]:\n # last column sometimes contains alternative command\n cmd = row[-1][2:].split(\", \")[0]\n if cmd.startswith(\"\\\\\"):\n self.cmd_map[cmd[1:]] = char\n\n def _parse(self, everything, trg):\n \"\"\"Parses a list of TeX constituents into an lxml.etree._Element.\n\n Arguments:\n everything: An iterator over TeX constituents as provided by TexSoup\n trg: The lxml.etree._Element to parse the expression into\n\n The approach of iterating over the TeX constituents roughly follows\n <https://github.com/alvinwan/TexSoup/blob/master/examples/list_everything.py>.\n \"\"\"\n sxscript = False # Tracks whether we're in a subscript/superscript\n for code in everything:\n if isinstance(code, TexCmd):\n # code is a TeX command\n self._parse_command(code, trg)\n elif isinstance(code, TexText) or isinstance(code, str):\n # code is text\n sxscript = self._parse_text(code, trg)\n elif isinstance(code, TexGroup):\n # If in subscript/superscript, wrap the entire element in respective tag\n if sxscript:\n my_trg = etree.Element(sxscript)\n self._parse(code.contents, my_trg)\n trg.append(my_trg)\n sxscript = False\n # Otherwise, just parse it normally\n else:\n self._parse(code.contents, trg)\n else:\n log.error(f\"TeX-math parser got unhandled element: {type(code)}\")\n\n def _parse_command(self, code, trg):\n args = list(code.args)\n name = str(code.name)\n # Check if the command is in the list of known Unicode mappings\n if name in self.cmd_map:\n _append_text(self.cmd_map[name], trg)\n self._parse(args, trg)\n # Check if command + arguments is in the list of known Unicode mappings\n # (this covers commands like \"\\mathcal{A}\", which have their own entries)\n elif str(code)[1:] in self.cmd_map:\n _append_text(self.cmd_map[str(code)[1:]], trg)\n # Check if command is a known function name (e.g. 
\"log\")\n elif name in FUNCTION_NAMES:\n sx = etree.Element(\"span\")\n sx.attrib[\"class\"] = \"tex-math-function\"\n sx.text = str(name)\n trg.append(sx)\n self._parse(args, trg)\n # Handle fractions\n elif name == \"frac\":\n self._parse_fraction(args, trg)\n # Handle commands with simple HTML tag substitutions\n elif name in TEX_TO_HTML:\n elem_name, elem_attrib = TEX_TO_HTML[name]\n sx = etree.Element(elem_name, attrib=elem_attrib)\n self._parse(args, sx)\n trg.append(sx)\n # Known, but unsupported formatting tags that will just be removed\n elif name in REMOVED_COMMANDS and not args:\n pass\n # Give up, but preserve element\n else:\n log.warn(f\"Unknown TeX-math command: {code}\")\n self._append_unparsed(code, trg)\n\n def _parse_fraction(self, args, trg):\n if len(args) != 2:\n log.warn(f\"Couldn't parse \\\\frac: got {len(args)} arguments, expected 2\")\n self._append_unparsed({'name': 'frac', 'args': args}, trg)\n else:\n # Represent numerator of fraction as superscript\n sx = etree.Element(\"sup\")\n self._parse([args[0]], sx)\n trg.append(sx)\n # Unicode symbol for fraction slash\n _append_text(\"\\u2044\", trg)\n # Represent denominator of fraction as subscript\n sx = etree.Element(\"sub\")\n self._parse([args[1]], sx)\n trg.append(sx)\n\n def _parse_text(self, code, trg):\n text = str(code)\n # TexSoup doesn't parse any non-alpha command as a command. Ex: \\$\n # However it does seperate them into their own text part. Ex: 'r\\\\&dd' -> ['r', '\\\\&', 'dd']\n # Therefore try to do command mapping replacement of all text beginning with \\ and of length 2\n if len(text) == 2 and text[0] == '\\\\':\n text = self.cmd_map.get(text[1], text)\n _append_text(text, trg)\n return\n # parse ^ and _ (won't get recognized as separate nodes by TexSoup)\n sxscript = False\n if \"^\" in text or \"_\" in text:\n buf = \"\"\n for char in text:\n if char == \"^\" or char == \"_\":\n _append_text(buf, trg)\n buf = \"\"\n sxscript = \"sup\" if char == \"^\" else \"sub\"\n elif sxscript:\n sx = etree.Element(sxscript)\n sx.text = char\n trg.append(sx)\n sxscript = False\n else:\n buf += char\n text = buf\n # Append as text\n _append_text(text, trg)\n return sxscript\n\n def _append_unparsed(self, code, trg):\n pre = etree.Element(\"span\")\n pre.attrib[\"class\"] = \"tex-math-unparsed\"\n pre.text = f\"\\\\{code.name}{code.args}\"\n trg.append(pre)\n\n def etree_to_html(self, element):\n result = etree.Element(\"span\")\n result.attrib[\"class\"] = \"tex-math\"\n result.tail = element.tail # Preserve tail\n self._parse(TexSoup(element.text).expr.all, result)\n return result\n\n def to_html(self, element):\n \"\"\"Converts a TeX math expression to HTML markup.\"\"\"\n if isinstance(element, etree._Element):\n return self.etree_to_html(element)\n elif isinstance(element, str):\n value = self.etree_to_html(etree.fromstring(f\"<span>{element}</span>\"))\n return etree.tostring(value)\n raise NotImplementedError(f\"Cannot convert elements of type {type(element)}\")\n\n def to_unicode(self, element):\n \"\"\"Converts a TeX math expression to a Unicode string.\n\n This will perform the same conversions as `to_html()`, but strip out the\n HTML tags afterwards.\n \"\"\"\n element = self.to_html(element)\n return etree.tostring(element, encoding=\"unicode\", method=\"text\")\n", "path": "bin/anthology/texmath.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2019 Marcel Bollmann <[email protected]>\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may 
not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom lxml import etree\nimport csv\nimport logging as log\nimport os\nfrom TexSoup import TexSoup\nfrom TexSoup.data import TexCmd, TexText, TexGroup\n\nFUNCTION_NAMES = (\"lim\", \"log\")\nTEX_TO_HTML = {\n \"mathrm\": (\"span\", {\"class\": \"font-weight-normal\"}),\n \"textrm\": (\"span\", {\"class\": \"font-weight-normal\"}),\n \"text\": (\"span\", {\"class\": \"font-weight-normal\"}),\n \"mathbf\": (\"strong\", {}),\n \"textbf\": (\"strong\", {}),\n \"boldsymbol\": (\"strong\", {}),\n \"mathit\": (\"em\", {}),\n \"textit\": (\"em\", {}),\n \"emph\": (\"em\", {}),\n \"textsc\": (\"span\", {\"style\": \"font-variant: small-caps;\"}),\n \"texttt\": (\"span\", {\"class\": \"text-monospace\"}),\n \"textsubscript\": (\"sub\", {}),\n \"textsuperscript\": (\"sup\", {}),\n}\nREMOVED_COMMANDS = (\"bf\", \"rm\", \"it\", \"sc\")\n\n\ndef _append_text(text, trg):\n if not text:\n return\n if len(trg):\n if trg[-1].tail is not None:\n trg[-1].tail += text\n else:\n trg[-1].tail = text\n else:\n if trg.text is not None:\n trg.text += text\n else:\n trg.text = text\n\n\nclass TexMath:\n \"\"\"Interpreter and converter for TeX inline math expressions.\n\n This class uses TexSoup (https://github.com/alvinwan/TexSoup) to parse a TeX\n expression and converts it to valid HTML. The conversion combines a small\n number of handwritten rules with a mapping of LaTeX math mode commands to\n Unicode symbols (http://milde.users.sourceforge.net/LUCR/Math/). 
Parts that\n cannot be interpreted using this simple method are preserved as raw LaTeX.\n \"\"\"\n\n def __init__(self, symbolsfile=None):\n self.cmd_map = {}\n if symbolsfile is None:\n symbolsfile = os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"unimathsymbols.txt\"\n )\n self.load_symbols(symbolsfile)\n\n def load_symbols(self, filename):\n with open(filename, \"r\", encoding=\"utf8\") as f:\n reader = csv.reader(f, delimiter=\"^\")\n for row in reader:\n if row[0].startswith(\"#\"): # comment\n continue\n assert len(row) == 8, \"Expect eight-column format\"\n char, cmd = row[1], row[2]\n if cmd.startswith(\"\\\\\"):\n self.cmd_map[cmd[1:]] = char\n if row[-1].startswith(\"= \") and \", \" in row[-1]:\n # last column sometimes contains alternative command\n cmd = row[-1][2:].split(\", \")[0]\n if cmd.startswith(\"\\\\\"):\n self.cmd_map[cmd[1:]] = char\n\n def _parse(self, everything, trg):\n \"\"\"Parses a list of TeX constituents into an lxml.etree._Element.\n\n Arguments:\n everything: An iterator over TeX constituents as provided by TexSoup\n trg: The lxml.etree._Element to parse the expression into\n\n The approach of iterating over the TeX constituents roughly follows\n <https://github.com/alvinwan/TexSoup/blob/master/examples/list_everything.py>.\n \"\"\"\n sxscript = False # Tracks whether we're in a subscript/superscript\n for code in everything:\n if isinstance(code, TexCmd):\n # code is a TeX command\n self._parse_command(code, trg)\n elif isinstance(code, TexText) or isinstance(code, str):\n # code is text\n sxscript = self._parse_text(code, trg)\n elif isinstance(code, TexGroup):\n # If in subscript/superscript, wrap the entire element in respective tag\n if sxscript:\n my_trg = etree.Element(sxscript)\n self._parse(code.contents, my_trg)\n trg.append(my_trg)\n sxscript = False\n # Otherwise, just parse it normally\n else:\n self._parse(code.contents, trg)\n else:\n log.error(f\"TeX-math parser got unhandled element: {type(code)}\")\n\n def _parse_command(self, code, trg):\n args = list(code.args)\n name = str(code.name)\n # Check if the command is in the list of known Unicode mappings\n if name in self.cmd_map:\n _append_text(self.cmd_map[name], trg)\n self._parse(args, trg)\n # Check if command + arguments is in the list of known Unicode mappings\n # (this covers commands like \"\\mathcal{A}\", which have their own entries)\n elif str(code)[1:] in self.cmd_map:\n _append_text(self.cmd_map[str(code)[1:]], trg)\n # Check if command is a known function name (e.g. 
\"log\")\n elif name in FUNCTION_NAMES:\n sx = etree.Element(\"span\")\n sx.attrib[\"class\"] = \"tex-math-function\"\n sx.text = str(name)\n trg.append(sx)\n self._parse(args, trg)\n # Handle fractions\n elif name == \"frac\":\n self._parse_fraction(args, trg)\n # Handle commands with simple HTML tag substitutions\n elif name in TEX_TO_HTML:\n elem_name, elem_attrib = TEX_TO_HTML[name]\n sx = etree.Element(elem_name, attrib=elem_attrib)\n self._parse(args, sx)\n trg.append(sx)\n # Known, but unsupported formatting tags that will just be removed\n elif name in REMOVED_COMMANDS and not args:\n pass\n # Give up, but preserve element\n else:\n log.warn(f\"Unknown TeX-math command: {code}\")\n self._append_unparsed(code, trg)\n\n def _parse_fraction(self, args, trg):\n if len(args) != 2:\n log.warn(f\"Couldn't parse \\\\frac: got {len(args)} arguments, expected 2\")\n self._append_unparsed({'name': 'frac', 'args': args}, trg)\n else:\n # Represent numerator of fraction as superscript\n sx = etree.Element(\"sup\")\n self._parse([args[0]], sx)\n trg.append(sx)\n # Unicode symbol for fraction slash\n _append_text(\"\\u2044\", trg)\n # Represent denominator of fraction as subscript\n sx = etree.Element(\"sub\")\n self._parse([args[1]], sx)\n trg.append(sx)\n\n def _parse_text(self, code, trg):\n text = str(code)\n # TexSoup doesn't parse any non-alpha command as a command. Ex: \\$\n # However it does seperate them into their own text part. Ex: 'r\\\\&dd' -> ['r', '\\\\&', 'dd']\n # Therefore try to do command mapping replacement of all text beginning with \\ and of length 2\n if len(text) == 2 and text[0] == '\\\\':\n text = self.cmd_map.get(text[1], text)\n _append_text(text, trg)\n return\n # parse ^ and _ (won't get recognized as separate nodes by TexSoup)\n sxscript = False\n if \"^\" in text or \"_\" in text:\n buf = \"\"\n for char in text:\n if char == \"^\" or char == \"_\":\n _append_text(buf, trg)\n buf = \"\"\n sxscript = \"sup\" if char == \"^\" else \"sub\"\n elif sxscript:\n sx = etree.Element(sxscript)\n sx.text = char\n trg.append(sx)\n sxscript = False\n else:\n buf += char\n text = buf\n # Append as text\n _append_text(text, trg)\n return sxscript\n\n def _append_unparsed(self, code, trg):\n pre = etree.Element(\"span\")\n pre.attrib[\"class\"] = \"tex-math-unparsed\"\n pre.text = f\"\\\\{code.name}{code.args}\"\n trg.append(pre)\n\n def etree_to_html(self, element):\n result = etree.Element(\"span\")\n result.attrib[\"class\"] = \"tex-math\"\n result.tail = element.tail # Preserve tail\n self._parse(TexSoup(element.text).expr.all, result)\n return result\n\n def to_html(self, element):\n \"\"\"Converts a TeX math expression to HTML markup.\"\"\"\n if isinstance(element, etree._Element):\n return self.etree_to_html(element)\n elif isinstance(element, str):\n value = self.etree_to_html(etree.fromstring(f\"<span>{element}</span>\"))\n return etree.tostring(value)\n raise NotImplementedError(f\"Cannot convert elements of type {type(element)}\")\n\n def to_unicode(self, element):\n \"\"\"Converts a TeX math expression to a Unicode string.\n\n This will perform the same conversions as `to_html()`, but strip out the\n HTML tags afterwards.\n \"\"\"\n element = self.to_html(element)\n return etree.tostring(element, encoding=\"unicode\", method=\"text\", with_tail=False)\n", "path": "bin/anthology/texmath.py"}]} | 3,158 | 109 |
gh_patches_debug_6154 | rasdani/github-patches | git_diff | litestar-org__litestar-1659 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
That assumption does not hold in general, especially for any kind of virtual filesystem (e.g. a zipped package). I think this check should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
--- END ISSUE ---
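A minimal sketch of the failure mode, assuming a hypothetical installed package named `my_package` that ships a `static/` folder: with `importlib.resources`, that "directory" may live inside a zip, so it never resolves to a real on-disk `DirectoryPath`.

```python
from importlib import resources

# "my_package" is a made-up example package that ships a static/ folder.
static_root = resources.files("my_package") / "static"

# The Traversable API works without any on-disk path, even from a zipped package...
print(static_root.is_dir())
print((static_root / "index.html").read_text())

# ...whereas a pydantic DirectoryPath field would reject it, because no such
# directory exists in the local filesystem.
```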
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/contrib/repository/filters.py`
Content:
```
1 """Collection filter datastructures."""
2 from __future__ import annotations
3
4 from dataclasses import dataclass
5 from datetime import datetime # noqa: TCH003
6 from typing import TYPE_CHECKING, Generic, Literal, TypeVar
7
8 if TYPE_CHECKING:
9 from collections import abc
10
11
12 T = TypeVar("T")
13
14 __all__ = ["BeforeAfter", "CollectionFilter", "LimitOffset", "OrderBy", "SearchFilter"]
15
16
17 @dataclass
18 class BeforeAfter:
19 """Data required to filter a query on a ``datetime`` column."""
20
21 field_name: str
22 """Name of the model attribute to filter on."""
23 before: datetime | None
24 """Filter results where field earlier than this."""
25 after: datetime | None
26 """Filter results where field later than this."""
27
28
29 @dataclass
30 class CollectionFilter(Generic[T]):
31 """Data required to construct a ``WHERE ... IN (...)`` clause."""
32
33 field_name: str
34 """Name of the model attribute to filter on."""
35 values: abc.Collection[T]
36 """Values for ``IN`` clause."""
37
38
39 @dataclass
40 class LimitOffset:
41 """Data required to add limit/offset filtering to a query."""
42
43 limit: int
44 """Value for ``LIMIT`` clause of query."""
45 offset: int
46 """Value for ``OFFSET`` clause of query."""
47
48
49 @dataclass
50 class OrderBy:
51 """Data required to construct a ``ORDER BY ...`` clause."""
52
53 field_name: str
54 """Name of the model attribute to sort on."""
55 sort_order: Literal["asc", "desc"] = "asc"
56 """Sort ascending or descending"""
57
58
59 @dataclass
60 class SearchFilter:
61 """Data required to construct a ``WHERE field_name LIKE '%' || :value || '%'`` clause."""
62
63 field_name: str
64 """Name of the model attribute to sort on."""
65 value: str
66 """Values for ``LIKE`` clause."""
67 ignore_case: bool | None = False
68 """Should the search be case insensitive."""
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/litestar/contrib/repository/filters.py b/litestar/contrib/repository/filters.py
--- a/litestar/contrib/repository/filters.py
+++ b/litestar/contrib/repository/filters.py
@@ -1,13 +1,10 @@
"""Collection filter datastructures."""
from __future__ import annotations
+from collections import abc # noqa: TCH003
from dataclasses import dataclass
from datetime import datetime # noqa: TCH003
-from typing import TYPE_CHECKING, Generic, Literal, TypeVar
-
-if TYPE_CHECKING:
- from collections import abc
-
+from typing import Generic, Literal, TypeVar
T = TypeVar("T")
| {"golden_diff": "diff --git a/litestar/contrib/repository/filters.py b/litestar/contrib/repository/filters.py\n--- a/litestar/contrib/repository/filters.py\n+++ b/litestar/contrib/repository/filters.py\n@@ -1,13 +1,10 @@\n \"\"\"Collection filter datastructures.\"\"\"\n from __future__ import annotations\n \n+from collections import abc # noqa: TCH003\n from dataclasses import dataclass\n from datetime import datetime # noqa: TCH003\n-from typing import TYPE_CHECKING, Generic, Literal, TypeVar\n-\n-if TYPE_CHECKING:\n- from collections import abc\n-\n+from typing import Generic, Literal, TypeVar\n \n T = TypeVar(\"T\")\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "\"\"\"Collection filter datastructures.\"\"\"\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom datetime import datetime # noqa: TCH003\nfrom typing import TYPE_CHECKING, Generic, Literal, TypeVar\n\nif TYPE_CHECKING:\n from collections import abc\n\n\nT = TypeVar(\"T\")\n\n__all__ = [\"BeforeAfter\", \"CollectionFilter\", \"LimitOffset\", \"OrderBy\", \"SearchFilter\"]\n\n\n@dataclass\nclass BeforeAfter:\n \"\"\"Data required to filter a query on a ``datetime`` column.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to filter on.\"\"\"\n before: datetime | None\n \"\"\"Filter results where field earlier than this.\"\"\"\n after: datetime | None\n \"\"\"Filter results where field later than this.\"\"\"\n\n\n@dataclass\nclass CollectionFilter(Generic[T]):\n \"\"\"Data required to construct a ``WHERE ... 
IN (...)`` clause.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to filter on.\"\"\"\n values: abc.Collection[T]\n \"\"\"Values for ``IN`` clause.\"\"\"\n\n\n@dataclass\nclass LimitOffset:\n \"\"\"Data required to add limit/offset filtering to a query.\"\"\"\n\n limit: int\n \"\"\"Value for ``LIMIT`` clause of query.\"\"\"\n offset: int\n \"\"\"Value for ``OFFSET`` clause of query.\"\"\"\n\n\n@dataclass\nclass OrderBy:\n \"\"\"Data required to construct a ``ORDER BY ...`` clause.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to sort on.\"\"\"\n sort_order: Literal[\"asc\", \"desc\"] = \"asc\"\n \"\"\"Sort ascending or descending\"\"\"\n\n\n@dataclass\nclass SearchFilter:\n \"\"\"Data required to construct a ``WHERE field_name LIKE '%' || :value || '%'`` clause.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to sort on.\"\"\"\n value: str\n \"\"\"Values for ``LIKE`` clause.\"\"\"\n ignore_case: bool | None = False\n \"\"\"Should the search be case insensitive.\"\"\"\n", "path": "litestar/contrib/repository/filters.py"}], "after_files": [{"content": "\"\"\"Collection filter datastructures.\"\"\"\nfrom __future__ import annotations\n\nfrom collections import abc # noqa: TCH003\nfrom dataclasses import dataclass\nfrom datetime import datetime # noqa: TCH003\nfrom typing import Generic, Literal, TypeVar\n\nT = TypeVar(\"T\")\n\n__all__ = [\"BeforeAfter\", \"CollectionFilter\", \"LimitOffset\", \"OrderBy\", \"SearchFilter\"]\n\n\n@dataclass\nclass BeforeAfter:\n \"\"\"Data required to filter a query on a ``datetime`` column.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to filter on.\"\"\"\n before: datetime | None\n \"\"\"Filter results where field earlier than this.\"\"\"\n after: datetime | None\n \"\"\"Filter results where field later than this.\"\"\"\n\n\n@dataclass\nclass CollectionFilter(Generic[T]):\n \"\"\"Data required to construct a ``WHERE ... IN (...)`` clause.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to filter on.\"\"\"\n values: abc.Collection[T]\n \"\"\"Values for ``IN`` clause.\"\"\"\n\n\n@dataclass\nclass LimitOffset:\n \"\"\"Data required to add limit/offset filtering to a query.\"\"\"\n\n limit: int\n \"\"\"Value for ``LIMIT`` clause of query.\"\"\"\n offset: int\n \"\"\"Value for ``OFFSET`` clause of query.\"\"\"\n\n\n@dataclass\nclass OrderBy:\n \"\"\"Data required to construct a ``ORDER BY ...`` clause.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to sort on.\"\"\"\n sort_order: Literal[\"asc\", \"desc\"] = \"asc\"\n \"\"\"Sort ascending or descending\"\"\"\n\n\n@dataclass\nclass SearchFilter:\n \"\"\"Data required to construct a ``WHERE field_name LIKE '%' || :value || '%'`` clause.\"\"\"\n\n field_name: str\n \"\"\"Name of the model attribute to sort on.\"\"\"\n value: str\n \"\"\"Values for ``LIKE`` clause.\"\"\"\n ignore_case: bool | None = False\n \"\"\"Should the search be case insensitive.\"\"\"\n", "path": "litestar/contrib/repository/filters.py"}]} | 994 | 155 |
gh_patches_debug_26852 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-1999 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
tiles on plans and container: blue corner missing for external projects
For external projects, the little blue corner is missing.
Observed on macOS, in both Chrome and Firefox.
<img width="400" alt="bildschirmfoto 2019-02-11 um 16 45 01" src="https://user-images.githubusercontent.com/35491681/52574395-7d708980-2e1c-11e9-8cfd-b9f8be74ea16.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/dashboard/__init__.py`
Content:
```
1 from adhocracy4.dashboard import components
2 from adhocracy4.dashboard import ProjectDashboard
3 from meinberlin.apps.projects import get_project_type
4
5
6 default_app_config = 'meinberlin.apps.dashboard.apps.Config'
7
8
9 class TypedProjectDashboard(ProjectDashboard):
10 def __init__(self, project):
11 self.project_type = get_project_type(project)
12 if self.project_type == 'bplan':
13 project = project.externalproject.bplan
14 elif self.project_type == 'external':
15 project = project.externalproject
16 elif self.project_type == 'container':
17 project = project.projectcontainer
18 super().__init__(project)
19
20 def get_project_components(self):
21 if self.project_type == 'bplan':
22 return [components.projects.get('bplan'),
23 components.projects.get('adminlog')]
24 elif self.project_type == 'external':
25 return [components.projects.get('external'),
26 components.projects.get('adminlog')]
27 elif self.project_type == 'container':
28 return [components.projects.get('container-basic'),
29 components.projects.get('container-information'),
30 components.projects.get('topics'),
31 components.projects.get('point'),
32 components.projects.get('container-projects')]
33
34 return [component for component in components.get_project_components()
35 if component.is_effective(self.project)]
36
37 def get_module_components(self):
38 if self.project_type == 'bplan':
39 return []
40 elif self.project_type == 'external':
41 return []
42 elif self.project_type == 'container':
43 return []
44
45 return components.get_module_components()
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/meinberlin/apps/dashboard/__init__.py b/meinberlin/apps/dashboard/__init__.py
--- a/meinberlin/apps/dashboard/__init__.py
+++ b/meinberlin/apps/dashboard/__init__.py
@@ -20,15 +20,20 @@
def get_project_components(self):
if self.project_type == 'bplan':
return [components.projects.get('bplan'),
+ components.projects.get('plans'),
components.projects.get('adminlog')]
elif self.project_type == 'external':
return [components.projects.get('external'),
+ components.projects.get('topics'),
+ components.projects.get('point'),
+ components.projects.get('plans'),
components.projects.get('adminlog')]
elif self.project_type == 'container':
return [components.projects.get('container-basic'),
components.projects.get('container-information'),
components.projects.get('topics'),
components.projects.get('point'),
+ components.projects.get('plans'),
components.projects.get('container-projects')]
return [component for component in components.get_project_components()
| {"golden_diff": "diff --git a/meinberlin/apps/dashboard/__init__.py b/meinberlin/apps/dashboard/__init__.py\n--- a/meinberlin/apps/dashboard/__init__.py\n+++ b/meinberlin/apps/dashboard/__init__.py\n@@ -20,15 +20,20 @@\n def get_project_components(self):\n if self.project_type == 'bplan':\n return [components.projects.get('bplan'),\n+ components.projects.get('plans'),\n components.projects.get('adminlog')]\n elif self.project_type == 'external':\n return [components.projects.get('external'),\n+ components.projects.get('topics'),\n+ components.projects.get('point'),\n+ components.projects.get('plans'),\n components.projects.get('adminlog')]\n elif self.project_type == 'container':\n return [components.projects.get('container-basic'),\n components.projects.get('container-information'),\n components.projects.get('topics'),\n components.projects.get('point'),\n+ components.projects.get('plans'),\n components.projects.get('container-projects')]\n \n return [component for component in components.get_project_components()\n", "issue": "tiles on plans and container: blue corner missing for external projects\nfor external projects the little blue corner is missing\r\n\r\nmac on chrome and firefox\r\n\r\n<img width=\"400\" alt=\"bildschirmfoto 2019-02-11 um 16 45 01\" src=\"https://user-images.githubusercontent.com/35491681/52574395-7d708980-2e1c-11e9-8cfd-b9f8be74ea16.png\">\r\n\n", "before_files": [{"content": "from adhocracy4.dashboard import components\nfrom adhocracy4.dashboard import ProjectDashboard\nfrom meinberlin.apps.projects import get_project_type\n\n\ndefault_app_config = 'meinberlin.apps.dashboard.apps.Config'\n\n\nclass TypedProjectDashboard(ProjectDashboard):\n def __init__(self, project):\n self.project_type = get_project_type(project)\n if self.project_type == 'bplan':\n project = project.externalproject.bplan\n elif self.project_type == 'external':\n project = project.externalproject\n elif self.project_type == 'container':\n project = project.projectcontainer\n super().__init__(project)\n\n def get_project_components(self):\n if self.project_type == 'bplan':\n return [components.projects.get('bplan'),\n components.projects.get('adminlog')]\n elif self.project_type == 'external':\n return [components.projects.get('external'),\n components.projects.get('adminlog')]\n elif self.project_type == 'container':\n return [components.projects.get('container-basic'),\n components.projects.get('container-information'),\n components.projects.get('topics'),\n components.projects.get('point'),\n components.projects.get('container-projects')]\n\n return [component for component in components.get_project_components()\n if component.is_effective(self.project)]\n\n def get_module_components(self):\n if self.project_type == 'bplan':\n return []\n elif self.project_type == 'external':\n return []\n elif self.project_type == 'container':\n return []\n\n return components.get_module_components()\n", "path": "meinberlin/apps/dashboard/__init__.py"}], "after_files": [{"content": "from adhocracy4.dashboard import components\nfrom adhocracy4.dashboard import ProjectDashboard\nfrom meinberlin.apps.projects import get_project_type\n\n\ndefault_app_config = 'meinberlin.apps.dashboard.apps.Config'\n\n\nclass TypedProjectDashboard(ProjectDashboard):\n def __init__(self, project):\n self.project_type = get_project_type(project)\n if self.project_type == 'bplan':\n project = project.externalproject.bplan\n elif self.project_type == 'external':\n project = project.externalproject\n elif self.project_type == 
'container':\n project = project.projectcontainer\n super().__init__(project)\n\n def get_project_components(self):\n if self.project_type == 'bplan':\n return [components.projects.get('bplan'),\n components.projects.get('plans'),\n components.projects.get('adminlog')]\n elif self.project_type == 'external':\n return [components.projects.get('external'),\n components.projects.get('topics'),\n components.projects.get('point'),\n components.projects.get('plans'),\n components.projects.get('adminlog')]\n elif self.project_type == 'container':\n return [components.projects.get('container-basic'),\n components.projects.get('container-information'),\n components.projects.get('topics'),\n components.projects.get('point'),\n components.projects.get('plans'),\n components.projects.get('container-projects')]\n\n return [component for component in components.get_project_components()\n if component.is_effective(self.project)]\n\n def get_module_components(self):\n if self.project_type == 'bplan':\n return []\n elif self.project_type == 'external':\n return []\n elif self.project_type == 'container':\n return []\n\n return components.get_module_components()\n", "path": "meinberlin/apps/dashboard/__init__.py"}]} | 793 | 230 |
gh_patches_debug_53786 | rasdani/github-patches | git_diff | psychopy__psychopy-667 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inconsistent method signature (or missing hook?)
Windows XP, Psychopy 1.80.06
window.flip calls dispatch_events() on each entry in _eventDispatchers (pyglet winType), but DirectInputDevice (hardware/joystick/pyglet_input/directinput.py) only provides that method under a different name, _dispatch_events, so the call fails. Needs to be fixed somewhere.
--- END ISSUE ---
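A rough sketch of the naming mismatch, assuming the pyglet-style event loop simply duck-types on a public `dispatch_events()` method; a thin alias of the shape below is what the report is asking for.

```python
class DirectInputDevice:
    def _dispatch_events(self):
        # existing private method that drains the DirectInput event buffer
        ...

    def dispatch_events(self):
        # public hook expected by window.flip()'s _eventDispatchers loop
        self._dispatch_events()
```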
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `psychopy/hardware/joystick/pyglet_input/directinput.py`
Content:
```
1 #!/usr/bin/env python2
2 # $Id:$
3
4 import ctypes
5
6 import app
7 import base
8 import pyglet
9 from pyglet.window import win32
10 import dinput
11 from pyglet.window.win32 import _kernel32
12
13 # These instance names are not defined anywhere, obtained by experiment. The
14 # GUID names (which seem to be ideally what are needed) are wrong/missing for
15 # most of my devices.
16
17 _abs_instance_names = {
18 0: 'x',
19 1: 'y',
20 2: 'z',
21 3: 'rx',
22 4: 'ry',
23 5: 'rz',
24 }
25
26 _rel_instance_names = {
27 0: 'x',
28 1: 'y',
29 2: 'wheel',
30 }
31
32 _btn_instance_names = {}
33
34 def _create_control(object_instance):
35 raw_name = object_instance.tszName
36 type = object_instance.dwType
37 instance = dinput.DIDFT_GETINSTANCE(type)
38
39 if type & dinput.DIDFT_ABSAXIS:
40 name = _abs_instance_names.get(instance)
41 control = base.AbsoluteAxis(name, 0, 0xffff, raw_name)
42 elif type & dinput.DIDFT_RELAXIS:
43 name = _rel_instance_names.get(instance)
44 control = base.RelativeAxis(name, raw_name)
45 elif type & dinput.DIDFT_BUTTON:
46 name = _btn_instance_names.get(instance)
47 control = base.Button(name, raw_name)
48 elif type & dinput.DIDFT_POV:
49 control = base.AbsoluteAxis(base.AbsoluteAxis.HAT,
50 0, 0xffffffff, raw_name)
51 else:
52 return
53
54 control._type = object_instance.dwType
55 return control
56
57 class DirectInputDevice(base.Device):
58 def __init__(self, display, device, device_instance):
59 name = device_instance.tszInstanceName
60 super(DirectInputDevice, self).__init__(display, name)
61
62 self._type = device_instance.dwDevType & 0xff
63 self._subtype = device_instance.dwDevType & 0xff00
64
65 self._device = device
66 self._init_controls()
67 self._set_format()
68
69 def _init_controls(self):
70 self.controls = []
71 self._device.EnumObjects(
72 dinput.LPDIENUMDEVICEOBJECTSCALLBACK(self._object_enum),
73 None, dinput.DIDFT_ALL)
74
75 def _object_enum(self, object_instance, arg):
76 control = _create_control(object_instance.contents)
77 if control:
78 self.controls.append(control)
79 return dinput.DIENUM_CONTINUE
80
81 def _set_format(self):
82 if not self.controls:
83 return
84
85 object_formats = (dinput.DIOBJECTDATAFORMAT * len(self.controls))()
86 offset = 0
87 for object_format, control in zip(object_formats, self.controls):
88 object_format.dwOfs = offset
89 object_format.dwType = control._type
90 offset += 4
91
92 format = dinput.DIDATAFORMAT()
93 format.dwSize = ctypes.sizeof(format)
94 format.dwObjSize = ctypes.sizeof(dinput.DIOBJECTDATAFORMAT)
95 format.dwFlags = 0
96 format.dwDataSize = offset
97 format.dwNumObjs = len(object_formats)
98 format.rgodf = ctypes.cast(ctypes.pointer(object_formats),
99 dinput.LPDIOBJECTDATAFORMAT)
100 self._device.SetDataFormat(format)
101
102 prop = dinput.DIPROPDWORD()
103 prop.diph.dwSize = ctypes.sizeof(prop)
104 prop.diph.dwHeaderSize = ctypes.sizeof(prop.diph)
105 prop.diph.dwObj = 0
106 prop.diph.dwHow = dinput.DIPH_DEVICE
107 prop.dwData = 64 * ctypes.sizeof(dinput.DIDATAFORMAT)
108 self._device.SetProperty(dinput.DIPROP_BUFFERSIZE,
109 ctypes.byref(prop.diph))
110
111 def open(self, window=None, exclusive=False):
112 if not self.controls:
113 return
114
115 if window is None:
116 # Pick any open window, or the shadow window if no windows
117 # have been created yet.
118 window = pyglet.gl._shadow_window
119 for window in app.windows:
120 break
121
122 flags = dinput.DISCL_BACKGROUND
123 if exclusive:
124 flags |= dinput.DISCL_EXCLUSIVE
125 else:
126 flags |= dinput.DISCL_NONEXCLUSIVE
127
128 self._wait_object = _kernel32.CreateEventW(None, False, False, None)
129 self._device.SetEventNotification(self._wait_object)
130 app.platform_event_loop.add_wait_object(self._wait_object,
131 self._dispatch_events)
132
133 self._device.SetCooperativeLevel(window._hwnd, flags)
134 self._device.Acquire()
135
136 def close(self):
137 if not self.controls:
138 return
139
140 app.platform_event_loop.remove_wait_object(self._wait_object)
141
142 self._device.Unacquire()
143 self._device.SetEventNotification(None)
144
145 _kernel32.CloseHandle(self._wait_object)
146
147 def get_controls(self):
148 return self.controls
149
150 def _dispatch_events(self):
151 if not self.controls:
152 return
153
154 events = (dinput.DIDEVICEOBJECTDATA * 64)()
155 n_events = win32.DWORD(len(events))
156 self._device.GetDeviceData(ctypes.sizeof(dinput.DIDEVICEOBJECTDATA),
157 ctypes.cast(ctypes.pointer(events),
158 dinput.LPDIDEVICEOBJECTDATA),
159 ctypes.byref(n_events),
160 0)
161 for event in events[:n_events.value]:
162 index = event.dwOfs // 4
163 self.controls[index]._set_value(event.dwData)
164
165 _i_dinput = None
166
167 def _init_directinput():
168 global _i_dinput
169 if _i_dinput:
170 return
171
172 _i_dinput = dinput.IDirectInput8()
173 module = _kernel32.GetModuleHandleW(None)
174 dinput.DirectInput8Create(module, dinput.DIRECTINPUT_VERSION,
175 dinput.IID_IDirectInput8W,
176 ctypes.byref(_i_dinput), None)
177
178 def get_devices(display=None):
179 _init_directinput()
180 _devices = []
181
182 def _device_enum(device_instance, arg):
183 device = dinput.IDirectInputDevice8()
184 _i_dinput.CreateDevice(device_instance.contents.guidInstance,
185 ctypes.byref(device),
186 None)
187 _devices.append(DirectInputDevice(display,
188 device, device_instance.contents))
189
190 return dinput.DIENUM_CONTINUE
191
192 _i_dinput.EnumDevices(dinput.DI8DEVCLASS_ALL,
193 dinput.LPDIENUMDEVICESCALLBACK(_device_enum),
194 None, dinput.DIEDFL_ATTACHEDONLY)
195 return _devices
196
197 def _create_joystick(device):
198 if device._type in (dinput.DI8DEVTYPE_JOYSTICK,
199 dinput.DI8DEVTYPE_GAMEPAD):
200 return base.Joystick(device)
201
202 def get_joysticks(display=None):
203 return filter(None, [_create_joystick(d) for d in get_devices(display)])
204
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/psychopy/hardware/joystick/pyglet_input/directinput.py b/psychopy/hardware/joystick/pyglet_input/directinput.py
--- a/psychopy/hardware/joystick/pyglet_input/directinput.py
+++ b/psychopy/hardware/joystick/pyglet_input/directinput.py
@@ -162,6 +162,10 @@
index = event.dwOfs // 4
self.controls[index]._set_value(event.dwData)
+ def dispatch_events(self):
+ # standardize the method signature
+ self._dispatch_events()
+
_i_dinput = None
def _init_directinput():
| {"golden_diff": "diff --git a/psychopy/hardware/joystick/pyglet_input/directinput.py b/psychopy/hardware/joystick/pyglet_input/directinput.py\n--- a/psychopy/hardware/joystick/pyglet_input/directinput.py\n+++ b/psychopy/hardware/joystick/pyglet_input/directinput.py\n@@ -162,6 +162,10 @@\n index = event.dwOfs // 4\n self.controls[index]._set_value(event.dwData)\n \n+ def dispatch_events(self):\n+ # standardize the method signature\n+ self._dispatch_events()\n+\n _i_dinput = None\n \n def _init_directinput():\n", "issue": "Inconsistent method signature (or missing hook?)\nWindows XP, Psychopy 1.80.06\n\nwindow.flip calls dispatch_events() for each _eventDispatchers (pyglet winType), but DirectInputDevice (hardware/joystick/pyglet_input/directinput.py) has a different signature for that method: _dispatch_event. Needs to be fixed somewhere.\n\n", "before_files": [{"content": "#!/usr/bin/env python2\n# $Id:$\n\nimport ctypes\n\nimport app\nimport base\nimport pyglet\nfrom pyglet.window import win32\nimport dinput\nfrom pyglet.window.win32 import _kernel32\n\n# These instance names are not defined anywhere, obtained by experiment. The\n# GUID names (which seem to be ideally what are needed) are wrong/missing for\n# most of my devices.\n\n_abs_instance_names = {\n 0: 'x',\n 1: 'y',\n 2: 'z',\n 3: 'rx',\n 4: 'ry',\n 5: 'rz',\n}\n\n_rel_instance_names = {\n 0: 'x',\n 1: 'y',\n 2: 'wheel',\n}\n\n_btn_instance_names = {}\n\ndef _create_control(object_instance):\n raw_name = object_instance.tszName\n type = object_instance.dwType\n instance = dinput.DIDFT_GETINSTANCE(type)\n\n if type & dinput.DIDFT_ABSAXIS:\n name = _abs_instance_names.get(instance)\n control = base.AbsoluteAxis(name, 0, 0xffff, raw_name)\n elif type & dinput.DIDFT_RELAXIS:\n name = _rel_instance_names.get(instance)\n control = base.RelativeAxis(name, raw_name)\n elif type & dinput.DIDFT_BUTTON:\n name = _btn_instance_names.get(instance)\n control = base.Button(name, raw_name)\n elif type & dinput.DIDFT_POV:\n control = base.AbsoluteAxis(base.AbsoluteAxis.HAT,\n 0, 0xffffffff, raw_name)\n else:\n return\n\n control._type = object_instance.dwType\n return control\n\nclass DirectInputDevice(base.Device):\n def __init__(self, display, device, device_instance):\n name = device_instance.tszInstanceName\n super(DirectInputDevice, self).__init__(display, name)\n\n self._type = device_instance.dwDevType & 0xff\n self._subtype = device_instance.dwDevType & 0xff00\n\n self._device = device\n self._init_controls()\n self._set_format()\n\n def _init_controls(self):\n self.controls = []\n self._device.EnumObjects(\n dinput.LPDIENUMDEVICEOBJECTSCALLBACK(self._object_enum),\n None, dinput.DIDFT_ALL)\n\n def _object_enum(self, object_instance, arg):\n control = _create_control(object_instance.contents)\n if control:\n self.controls.append(control)\n return dinput.DIENUM_CONTINUE\n\n def _set_format(self):\n if not self.controls:\n return\n\n object_formats = (dinput.DIOBJECTDATAFORMAT * len(self.controls))()\n offset = 0\n for object_format, control in zip(object_formats, self.controls):\n object_format.dwOfs = offset\n object_format.dwType = control._type\n offset += 4\n\n format = dinput.DIDATAFORMAT()\n format.dwSize = ctypes.sizeof(format)\n format.dwObjSize = ctypes.sizeof(dinput.DIOBJECTDATAFORMAT)\n format.dwFlags = 0\n format.dwDataSize = offset\n format.dwNumObjs = len(object_formats)\n format.rgodf = ctypes.cast(ctypes.pointer(object_formats),\n dinput.LPDIOBJECTDATAFORMAT)\n self._device.SetDataFormat(format)\n\n prop = 
dinput.DIPROPDWORD()\n prop.diph.dwSize = ctypes.sizeof(prop)\n prop.diph.dwHeaderSize = ctypes.sizeof(prop.diph)\n prop.diph.dwObj = 0\n prop.diph.dwHow = dinput.DIPH_DEVICE\n prop.dwData = 64 * ctypes.sizeof(dinput.DIDATAFORMAT)\n self._device.SetProperty(dinput.DIPROP_BUFFERSIZE,\n ctypes.byref(prop.diph))\n\n def open(self, window=None, exclusive=False):\n if not self.controls:\n return\n\n if window is None:\n # Pick any open window, or the shadow window if no windows\n # have been created yet.\n window = pyglet.gl._shadow_window\n for window in app.windows:\n break\n\n flags = dinput.DISCL_BACKGROUND\n if exclusive:\n flags |= dinput.DISCL_EXCLUSIVE\n else:\n flags |= dinput.DISCL_NONEXCLUSIVE\n\n self._wait_object = _kernel32.CreateEventW(None, False, False, None)\n self._device.SetEventNotification(self._wait_object)\n app.platform_event_loop.add_wait_object(self._wait_object,\n self._dispatch_events)\n\n self._device.SetCooperativeLevel(window._hwnd, flags)\n self._device.Acquire()\n\n def close(self):\n if not self.controls:\n return\n\n app.platform_event_loop.remove_wait_object(self._wait_object)\n\n self._device.Unacquire()\n self._device.SetEventNotification(None)\n\n _kernel32.CloseHandle(self._wait_object)\n\n def get_controls(self):\n return self.controls\n\n def _dispatch_events(self):\n if not self.controls:\n return\n\n events = (dinput.DIDEVICEOBJECTDATA * 64)()\n n_events = win32.DWORD(len(events))\n self._device.GetDeviceData(ctypes.sizeof(dinput.DIDEVICEOBJECTDATA),\n ctypes.cast(ctypes.pointer(events),\n dinput.LPDIDEVICEOBJECTDATA),\n ctypes.byref(n_events),\n 0)\n for event in events[:n_events.value]:\n index = event.dwOfs // 4\n self.controls[index]._set_value(event.dwData)\n\n_i_dinput = None\n\ndef _init_directinput():\n global _i_dinput\n if _i_dinput:\n return\n\n _i_dinput = dinput.IDirectInput8()\n module = _kernel32.GetModuleHandleW(None)\n dinput.DirectInput8Create(module, dinput.DIRECTINPUT_VERSION,\n dinput.IID_IDirectInput8W,\n ctypes.byref(_i_dinput), None)\n\ndef get_devices(display=None):\n _init_directinput()\n _devices = []\n\n def _device_enum(device_instance, arg):\n device = dinput.IDirectInputDevice8()\n _i_dinput.CreateDevice(device_instance.contents.guidInstance,\n ctypes.byref(device),\n None)\n _devices.append(DirectInputDevice(display,\n device, device_instance.contents))\n\n return dinput.DIENUM_CONTINUE\n\n _i_dinput.EnumDevices(dinput.DI8DEVCLASS_ALL,\n dinput.LPDIENUMDEVICESCALLBACK(_device_enum),\n None, dinput.DIEDFL_ATTACHEDONLY)\n return _devices\n\ndef _create_joystick(device):\n if device._type in (dinput.DI8DEVTYPE_JOYSTICK,\n dinput.DI8DEVTYPE_GAMEPAD):\n return base.Joystick(device)\n\ndef get_joysticks(display=None):\n return filter(None, [_create_joystick(d) for d in get_devices(display)])\n", "path": "psychopy/hardware/joystick/pyglet_input/directinput.py"}], "after_files": [{"content": "#!/usr/bin/env python2\n# $Id:$\n\nimport ctypes\n\nimport app\nimport base\nimport pyglet\nfrom pyglet.window import win32\nimport dinput\nfrom pyglet.window.win32 import _kernel32\n\n# These instance names are not defined anywhere, obtained by experiment. 
The\n# GUID names (which seem to be ideally what are needed) are wrong/missing for\n# most of my devices.\n\n_abs_instance_names = {\n 0: 'x',\n 1: 'y',\n 2: 'z',\n 3: 'rx',\n 4: 'ry',\n 5: 'rz',\n}\n\n_rel_instance_names = {\n 0: 'x',\n 1: 'y',\n 2: 'wheel',\n}\n\n_btn_instance_names = {}\n\ndef _create_control(object_instance):\n raw_name = object_instance.tszName\n type = object_instance.dwType\n instance = dinput.DIDFT_GETINSTANCE(type)\n\n if type & dinput.DIDFT_ABSAXIS:\n name = _abs_instance_names.get(instance)\n control = base.AbsoluteAxis(name, 0, 0xffff, raw_name)\n elif type & dinput.DIDFT_RELAXIS:\n name = _rel_instance_names.get(instance)\n control = base.RelativeAxis(name, raw_name)\n elif type & dinput.DIDFT_BUTTON:\n name = _btn_instance_names.get(instance)\n control = base.Button(name, raw_name)\n elif type & dinput.DIDFT_POV:\n control = base.AbsoluteAxis(base.AbsoluteAxis.HAT,\n 0, 0xffffffff, raw_name)\n else:\n return\n\n control._type = object_instance.dwType\n return control\n\nclass DirectInputDevice(base.Device):\n def __init__(self, display, device, device_instance):\n name = device_instance.tszInstanceName\n super(DirectInputDevice, self).__init__(display, name)\n\n self._type = device_instance.dwDevType & 0xff\n self._subtype = device_instance.dwDevType & 0xff00\n\n self._device = device\n self._init_controls()\n self._set_format()\n\n def _init_controls(self):\n self.controls = []\n self._device.EnumObjects(\n dinput.LPDIENUMDEVICEOBJECTSCALLBACK(self._object_enum),\n None, dinput.DIDFT_ALL)\n\n def _object_enum(self, object_instance, arg):\n control = _create_control(object_instance.contents)\n if control:\n self.controls.append(control)\n return dinput.DIENUM_CONTINUE\n\n def _set_format(self):\n if not self.controls:\n return\n\n object_formats = (dinput.DIOBJECTDATAFORMAT * len(self.controls))()\n offset = 0\n for object_format, control in zip(object_formats, self.controls):\n object_format.dwOfs = offset\n object_format.dwType = control._type\n offset += 4\n\n format = dinput.DIDATAFORMAT()\n format.dwSize = ctypes.sizeof(format)\n format.dwObjSize = ctypes.sizeof(dinput.DIOBJECTDATAFORMAT)\n format.dwFlags = 0\n format.dwDataSize = offset\n format.dwNumObjs = len(object_formats)\n format.rgodf = ctypes.cast(ctypes.pointer(object_formats),\n dinput.LPDIOBJECTDATAFORMAT)\n self._device.SetDataFormat(format)\n\n prop = dinput.DIPROPDWORD()\n prop.diph.dwSize = ctypes.sizeof(prop)\n prop.diph.dwHeaderSize = ctypes.sizeof(prop.diph)\n prop.diph.dwObj = 0\n prop.diph.dwHow = dinput.DIPH_DEVICE\n prop.dwData = 64 * ctypes.sizeof(dinput.DIDATAFORMAT)\n self._device.SetProperty(dinput.DIPROP_BUFFERSIZE,\n ctypes.byref(prop.diph))\n\n def open(self, window=None, exclusive=False):\n if not self.controls:\n return\n\n if window is None:\n # Pick any open window, or the shadow window if no windows\n # have been created yet.\n window = pyglet.gl._shadow_window\n for window in app.windows:\n break\n\n flags = dinput.DISCL_BACKGROUND\n if exclusive:\n flags |= dinput.DISCL_EXCLUSIVE\n else:\n flags |= dinput.DISCL_NONEXCLUSIVE\n\n self._wait_object = _kernel32.CreateEventW(None, False, False, None)\n self._device.SetEventNotification(self._wait_object)\n app.platform_event_loop.add_wait_object(self._wait_object,\n self._dispatch_events)\n\n self._device.SetCooperativeLevel(window._hwnd, flags)\n self._device.Acquire()\n\n def close(self):\n if not self.controls:\n return\n\n app.platform_event_loop.remove_wait_object(self._wait_object)\n\n self._device.Unacquire()\n 
self._device.SetEventNotification(None)\n\n _kernel32.CloseHandle(self._wait_object)\n\n def get_controls(self):\n return self.controls\n\n def _dispatch_events(self):\n if not self.controls:\n return\n\n events = (dinput.DIDEVICEOBJECTDATA * 64)()\n n_events = win32.DWORD(len(events))\n self._device.GetDeviceData(ctypes.sizeof(dinput.DIDEVICEOBJECTDATA),\n ctypes.cast(ctypes.pointer(events),\n dinput.LPDIDEVICEOBJECTDATA),\n ctypes.byref(n_events),\n 0)\n for event in events[:n_events.value]:\n index = event.dwOfs // 4\n self.controls[index]._set_value(event.dwData)\n\n def dispatch_events(self):\n # standardize the method signature\n self._dispatch_events()\n\n_i_dinput = None\n\ndef _init_directinput():\n global _i_dinput\n if _i_dinput:\n return\n\n _i_dinput = dinput.IDirectInput8()\n module = _kernel32.GetModuleHandleW(None)\n dinput.DirectInput8Create(module, dinput.DIRECTINPUT_VERSION,\n dinput.IID_IDirectInput8W,\n ctypes.byref(_i_dinput), None)\n\ndef get_devices(display=None):\n _init_directinput()\n _devices = []\n\n def _device_enum(device_instance, arg):\n device = dinput.IDirectInputDevice8()\n _i_dinput.CreateDevice(device_instance.contents.guidInstance,\n ctypes.byref(device),\n None)\n _devices.append(DirectInputDevice(display,\n device, device_instance.contents))\n\n return dinput.DIENUM_CONTINUE\n\n _i_dinput.EnumDevices(dinput.DI8DEVCLASS_ALL,\n dinput.LPDIENUMDEVICESCALLBACK(_device_enum),\n None, dinput.DIEDFL_ATTACHEDONLY)\n return _devices\n\ndef _create_joystick(device):\n if device._type in (dinput.DI8DEVTYPE_JOYSTICK,\n dinput.DI8DEVTYPE_GAMEPAD):\n return base.Joystick(device)\n\ndef get_joysticks(display=None):\n return filter(None, [_create_joystick(d) for d in get_devices(display)])\n", "path": "psychopy/hardware/joystick/pyglet_input/directinput.py"}]} | 2,394 | 147 |
gh_patches_debug_4607 | rasdani/github-patches | git_diff | CTFd__CTFd-1726 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect update alert in Admin panel
<!--
If this is a bug report please fill out the template below.
If this is a feature request please describe the behavior that you'd like to see.
-->
**Environment**:
- CTFd Version/Commit: 3.1.1
- Operating System: Ubuntu 20.4
- Web Browser and Version: Chrome 85
**What happened?**
The admin panel shows an alert: "A new CTFd version is available!", which links to "https://github.com/CTFd/CTFd/releases/tag/2.4.2". I encountered the issue with version 3.0.2 as well. After a complete reinstall and upgrade to version 3.1.1, the problem persisted.
**What did you expect to happen?**
I expected no alert, as my CTFd version is the newest, and certainly newer than 2.4.2.
**How to reproduce your issue**
Go to the admin pages.
**Any associated stack traces or error logs**
No
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/utils/updates/__init__.py`
Content:
```
1 import sys
2 import time
3 from distutils.version import StrictVersion
4 from platform import python_version
5
6 import requests
7 from flask import current_app as app
8
9 from CTFd.models import Challenges, Teams, Users, db
10 from CTFd.utils import get_app_config, get_config, set_config
11 from CTFd.utils.config import is_setup
12 from CTFd.utils.crypto import sha256
13
14
15 def update_check(force=False):
16 """
17 Makes a request to ctfd.io to check if there is a new version of CTFd available. The service is provided in return
18 for users opting in to anonymous usage data collection. Users can opt-out of update checks by specifying
19 UPDATE_CHECK = False in config.py
20
21 :param force:
22 :return:
23 """
24 # If UPDATE_CHECK is disabled don't check for updates at all.
25 if app.config.get("UPDATE_CHECK") is False:
26 return
27
28 # Don't do an update check if not setup
29 if is_setup() is False:
30 return
31
32 # Get when we should check for updates next.
33 next_update_check = get_config("next_update_check") or 0
34
35 # If we have passed our saved time or we are forcing we should check.
36 update = (next_update_check < time.time()) or force
37
38 if update:
39 try:
40 name = str(get_config("ctf_name")) or ""
41 params = {
42 "ctf_id": sha256(name),
43 "current": app.VERSION,
44 "python_version_raw": sys.hexversion,
45 "python_version": python_version(),
46 "db_driver": db.session.bind.dialect.name,
47 "challenge_count": Challenges.query.count(),
48 "user_mode": get_config("user_mode"),
49 "user_count": Users.query.count(),
50 "team_count": Teams.query.count(),
51 "theme": get_config("ctf_theme"),
52 "upload_provider": get_app_config("UPLOAD_PROVIDER"),
53 "channel": app.CHANNEL,
54 }
55 check = requests.get(
56 "https://versioning.ctfd.io/check", params=params, timeout=0.1
57 ).json()
58 except requests.exceptions.RequestException:
59 pass
60 except ValueError:
61 pass
62 else:
63 try:
64 latest = check["resource"]["tag"]
65 html_url = check["resource"]["html_url"]
66 if StrictVersion(latest) > StrictVersion(app.VERSION):
67 set_config("version_latest", html_url)
68 elif StrictVersion(latest) <= StrictVersion(app.VERSION):
69 set_config("version_latest", None)
70 next_update_check_time = check["resource"].get(
71 "next", int(time.time() + 43200)
72 )
73 set_config("next_update_check", next_update_check_time)
74 except KeyError:
75 set_config("version_latest", None)
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/utils/updates/__init__.py b/CTFd/utils/updates/__init__.py
--- a/CTFd/utils/updates/__init__.py
+++ b/CTFd/utils/updates/__init__.py
@@ -53,7 +53,7 @@
"channel": app.CHANNEL,
}
check = requests.get(
- "https://versioning.ctfd.io/check", params=params, timeout=0.1
+ "https://versioning.ctfd.io/check", params=params, timeout=3
).json()
except requests.exceptions.RequestException:
pass
| {"golden_diff": "diff --git a/CTFd/utils/updates/__init__.py b/CTFd/utils/updates/__init__.py\n--- a/CTFd/utils/updates/__init__.py\n+++ b/CTFd/utils/updates/__init__.py\n@@ -53,7 +53,7 @@\n \"channel\": app.CHANNEL,\n }\n check = requests.get(\n- \"https://versioning.ctfd.io/check\", params=params, timeout=0.1\n+ \"https://versioning.ctfd.io/check\", params=params, timeout=3\n ).json()\n except requests.exceptions.RequestException:\n pass\n", "issue": "Incorrect update alert in Admin panel\n<!--\r\nIf this is a bug report please fill out the template below.\r\n\r\nIf this is a feature request please describe the behavior that you'd like to see.\r\n-->\r\n\r\n**Environment**:\r\n\r\n- CTFd Version/Commit: 3.1.1\r\n- Operating System: Ubuntu 20.4\r\n- Web Browser and Version: Chrome 85\r\n\r\n**What happened?**\r\nThe admin panel shows an alert: \"A new CTFd version is available!\", which links to \"https://github.com/CTFd/CTFd/releases/tag/2.4.2\". I encountered the issue with version 3.0.2. as well. After complete reinstall and upgrade to version 3.1.1 the problem persisted\r\n\r\n**What did you expect to happen?**\r\nI expected no alert, as my CTFd version is the newest, and certainly newer than 2.4.2.\r\n\r\n**How to reproduce your issue**\r\nGo to the admin pages.\r\n\r\n**Any associated stack traces or error logs**\r\nNo\n", "before_files": [{"content": "import sys\nimport time\nfrom distutils.version import StrictVersion\nfrom platform import python_version\n\nimport requests\nfrom flask import current_app as app\n\nfrom CTFd.models import Challenges, Teams, Users, db\nfrom CTFd.utils import get_app_config, get_config, set_config\nfrom CTFd.utils.config import is_setup\nfrom CTFd.utils.crypto import sha256\n\n\ndef update_check(force=False):\n \"\"\"\n Makes a request to ctfd.io to check if there is a new version of CTFd available. The service is provided in return\n for users opting in to anonymous usage data collection. 
Users can opt-out of update checks by specifying\n UPDATE_CHECK = False in config.py\n\n :param force:\n :return:\n \"\"\"\n # If UPDATE_CHECK is disabled don't check for updates at all.\n if app.config.get(\"UPDATE_CHECK\") is False:\n return\n\n # Don't do an update check if not setup\n if is_setup() is False:\n return\n\n # Get when we should check for updates next.\n next_update_check = get_config(\"next_update_check\") or 0\n\n # If we have passed our saved time or we are forcing we should check.\n update = (next_update_check < time.time()) or force\n\n if update:\n try:\n name = str(get_config(\"ctf_name\")) or \"\"\n params = {\n \"ctf_id\": sha256(name),\n \"current\": app.VERSION,\n \"python_version_raw\": sys.hexversion,\n \"python_version\": python_version(),\n \"db_driver\": db.session.bind.dialect.name,\n \"challenge_count\": Challenges.query.count(),\n \"user_mode\": get_config(\"user_mode\"),\n \"user_count\": Users.query.count(),\n \"team_count\": Teams.query.count(),\n \"theme\": get_config(\"ctf_theme\"),\n \"upload_provider\": get_app_config(\"UPLOAD_PROVIDER\"),\n \"channel\": app.CHANNEL,\n }\n check = requests.get(\n \"https://versioning.ctfd.io/check\", params=params, timeout=0.1\n ).json()\n except requests.exceptions.RequestException:\n pass\n except ValueError:\n pass\n else:\n try:\n latest = check[\"resource\"][\"tag\"]\n html_url = check[\"resource\"][\"html_url\"]\n if StrictVersion(latest) > StrictVersion(app.VERSION):\n set_config(\"version_latest\", html_url)\n elif StrictVersion(latest) <= StrictVersion(app.VERSION):\n set_config(\"version_latest\", None)\n next_update_check_time = check[\"resource\"].get(\n \"next\", int(time.time() + 43200)\n )\n set_config(\"next_update_check\", next_update_check_time)\n except KeyError:\n set_config(\"version_latest\", None)\n", "path": "CTFd/utils/updates/__init__.py"}], "after_files": [{"content": "import sys\nimport time\nfrom distutils.version import StrictVersion\nfrom platform import python_version\n\nimport requests\nfrom flask import current_app as app\n\nfrom CTFd.models import Challenges, Teams, Users, db\nfrom CTFd.utils import get_app_config, get_config, set_config\nfrom CTFd.utils.config import is_setup\nfrom CTFd.utils.crypto import sha256\n\n\ndef update_check(force=False):\n \"\"\"\n Makes a request to ctfd.io to check if there is a new version of CTFd available. The service is provided in return\n for users opting in to anonymous usage data collection. 
Users can opt-out of update checks by specifying\n UPDATE_CHECK = False in config.py\n\n :param force:\n :return:\n \"\"\"\n # If UPDATE_CHECK is disabled don't check for updates at all.\n if app.config.get(\"UPDATE_CHECK\") is False:\n return\n\n # Don't do an update check if not setup\n if is_setup() is False:\n return\n\n # Get when we should check for updates next.\n next_update_check = get_config(\"next_update_check\") or 0\n\n # If we have passed our saved time or we are forcing we should check.\n update = (next_update_check < time.time()) or force\n\n if update:\n try:\n name = str(get_config(\"ctf_name\")) or \"\"\n params = {\n \"ctf_id\": sha256(name),\n \"current\": app.VERSION,\n \"python_version_raw\": sys.hexversion,\n \"python_version\": python_version(),\n \"db_driver\": db.session.bind.dialect.name,\n \"challenge_count\": Challenges.query.count(),\n \"user_mode\": get_config(\"user_mode\"),\n \"user_count\": Users.query.count(),\n \"team_count\": Teams.query.count(),\n \"theme\": get_config(\"ctf_theme\"),\n \"upload_provider\": get_app_config(\"UPLOAD_PROVIDER\"),\n \"channel\": app.CHANNEL,\n }\n check = requests.get(\n \"https://versioning.ctfd.io/check\", params=params, timeout=3\n ).json()\n except requests.exceptions.RequestException:\n pass\n except ValueError:\n pass\n else:\n try:\n latest = check[\"resource\"][\"tag\"]\n html_url = check[\"resource\"][\"html_url\"]\n if StrictVersion(latest) > StrictVersion(app.VERSION):\n set_config(\"version_latest\", html_url)\n elif StrictVersion(latest) <= StrictVersion(app.VERSION):\n set_config(\"version_latest\", None)\n next_update_check_time = check[\"resource\"].get(\n \"next\", int(time.time() + 43200)\n )\n set_config(\"next_update_check\", next_update_check_time)\n except KeyError:\n set_config(\"version_latest\", None)\n", "path": "CTFd/utils/updates/__init__.py"}]} | 1,224 | 134 |
gh_patches_debug_28226 | rasdani/github-patches | git_diff | pyca__cryptography-7020 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Invalid ecdsa certificate can trigger "cryptography.exceptions.InternalError: Unknown OpenSSL error."
Trying to access the public numbers of an ECDSA key whose public key is "0" (which I believe is the encoding for the point at infinity; it is not a valid key, but one can of course encode such a key) causes an InternalError. Based on the error output I believe this is not supposed to happen.
Such a key is part of OpenSSL's test suite:
https://github.com/openssl/openssl/blob/master/test/recipes/91-test_pkey_check_data/ec_p256_bad_0.pem
reproducer:
```
#!/usr/bin/python3
from cryptography.hazmat.primitives.serialization import load_pem_private_key
privkey = """-----BEGIN PRIVATE KEY-----
MEECAQAwEwYHKoZIzj0CAQYIKoZIzj0DAQcEJzAlAgEBBCD/////AAAAAP//////
////vOb6racXnoTzucrC/GMlUQ==
-----END PRIVATE KEY-----"""
pk = load_pem_private_key(privkey.encode(), password=None)
print(pk.public_key().public_numbers())
```
Exception message:
```
cryptography.exceptions.InternalError: Unknown OpenSSL error. This error is commonly encountered when another library is not cleaning up the OpenSSL error stack. If you are using cryptography with another library that uses OpenSSL try disabling it before reporting a bug. Otherwise please file an issue at https://github.com/pyca/cryptography/issues with information on how to reproduce this. ([_OpenSSLErrorWithText(code=269635690, lib=16, reason=106, reason_text=b'error:1012506A:elliptic curve routines:EC_POINT_get_affine_coordinates:point at infinity')])
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cryptography/hazmat/backends/openssl/ec.py`
Content:
```
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 import typing
6
7 from cryptography.exceptions import (
8 InvalidSignature,
9 UnsupportedAlgorithm,
10 _Reasons,
11 )
12 from cryptography.hazmat.backends.openssl.utils import (
13 _calculate_digest_and_algorithm,
14 _evp_pkey_derive,
15 )
16 from cryptography.hazmat.primitives import serialization
17 from cryptography.hazmat.primitives.asymmetric import ec
18
19 if typing.TYPE_CHECKING:
20 from cryptography.hazmat.backends.openssl.backend import Backend
21
22
23 def _check_signature_algorithm(
24 signature_algorithm: ec.EllipticCurveSignatureAlgorithm,
25 ) -> None:
26 if not isinstance(signature_algorithm, ec.ECDSA):
27 raise UnsupportedAlgorithm(
28 "Unsupported elliptic curve signature algorithm.",
29 _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM,
30 )
31
32
33 def _ec_key_curve_sn(backend: "Backend", ec_key) -> str:
34 group = backend._lib.EC_KEY_get0_group(ec_key)
35 backend.openssl_assert(group != backend._ffi.NULL)
36
37 nid = backend._lib.EC_GROUP_get_curve_name(group)
38 # The following check is to find EC keys with unnamed curves and raise
39 # an error for now.
40 if nid == backend._lib.NID_undef:
41 raise ValueError(
42 "ECDSA keys with explicit parameters are unsupported at this time"
43 )
44
45 # This is like the above check, but it also catches the case where you
46 # explicitly encoded a curve with the same parameters as a named curve.
47 # Don't do that.
48 if (
49 not backend._lib.CRYPTOGRAPHY_IS_LIBRESSL
50 and backend._lib.EC_GROUP_get_asn1_flag(group) == 0
51 ):
52 raise ValueError(
53 "ECDSA keys with explicit parameters are unsupported at this time"
54 )
55
56 curve_name = backend._lib.OBJ_nid2sn(nid)
57 backend.openssl_assert(curve_name != backend._ffi.NULL)
58
59 sn = backend._ffi.string(curve_name).decode("ascii")
60 return sn
61
62
63 def _mark_asn1_named_ec_curve(backend: "Backend", ec_cdata):
64 """
65 Set the named curve flag on the EC_KEY. This causes OpenSSL to
66 serialize EC keys along with their curve OID which makes
67 deserialization easier.
68 """
69
70 backend._lib.EC_KEY_set_asn1_flag(
71 ec_cdata, backend._lib.OPENSSL_EC_NAMED_CURVE
72 )
73
74
75 def _sn_to_elliptic_curve(backend: "Backend", sn: str) -> ec.EllipticCurve:
76 try:
77 return ec._CURVE_TYPES[sn]()
78 except KeyError:
79 raise UnsupportedAlgorithm(
80 "{} is not a supported elliptic curve".format(sn),
81 _Reasons.UNSUPPORTED_ELLIPTIC_CURVE,
82 )
83
84
85 def _ecdsa_sig_sign(
86 backend: "Backend", private_key: "_EllipticCurvePrivateKey", data: bytes
87 ) -> bytes:
88 max_size = backend._lib.ECDSA_size(private_key._ec_key)
89 backend.openssl_assert(max_size > 0)
90
91 sigbuf = backend._ffi.new("unsigned char[]", max_size)
92 siglen_ptr = backend._ffi.new("unsigned int[]", 1)
93 res = backend._lib.ECDSA_sign(
94 0, data, len(data), sigbuf, siglen_ptr, private_key._ec_key
95 )
96 backend.openssl_assert(res == 1)
97 return backend._ffi.buffer(sigbuf)[: siglen_ptr[0]]
98
99
100 def _ecdsa_sig_verify(
101 backend: "Backend",
102 public_key: "_EllipticCurvePublicKey",
103 signature: bytes,
104 data: bytes,
105 ) -> None:
106 res = backend._lib.ECDSA_verify(
107 0, data, len(data), signature, len(signature), public_key._ec_key
108 )
109 if res != 1:
110 backend._consume_errors()
111 raise InvalidSignature
112
113
114 class _EllipticCurvePrivateKey(ec.EllipticCurvePrivateKey):
115 def __init__(self, backend: "Backend", ec_key_cdata, evp_pkey):
116 self._backend = backend
117 self._ec_key = ec_key_cdata
118 self._evp_pkey = evp_pkey
119
120 sn = _ec_key_curve_sn(backend, ec_key_cdata)
121 self._curve = _sn_to_elliptic_curve(backend, sn)
122 _mark_asn1_named_ec_curve(backend, ec_key_cdata)
123
124 @property
125 def curve(self) -> ec.EllipticCurve:
126 return self._curve
127
128 @property
129 def key_size(self) -> int:
130 return self.curve.key_size
131
132 def exchange(
133 self, algorithm: ec.ECDH, peer_public_key: ec.EllipticCurvePublicKey
134 ) -> bytes:
135 if not (
136 self._backend.elliptic_curve_exchange_algorithm_supported(
137 algorithm, self.curve
138 )
139 ):
140 raise UnsupportedAlgorithm(
141 "This backend does not support the ECDH algorithm.",
142 _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,
143 )
144
145 if peer_public_key.curve.name != self.curve.name:
146 raise ValueError(
147 "peer_public_key and self are not on the same curve"
148 )
149
150 return _evp_pkey_derive(self._backend, self._evp_pkey, peer_public_key)
151
152 def public_key(self) -> ec.EllipticCurvePublicKey:
153 group = self._backend._lib.EC_KEY_get0_group(self._ec_key)
154 self._backend.openssl_assert(group != self._backend._ffi.NULL)
155
156 curve_nid = self._backend._lib.EC_GROUP_get_curve_name(group)
157 public_ec_key = self._backend._ec_key_new_by_curve_nid(curve_nid)
158
159 point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key)
160 self._backend.openssl_assert(point != self._backend._ffi.NULL)
161
162 res = self._backend._lib.EC_KEY_set_public_key(public_ec_key, point)
163 self._backend.openssl_assert(res == 1)
164
165 evp_pkey = self._backend._ec_cdata_to_evp_pkey(public_ec_key)
166
167 return _EllipticCurvePublicKey(self._backend, public_ec_key, evp_pkey)
168
169 def private_numbers(self) -> ec.EllipticCurvePrivateNumbers:
170 bn = self._backend._lib.EC_KEY_get0_private_key(self._ec_key)
171 private_value = self._backend._bn_to_int(bn)
172 return ec.EllipticCurvePrivateNumbers(
173 private_value=private_value,
174 public_numbers=self.public_key().public_numbers(),
175 )
176
177 def private_bytes(
178 self,
179 encoding: serialization.Encoding,
180 format: serialization.PrivateFormat,
181 encryption_algorithm: serialization.KeySerializationEncryption,
182 ) -> bytes:
183 return self._backend._private_key_bytes(
184 encoding,
185 format,
186 encryption_algorithm,
187 self,
188 self._evp_pkey,
189 self._ec_key,
190 )
191
192 def sign(
193 self,
194 data: bytes,
195 signature_algorithm: ec.EllipticCurveSignatureAlgorithm,
196 ) -> bytes:
197 _check_signature_algorithm(signature_algorithm)
198 data, _ = _calculate_digest_and_algorithm(
199 data,
200 signature_algorithm.algorithm,
201 )
202 return _ecdsa_sig_sign(self._backend, self, data)
203
204
205 class _EllipticCurvePublicKey(ec.EllipticCurvePublicKey):
206 def __init__(self, backend: "Backend", ec_key_cdata, evp_pkey):
207 self._backend = backend
208 self._ec_key = ec_key_cdata
209 self._evp_pkey = evp_pkey
210
211 sn = _ec_key_curve_sn(backend, ec_key_cdata)
212 self._curve = _sn_to_elliptic_curve(backend, sn)
213 _mark_asn1_named_ec_curve(backend, ec_key_cdata)
214
215 @property
216 def curve(self) -> ec.EllipticCurve:
217 return self._curve
218
219 @property
220 def key_size(self) -> int:
221 return self.curve.key_size
222
223 def public_numbers(self) -> ec.EllipticCurvePublicNumbers:
224 get_func, group = self._backend._ec_key_determine_group_get_func(
225 self._ec_key
226 )
227 point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key)
228 self._backend.openssl_assert(point != self._backend._ffi.NULL)
229
230 with self._backend._tmp_bn_ctx() as bn_ctx:
231 bn_x = self._backend._lib.BN_CTX_get(bn_ctx)
232 bn_y = self._backend._lib.BN_CTX_get(bn_ctx)
233
234 res = get_func(group, point, bn_x, bn_y, bn_ctx)
235 self._backend.openssl_assert(res == 1)
236
237 x = self._backend._bn_to_int(bn_x)
238 y = self._backend._bn_to_int(bn_y)
239
240 return ec.EllipticCurvePublicNumbers(x=x, y=y, curve=self._curve)
241
242 def _encode_point(self, format: serialization.PublicFormat) -> bytes:
243 if format is serialization.PublicFormat.CompressedPoint:
244 conversion = self._backend._lib.POINT_CONVERSION_COMPRESSED
245 else:
246 assert format is serialization.PublicFormat.UncompressedPoint
247 conversion = self._backend._lib.POINT_CONVERSION_UNCOMPRESSED
248
249 group = self._backend._lib.EC_KEY_get0_group(self._ec_key)
250 self._backend.openssl_assert(group != self._backend._ffi.NULL)
251 point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key)
252 self._backend.openssl_assert(point != self._backend._ffi.NULL)
253 with self._backend._tmp_bn_ctx() as bn_ctx:
254 buflen = self._backend._lib.EC_POINT_point2oct(
255 group, point, conversion, self._backend._ffi.NULL, 0, bn_ctx
256 )
257 self._backend.openssl_assert(buflen > 0)
258 buf = self._backend._ffi.new("char[]", buflen)
259 res = self._backend._lib.EC_POINT_point2oct(
260 group, point, conversion, buf, buflen, bn_ctx
261 )
262 self._backend.openssl_assert(buflen == res)
263
264 return self._backend._ffi.buffer(buf)[:]
265
266 def public_bytes(
267 self,
268 encoding: serialization.Encoding,
269 format: serialization.PublicFormat,
270 ) -> bytes:
271 if (
272 encoding is serialization.Encoding.X962
273 or format is serialization.PublicFormat.CompressedPoint
274 or format is serialization.PublicFormat.UncompressedPoint
275 ):
276 if encoding is not serialization.Encoding.X962 or format not in (
277 serialization.PublicFormat.CompressedPoint,
278 serialization.PublicFormat.UncompressedPoint,
279 ):
280 raise ValueError(
281 "X962 encoding must be used with CompressedPoint or "
282 "UncompressedPoint format"
283 )
284
285 return self._encode_point(format)
286 else:
287 return self._backend._public_key_bytes(
288 encoding, format, self, self._evp_pkey, None
289 )
290
291 def verify(
292 self,
293 signature: bytes,
294 data: bytes,
295 signature_algorithm: ec.EllipticCurveSignatureAlgorithm,
296 ) -> None:
297 _check_signature_algorithm(signature_algorithm)
298 data, _ = _calculate_digest_and_algorithm(
299 data,
300 signature_algorithm.algorithm,
301 )
302 _ecdsa_sig_verify(self._backend, self, signature, data)
303
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/cryptography/hazmat/backends/openssl/ec.py b/src/cryptography/hazmat/backends/openssl/ec.py
--- a/src/cryptography/hazmat/backends/openssl/ec.py
+++ b/src/cryptography/hazmat/backends/openssl/ec.py
@@ -72,6 +72,17 @@
)
+def _check_key_infinity(backend: "Backend", ec_cdata) -> None:
+ point = backend._lib.EC_KEY_get0_public_key(ec_cdata)
+ backend.openssl_assert(point != backend._ffi.NULL)
+ group = backend._lib.EC_KEY_get0_group(ec_cdata)
+ backend.openssl_assert(group != backend._ffi.NULL)
+ if backend._lib.EC_POINT_is_at_infinity(group, point):
+ raise ValueError(
+ "Cannot load an EC public key where the point is at infinity"
+ )
+
+
def _sn_to_elliptic_curve(backend: "Backend", sn: str) -> ec.EllipticCurve:
try:
return ec._CURVE_TYPES[sn]()
@@ -120,6 +131,7 @@
sn = _ec_key_curve_sn(backend, ec_key_cdata)
self._curve = _sn_to_elliptic_curve(backend, sn)
_mark_asn1_named_ec_curve(backend, ec_key_cdata)
+ _check_key_infinity(backend, ec_key_cdata)
@property
def curve(self) -> ec.EllipticCurve:
@@ -211,6 +223,7 @@
sn = _ec_key_curve_sn(backend, ec_key_cdata)
self._curve = _sn_to_elliptic_curve(backend, sn)
_mark_asn1_named_ec_curve(backend, ec_key_cdata)
+ _check_key_infinity(backend, ec_key_cdata)
@property
def curve(self) -> ec.EllipticCurve:
| {"golden_diff": "diff --git a/src/cryptography/hazmat/backends/openssl/ec.py b/src/cryptography/hazmat/backends/openssl/ec.py\n--- a/src/cryptography/hazmat/backends/openssl/ec.py\n+++ b/src/cryptography/hazmat/backends/openssl/ec.py\n@@ -72,6 +72,17 @@\n )\n \n \n+def _check_key_infinity(backend: \"Backend\", ec_cdata) -> None:\n+ point = backend._lib.EC_KEY_get0_public_key(ec_cdata)\n+ backend.openssl_assert(point != backend._ffi.NULL)\n+ group = backend._lib.EC_KEY_get0_group(ec_cdata)\n+ backend.openssl_assert(group != backend._ffi.NULL)\n+ if backend._lib.EC_POINT_is_at_infinity(group, point):\n+ raise ValueError(\n+ \"Cannot load an EC public key where the point is at infinity\"\n+ )\n+\n+\n def _sn_to_elliptic_curve(backend: \"Backend\", sn: str) -> ec.EllipticCurve:\n try:\n return ec._CURVE_TYPES[sn]()\n@@ -120,6 +131,7 @@\n sn = _ec_key_curve_sn(backend, ec_key_cdata)\n self._curve = _sn_to_elliptic_curve(backend, sn)\n _mark_asn1_named_ec_curve(backend, ec_key_cdata)\n+ _check_key_infinity(backend, ec_key_cdata)\n \n @property\n def curve(self) -> ec.EllipticCurve:\n@@ -211,6 +223,7 @@\n sn = _ec_key_curve_sn(backend, ec_key_cdata)\n self._curve = _sn_to_elliptic_curve(backend, sn)\n _mark_asn1_named_ec_curve(backend, ec_key_cdata)\n+ _check_key_infinity(backend, ec_key_cdata)\n \n @property\n def curve(self) -> ec.EllipticCurve:\n", "issue": "Invalid ecdsa certificate can trigger \"cryptography.exceptions.InternalError: Unknown OpenSSL error.\"\nTrying to access the public numbers of an ECDSA key where the public key is \"0\" (which I believe is the encoding for point at infinity, it's not a valid key, but one can of course encode such a key) causes an InternalError. Based on the error output I believe this is not supposed to happen.\r\n\r\nSuch a key is part of OpenSSL's test suite:\r\nhttps://github.com/openssl/openssl/blob/master/test/recipes/91-test_pkey_check_data/ec_p256_bad_0.pem\r\n\r\nreproducer:\r\n```\r\n#!/usr/bin/python3\r\n\r\nfrom cryptography.hazmat.primitives.serialization import load_pem_private_key\r\n\r\nprivkey = \"\"\"-----BEGIN PRIVATE KEY-----\r\nMEECAQAwEwYHKoZIzj0CAQYIKoZIzj0DAQcEJzAlAgEBBCD/////AAAAAP//////\r\n////vOb6racXnoTzucrC/GMlUQ==\r\n-----END PRIVATE KEY-----\"\"\"\r\n\r\npk = load_pem_private_key(privkey.encode(), password=None)\r\n\r\nprint(pk.public_key().public_numbers())\r\n```\r\n\r\nException message:\r\n```\r\ncryptography.exceptions.InternalError: Unknown OpenSSL error. This error is commonly encountered when another library is not cleaning up the OpenSSL error stack. If you are using cryptography with another library that uses OpenSSL try disabling it before reporting a bug. Otherwise please file an issue at https://github.com/pyca/cryptography/issues with information on how to reproduce this. ([_OpenSSLErrorWithText(code=269635690, lib=16, reason=106, reason_text=b'error:1012506A:elliptic curve routines:EC_POINT_get_affine_coordinates:point at infinity')])\r\n```\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nimport typing\n\nfrom cryptography.exceptions import (\n InvalidSignature,\n UnsupportedAlgorithm,\n _Reasons,\n)\nfrom cryptography.hazmat.backends.openssl.utils import (\n _calculate_digest_and_algorithm,\n _evp_pkey_derive,\n)\nfrom cryptography.hazmat.primitives import serialization\nfrom cryptography.hazmat.primitives.asymmetric import ec\n\nif typing.TYPE_CHECKING:\n from cryptography.hazmat.backends.openssl.backend import Backend\n\n\ndef _check_signature_algorithm(\n signature_algorithm: ec.EllipticCurveSignatureAlgorithm,\n) -> None:\n if not isinstance(signature_algorithm, ec.ECDSA):\n raise UnsupportedAlgorithm(\n \"Unsupported elliptic curve signature algorithm.\",\n _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM,\n )\n\n\ndef _ec_key_curve_sn(backend: \"Backend\", ec_key) -> str:\n group = backend._lib.EC_KEY_get0_group(ec_key)\n backend.openssl_assert(group != backend._ffi.NULL)\n\n nid = backend._lib.EC_GROUP_get_curve_name(group)\n # The following check is to find EC keys with unnamed curves and raise\n # an error for now.\n if nid == backend._lib.NID_undef:\n raise ValueError(\n \"ECDSA keys with explicit parameters are unsupported at this time\"\n )\n\n # This is like the above check, but it also catches the case where you\n # explicitly encoded a curve with the same parameters as a named curve.\n # Don't do that.\n if (\n not backend._lib.CRYPTOGRAPHY_IS_LIBRESSL\n and backend._lib.EC_GROUP_get_asn1_flag(group) == 0\n ):\n raise ValueError(\n \"ECDSA keys with explicit parameters are unsupported at this time\"\n )\n\n curve_name = backend._lib.OBJ_nid2sn(nid)\n backend.openssl_assert(curve_name != backend._ffi.NULL)\n\n sn = backend._ffi.string(curve_name).decode(\"ascii\")\n return sn\n\n\ndef _mark_asn1_named_ec_curve(backend: \"Backend\", ec_cdata):\n \"\"\"\n Set the named curve flag on the EC_KEY. 
This causes OpenSSL to\n serialize EC keys along with their curve OID which makes\n deserialization easier.\n \"\"\"\n\n backend._lib.EC_KEY_set_asn1_flag(\n ec_cdata, backend._lib.OPENSSL_EC_NAMED_CURVE\n )\n\n\ndef _sn_to_elliptic_curve(backend: \"Backend\", sn: str) -> ec.EllipticCurve:\n try:\n return ec._CURVE_TYPES[sn]()\n except KeyError:\n raise UnsupportedAlgorithm(\n \"{} is not a supported elliptic curve\".format(sn),\n _Reasons.UNSUPPORTED_ELLIPTIC_CURVE,\n )\n\n\ndef _ecdsa_sig_sign(\n backend: \"Backend\", private_key: \"_EllipticCurvePrivateKey\", data: bytes\n) -> bytes:\n max_size = backend._lib.ECDSA_size(private_key._ec_key)\n backend.openssl_assert(max_size > 0)\n\n sigbuf = backend._ffi.new(\"unsigned char[]\", max_size)\n siglen_ptr = backend._ffi.new(\"unsigned int[]\", 1)\n res = backend._lib.ECDSA_sign(\n 0, data, len(data), sigbuf, siglen_ptr, private_key._ec_key\n )\n backend.openssl_assert(res == 1)\n return backend._ffi.buffer(sigbuf)[: siglen_ptr[0]]\n\n\ndef _ecdsa_sig_verify(\n backend: \"Backend\",\n public_key: \"_EllipticCurvePublicKey\",\n signature: bytes,\n data: bytes,\n) -> None:\n res = backend._lib.ECDSA_verify(\n 0, data, len(data), signature, len(signature), public_key._ec_key\n )\n if res != 1:\n backend._consume_errors()\n raise InvalidSignature\n\n\nclass _EllipticCurvePrivateKey(ec.EllipticCurvePrivateKey):\n def __init__(self, backend: \"Backend\", ec_key_cdata, evp_pkey):\n self._backend = backend\n self._ec_key = ec_key_cdata\n self._evp_pkey = evp_pkey\n\n sn = _ec_key_curve_sn(backend, ec_key_cdata)\n self._curve = _sn_to_elliptic_curve(backend, sn)\n _mark_asn1_named_ec_curve(backend, ec_key_cdata)\n\n @property\n def curve(self) -> ec.EllipticCurve:\n return self._curve\n\n @property\n def key_size(self) -> int:\n return self.curve.key_size\n\n def exchange(\n self, algorithm: ec.ECDH, peer_public_key: ec.EllipticCurvePublicKey\n ) -> bytes:\n if not (\n self._backend.elliptic_curve_exchange_algorithm_supported(\n algorithm, self.curve\n )\n ):\n raise UnsupportedAlgorithm(\n \"This backend does not support the ECDH algorithm.\",\n _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,\n )\n\n if peer_public_key.curve.name != self.curve.name:\n raise ValueError(\n \"peer_public_key and self are not on the same curve\"\n )\n\n return _evp_pkey_derive(self._backend, self._evp_pkey, peer_public_key)\n\n def public_key(self) -> ec.EllipticCurvePublicKey:\n group = self._backend._lib.EC_KEY_get0_group(self._ec_key)\n self._backend.openssl_assert(group != self._backend._ffi.NULL)\n\n curve_nid = self._backend._lib.EC_GROUP_get_curve_name(group)\n public_ec_key = self._backend._ec_key_new_by_curve_nid(curve_nid)\n\n point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key)\n self._backend.openssl_assert(point != self._backend._ffi.NULL)\n\n res = self._backend._lib.EC_KEY_set_public_key(public_ec_key, point)\n self._backend.openssl_assert(res == 1)\n\n evp_pkey = self._backend._ec_cdata_to_evp_pkey(public_ec_key)\n\n return _EllipticCurvePublicKey(self._backend, public_ec_key, evp_pkey)\n\n def private_numbers(self) -> ec.EllipticCurvePrivateNumbers:\n bn = self._backend._lib.EC_KEY_get0_private_key(self._ec_key)\n private_value = self._backend._bn_to_int(bn)\n return ec.EllipticCurvePrivateNumbers(\n private_value=private_value,\n public_numbers=self.public_key().public_numbers(),\n )\n\n def private_bytes(\n self,\n encoding: serialization.Encoding,\n format: serialization.PrivateFormat,\n encryption_algorithm: 
serialization.KeySerializationEncryption,\n ) -> bytes:\n return self._backend._private_key_bytes(\n encoding,\n format,\n encryption_algorithm,\n self,\n self._evp_pkey,\n self._ec_key,\n )\n\n def sign(\n self,\n data: bytes,\n signature_algorithm: ec.EllipticCurveSignatureAlgorithm,\n ) -> bytes:\n _check_signature_algorithm(signature_algorithm)\n data, _ = _calculate_digest_and_algorithm(\n data,\n signature_algorithm.algorithm,\n )\n return _ecdsa_sig_sign(self._backend, self, data)\n\n\nclass _EllipticCurvePublicKey(ec.EllipticCurvePublicKey):\n def __init__(self, backend: \"Backend\", ec_key_cdata, evp_pkey):\n self._backend = backend\n self._ec_key = ec_key_cdata\n self._evp_pkey = evp_pkey\n\n sn = _ec_key_curve_sn(backend, ec_key_cdata)\n self._curve = _sn_to_elliptic_curve(backend, sn)\n _mark_asn1_named_ec_curve(backend, ec_key_cdata)\n\n @property\n def curve(self) -> ec.EllipticCurve:\n return self._curve\n\n @property\n def key_size(self) -> int:\n return self.curve.key_size\n\n def public_numbers(self) -> ec.EllipticCurvePublicNumbers:\n get_func, group = self._backend._ec_key_determine_group_get_func(\n self._ec_key\n )\n point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key)\n self._backend.openssl_assert(point != self._backend._ffi.NULL)\n\n with self._backend._tmp_bn_ctx() as bn_ctx:\n bn_x = self._backend._lib.BN_CTX_get(bn_ctx)\n bn_y = self._backend._lib.BN_CTX_get(bn_ctx)\n\n res = get_func(group, point, bn_x, bn_y, bn_ctx)\n self._backend.openssl_assert(res == 1)\n\n x = self._backend._bn_to_int(bn_x)\n y = self._backend._bn_to_int(bn_y)\n\n return ec.EllipticCurvePublicNumbers(x=x, y=y, curve=self._curve)\n\n def _encode_point(self, format: serialization.PublicFormat) -> bytes:\n if format is serialization.PublicFormat.CompressedPoint:\n conversion = self._backend._lib.POINT_CONVERSION_COMPRESSED\n else:\n assert format is serialization.PublicFormat.UncompressedPoint\n conversion = self._backend._lib.POINT_CONVERSION_UNCOMPRESSED\n\n group = self._backend._lib.EC_KEY_get0_group(self._ec_key)\n self._backend.openssl_assert(group != self._backend._ffi.NULL)\n point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key)\n self._backend.openssl_assert(point != self._backend._ffi.NULL)\n with self._backend._tmp_bn_ctx() as bn_ctx:\n buflen = self._backend._lib.EC_POINT_point2oct(\n group, point, conversion, self._backend._ffi.NULL, 0, bn_ctx\n )\n self._backend.openssl_assert(buflen > 0)\n buf = self._backend._ffi.new(\"char[]\", buflen)\n res = self._backend._lib.EC_POINT_point2oct(\n group, point, conversion, buf, buflen, bn_ctx\n )\n self._backend.openssl_assert(buflen == res)\n\n return self._backend._ffi.buffer(buf)[:]\n\n def public_bytes(\n self,\n encoding: serialization.Encoding,\n format: serialization.PublicFormat,\n ) -> bytes:\n if (\n encoding is serialization.Encoding.X962\n or format is serialization.PublicFormat.CompressedPoint\n or format is serialization.PublicFormat.UncompressedPoint\n ):\n if encoding is not serialization.Encoding.X962 or format not in (\n serialization.PublicFormat.CompressedPoint,\n serialization.PublicFormat.UncompressedPoint,\n ):\n raise ValueError(\n \"X962 encoding must be used with CompressedPoint or \"\n \"UncompressedPoint format\"\n )\n\n return self._encode_point(format)\n else:\n return self._backend._public_key_bytes(\n encoding, format, self, self._evp_pkey, None\n )\n\n def verify(\n self,\n signature: bytes,\n data: bytes,\n signature_algorithm: ec.EllipticCurveSignatureAlgorithm,\n ) -> None:\n 
_check_signature_algorithm(signature_algorithm)\n data, _ = _calculate_digest_and_algorithm(\n data,\n signature_algorithm.algorithm,\n )\n _ecdsa_sig_verify(self._backend, self, signature, data)\n", "path": "src/cryptography/hazmat/backends/openssl/ec.py"}], "after_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nimport typing\n\nfrom cryptography.exceptions import (\n InvalidSignature,\n UnsupportedAlgorithm,\n _Reasons,\n)\nfrom cryptography.hazmat.backends.openssl.utils import (\n _calculate_digest_and_algorithm,\n _evp_pkey_derive,\n)\nfrom cryptography.hazmat.primitives import serialization\nfrom cryptography.hazmat.primitives.asymmetric import ec\n\nif typing.TYPE_CHECKING:\n from cryptography.hazmat.backends.openssl.backend import Backend\n\n\ndef _check_signature_algorithm(\n signature_algorithm: ec.EllipticCurveSignatureAlgorithm,\n) -> None:\n if not isinstance(signature_algorithm, ec.ECDSA):\n raise UnsupportedAlgorithm(\n \"Unsupported elliptic curve signature algorithm.\",\n _Reasons.UNSUPPORTED_PUBLIC_KEY_ALGORITHM,\n )\n\n\ndef _ec_key_curve_sn(backend: \"Backend\", ec_key) -> str:\n group = backend._lib.EC_KEY_get0_group(ec_key)\n backend.openssl_assert(group != backend._ffi.NULL)\n\n nid = backend._lib.EC_GROUP_get_curve_name(group)\n # The following check is to find EC keys with unnamed curves and raise\n # an error for now.\n if nid == backend._lib.NID_undef:\n raise ValueError(\n \"ECDSA keys with explicit parameters are unsupported at this time\"\n )\n\n # This is like the above check, but it also catches the case where you\n # explicitly encoded a curve with the same parameters as a named curve.\n # Don't do that.\n if (\n not backend._lib.CRYPTOGRAPHY_IS_LIBRESSL\n and backend._lib.EC_GROUP_get_asn1_flag(group) == 0\n ):\n raise ValueError(\n \"ECDSA keys with explicit parameters are unsupported at this time\"\n )\n\n curve_name = backend._lib.OBJ_nid2sn(nid)\n backend.openssl_assert(curve_name != backend._ffi.NULL)\n\n sn = backend._ffi.string(curve_name).decode(\"ascii\")\n return sn\n\n\ndef _mark_asn1_named_ec_curve(backend: \"Backend\", ec_cdata):\n \"\"\"\n Set the named curve flag on the EC_KEY. 
This causes OpenSSL to\n serialize EC keys along with their curve OID which makes\n deserialization easier.\n \"\"\"\n\n backend._lib.EC_KEY_set_asn1_flag(\n ec_cdata, backend._lib.OPENSSL_EC_NAMED_CURVE\n )\n\n\ndef _check_key_infinity(backend: \"Backend\", ec_cdata) -> None:\n point = backend._lib.EC_KEY_get0_public_key(ec_cdata)\n backend.openssl_assert(point != backend._ffi.NULL)\n group = backend._lib.EC_KEY_get0_group(ec_cdata)\n backend.openssl_assert(group != backend._ffi.NULL)\n if backend._lib.EC_POINT_is_at_infinity(group, point):\n raise ValueError(\n \"Cannot load an EC public key where the point is at infinity\"\n )\n\n\ndef _sn_to_elliptic_curve(backend: \"Backend\", sn: str) -> ec.EllipticCurve:\n try:\n return ec._CURVE_TYPES[sn]()\n except KeyError:\n raise UnsupportedAlgorithm(\n \"{} is not a supported elliptic curve\".format(sn),\n _Reasons.UNSUPPORTED_ELLIPTIC_CURVE,\n )\n\n\ndef _ecdsa_sig_sign(\n backend: \"Backend\", private_key: \"_EllipticCurvePrivateKey\", data: bytes\n) -> bytes:\n max_size = backend._lib.ECDSA_size(private_key._ec_key)\n backend.openssl_assert(max_size > 0)\n\n sigbuf = backend._ffi.new(\"unsigned char[]\", max_size)\n siglen_ptr = backend._ffi.new(\"unsigned int[]\", 1)\n res = backend._lib.ECDSA_sign(\n 0, data, len(data), sigbuf, siglen_ptr, private_key._ec_key\n )\n backend.openssl_assert(res == 1)\n return backend._ffi.buffer(sigbuf)[: siglen_ptr[0]]\n\n\ndef _ecdsa_sig_verify(\n backend: \"Backend\",\n public_key: \"_EllipticCurvePublicKey\",\n signature: bytes,\n data: bytes,\n) -> None:\n res = backend._lib.ECDSA_verify(\n 0, data, len(data), signature, len(signature), public_key._ec_key\n )\n if res != 1:\n backend._consume_errors()\n raise InvalidSignature\n\n\nclass _EllipticCurvePrivateKey(ec.EllipticCurvePrivateKey):\n def __init__(self, backend: \"Backend\", ec_key_cdata, evp_pkey):\n self._backend = backend\n self._ec_key = ec_key_cdata\n self._evp_pkey = evp_pkey\n\n sn = _ec_key_curve_sn(backend, ec_key_cdata)\n self._curve = _sn_to_elliptic_curve(backend, sn)\n _mark_asn1_named_ec_curve(backend, ec_key_cdata)\n _check_key_infinity(backend, ec_key_cdata)\n\n @property\n def curve(self) -> ec.EllipticCurve:\n return self._curve\n\n @property\n def key_size(self) -> int:\n return self.curve.key_size\n\n def exchange(\n self, algorithm: ec.ECDH, peer_public_key: ec.EllipticCurvePublicKey\n ) -> bytes:\n if not (\n self._backend.elliptic_curve_exchange_algorithm_supported(\n algorithm, self.curve\n )\n ):\n raise UnsupportedAlgorithm(\n \"This backend does not support the ECDH algorithm.\",\n _Reasons.UNSUPPORTED_EXCHANGE_ALGORITHM,\n )\n\n if peer_public_key.curve.name != self.curve.name:\n raise ValueError(\n \"peer_public_key and self are not on the same curve\"\n )\n\n return _evp_pkey_derive(self._backend, self._evp_pkey, peer_public_key)\n\n def public_key(self) -> ec.EllipticCurvePublicKey:\n group = self._backend._lib.EC_KEY_get0_group(self._ec_key)\n self._backend.openssl_assert(group != self._backend._ffi.NULL)\n\n curve_nid = self._backend._lib.EC_GROUP_get_curve_name(group)\n public_ec_key = self._backend._ec_key_new_by_curve_nid(curve_nid)\n\n point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key)\n self._backend.openssl_assert(point != self._backend._ffi.NULL)\n\n res = self._backend._lib.EC_KEY_set_public_key(public_ec_key, point)\n self._backend.openssl_assert(res == 1)\n\n evp_pkey = self._backend._ec_cdata_to_evp_pkey(public_ec_key)\n\n return _EllipticCurvePublicKey(self._backend, public_ec_key, 
evp_pkey)\n\n def private_numbers(self) -> ec.EllipticCurvePrivateNumbers:\n bn = self._backend._lib.EC_KEY_get0_private_key(self._ec_key)\n private_value = self._backend._bn_to_int(bn)\n return ec.EllipticCurvePrivateNumbers(\n private_value=private_value,\n public_numbers=self.public_key().public_numbers(),\n )\n\n def private_bytes(\n self,\n encoding: serialization.Encoding,\n format: serialization.PrivateFormat,\n encryption_algorithm: serialization.KeySerializationEncryption,\n ) -> bytes:\n return self._backend._private_key_bytes(\n encoding,\n format,\n encryption_algorithm,\n self,\n self._evp_pkey,\n self._ec_key,\n )\n\n def sign(\n self,\n data: bytes,\n signature_algorithm: ec.EllipticCurveSignatureAlgorithm,\n ) -> bytes:\n _check_signature_algorithm(signature_algorithm)\n data, _ = _calculate_digest_and_algorithm(\n data,\n signature_algorithm.algorithm,\n )\n return _ecdsa_sig_sign(self._backend, self, data)\n\n\nclass _EllipticCurvePublicKey(ec.EllipticCurvePublicKey):\n def __init__(self, backend: \"Backend\", ec_key_cdata, evp_pkey):\n self._backend = backend\n self._ec_key = ec_key_cdata\n self._evp_pkey = evp_pkey\n\n sn = _ec_key_curve_sn(backend, ec_key_cdata)\n self._curve = _sn_to_elliptic_curve(backend, sn)\n _mark_asn1_named_ec_curve(backend, ec_key_cdata)\n _check_key_infinity(backend, ec_key_cdata)\n\n @property\n def curve(self) -> ec.EllipticCurve:\n return self._curve\n\n @property\n def key_size(self) -> int:\n return self.curve.key_size\n\n def public_numbers(self) -> ec.EllipticCurvePublicNumbers:\n get_func, group = self._backend._ec_key_determine_group_get_func(\n self._ec_key\n )\n point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key)\n self._backend.openssl_assert(point != self._backend._ffi.NULL)\n\n with self._backend._tmp_bn_ctx() as bn_ctx:\n bn_x = self._backend._lib.BN_CTX_get(bn_ctx)\n bn_y = self._backend._lib.BN_CTX_get(bn_ctx)\n\n res = get_func(group, point, bn_x, bn_y, bn_ctx)\n self._backend.openssl_assert(res == 1)\n\n x = self._backend._bn_to_int(bn_x)\n y = self._backend._bn_to_int(bn_y)\n\n return ec.EllipticCurvePublicNumbers(x=x, y=y, curve=self._curve)\n\n def _encode_point(self, format: serialization.PublicFormat) -> bytes:\n if format is serialization.PublicFormat.CompressedPoint:\n conversion = self._backend._lib.POINT_CONVERSION_COMPRESSED\n else:\n assert format is serialization.PublicFormat.UncompressedPoint\n conversion = self._backend._lib.POINT_CONVERSION_UNCOMPRESSED\n\n group = self._backend._lib.EC_KEY_get0_group(self._ec_key)\n self._backend.openssl_assert(group != self._backend._ffi.NULL)\n point = self._backend._lib.EC_KEY_get0_public_key(self._ec_key)\n self._backend.openssl_assert(point != self._backend._ffi.NULL)\n with self._backend._tmp_bn_ctx() as bn_ctx:\n buflen = self._backend._lib.EC_POINT_point2oct(\n group, point, conversion, self._backend._ffi.NULL, 0, bn_ctx\n )\n self._backend.openssl_assert(buflen > 0)\n buf = self._backend._ffi.new(\"char[]\", buflen)\n res = self._backend._lib.EC_POINT_point2oct(\n group, point, conversion, buf, buflen, bn_ctx\n )\n self._backend.openssl_assert(buflen == res)\n\n return self._backend._ffi.buffer(buf)[:]\n\n def public_bytes(\n self,\n encoding: serialization.Encoding,\n format: serialization.PublicFormat,\n ) -> bytes:\n if (\n encoding is serialization.Encoding.X962\n or format is serialization.PublicFormat.CompressedPoint\n or format is serialization.PublicFormat.UncompressedPoint\n ):\n if encoding is not serialization.Encoding.X962 or format not in (\n 
serialization.PublicFormat.CompressedPoint,\n serialization.PublicFormat.UncompressedPoint,\n ):\n raise ValueError(\n \"X962 encoding must be used with CompressedPoint or \"\n \"UncompressedPoint format\"\n )\n\n return self._encode_point(format)\n else:\n return self._backend._public_key_bytes(\n encoding, format, self, self._evp_pkey, None\n )\n\n def verify(\n self,\n signature: bytes,\n data: bytes,\n signature_algorithm: ec.EllipticCurveSignatureAlgorithm,\n ) -> None:\n _check_signature_algorithm(signature_algorithm)\n data, _ = _calculate_digest_and_algorithm(\n data,\n signature_algorithm.algorithm,\n )\n _ecdsa_sig_verify(self._backend, self, signature, data)\n", "path": "src/cryptography/hazmat/backends/openssl/ec.py"}]} | 4,003 | 436 |
gh_patches_debug_35103 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1001 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
libvips chokes on dzi conversion with sparse inputs
Upload a TIFF file which is sparse; then, in `tiff.py` line 212:
```
image = pyvips.Image.new_from_file(
str(tiff_file.path.absolute()), access="sequential"
)
pyvips.Image.dzsave(
image, dzi_output, tile_size=settings.DZI_TILE_SIZE
)
```
The `dzsave` call will fail on the empty tiles, e.g.:
```
["Image can't be converted to dzi: unable to call dzsave\n TIFFFillTile: 0: Invalid tile byte count, tile 1\nTIFFFillTile: 0: Invalid tile byte count, tile 3\nTIFFFillTile: 0: Invalid tile byte count, tile 4\nTIFFFillTile: 0: Invalid tile byte count, tile 5\nTIFFFillTile: 0: Invalid tile byte count, tile 6\nTIFFFillTile: 0: Invalid tile byte count, tile 7\nTIFFFillTile: 0: Invalid tile byte count, tile 8\nTIFFFillTile: 0: Invalid tile byte count, tile 9\nTIFFFillTile: 0: Invalid tile byte count, tile 10\nTIFFFillTile: 0: Invalid tile byte count, tile 11\nTIFFFillTile: 0: Invalid tile byte count, tile 12\nTIFFFillTile: 0: Invalid tile byte count, tile 13\nTIFFFillTile: 0: Invalid tile byte count, tile 14\nTIFFFillTile: 0: Invalid tile byte count, tile 15\nTIFFFillTile: 0: Invalid tile byte count, tile 16\nTIFFFillTile: 0: Invalid tile byte count, tile 17\nTIFFFillTile: 0: Invalid tile byte count, tile 18\nTIFFFillTile: 0: Invalid tile byte count, tile 19\nTIFFFillTile: 0: Invalid tile byte count, tile 20\nTIFFFillTile: 0: Invalid tile byte count, tile 23\n"]
```
Test image shared with Miriam on slack.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/cases/image_builders/tiff.py`
Content:
```
1 from pathlib import Path
2 from tempfile import TemporaryFile
3 from typing import NamedTuple
4 from uuid import uuid4
5
6 import pyvips
7 import tifffile
8 from django.conf import settings
9 from django.core.exceptions import ValidationError
10 from django.core.files import File
11
12 from grandchallenge.cases.image_builders import ImageBuilderResult
13 from grandchallenge.cases.models import FolderUpload, Image, ImageFile
14
15
16 class GrandChallengeTiffFileTags(NamedTuple):
17 image_width: int
18 image_height: int
19 resolution_levels: int
20 color_space: str
21
22
23 class GrandChallengeTiffFile(NamedTuple):
24 path: Path
25 tags: GrandChallengeTiffFileTags
26
27
28 def load_tiff_file(*, path: Path) -> GrandChallengeTiffFile:
29 """
30 Loads and validates a file using tifffile
31 :param path: The path to the potential tiff file
32 :return: A tiff file that can be used in the rest of grand challenge
33 """
34 try:
35 file = tifffile.TiffFile(str(path.absolute()))
36 except ValueError:
37 raise ValidationError("Image isn't a TIFF file")
38
39 tags = _validate_tifffile(pages=file.pages)
40
41 return GrandChallengeTiffFile(path=path, tags=tags)
42
43
44 def _validate_tifffile( # noqa: C901
45 *, pages: tifffile.tifffile.TiffPages
46 ) -> GrandChallengeTiffFileTags:
47 """
48 Validates a tiff file loaded with tifffile for use in grand challenge
49 :param pages: The pages and tags from tiffile
50 :return: The extracted tags that are needed by the rest of the framework
51 """
52 required_tile_tags = ("TileOffsets", "TileByteCounts")
53
54 forbidden_description_tags = ("dicom", "xml")
55
56 tags = pages[0].tags
57
58 # Checks if the image description exists,
59 # if so, ensure there's no DICOM or XML data
60 try:
61 image_description = str(tags["ImageDescription"].value).lower()
62 for forbidden in forbidden_description_tags:
63 if forbidden in image_description:
64 raise ValidationError(
65 "Image contains unauthorized information"
66 )
67 except KeyError:
68 pass
69
70 # Fails if the image doesn't have all required tile tags
71 if not all(tag in tags for tag in required_tile_tags):
72 raise ValidationError("Image has incomplete tile information")
73
74 # Fails if the image only has a single resolution page
75 resolution_levels = len(pages)
76 if resolution_levels == 1:
77 raise ValidationError("Image only has a single resolution level")
78
79 # Fails if the image doesn't have the chunky format
80 if str(tags["PlanarConfiguration"].value) != "PLANARCONFIG.CONTIG":
81 raise ValidationError(
82 "Image planar configuration isn't configured as 'Chunky' format"
83 )
84
85 # Fails if the color space isn't supported
86 try:
87 color_space = get_color_space(
88 str(tags["PhotometricInterpretation"].value)
89 )
90 except KeyError:
91 raise ValidationError("Image lacks color space information")
92
93 # Fails if the amount of bytes per sample doesn't correspond to the
94 # colour space
95 tif_color_channels = tags["SamplesPerPixel"].value
96 if Image.COLOR_SPACE_COMPONENTS[color_space] != tif_color_channels:
97 raise ValidationError("Image contains invalid amount of channels.")
98
99 try:
100 image_width = tags["ImageWidth"].value
101 image_height = tags["ImageLength"].value
102 except KeyError:
103 raise ValidationError("Missing tags in tiff file")
104
105 return GrandChallengeTiffFileTags(
106 image_width=image_width,
107 image_height=image_height,
108 color_space=color_space,
109 resolution_levels=resolution_levels,
110 )
111
112
113 def get_color_space(color_space_string) -> Image.COLOR_SPACES:
114 color_space_string = color_space_string.split(".")[1].upper()
115
116 if color_space_string == "MINISBLACK":
117 color_space = Image.COLOR_SPACE_GRAY
118 else:
119 try:
120 color_space = dict(Image.COLOR_SPACES)[color_space_string]
121 except KeyError:
122 raise ValidationError("Invalid color space")
123
124 return color_space
125
126
127 def image_builder_tiff(path: Path) -> ImageBuilderResult:
128 new_images = []
129 new_image_files = []
130 consumed_files = set()
131 invalid_file_errors = {}
132 new_folder_upload = []
133
134 for file_path in path.iterdir():
135 pk = uuid4()
136
137 try:
138 tiff_file = load_tiff_file(path=file_path)
139 dzi_output = create_dzi_images(tiff_file=tiff_file, pk=pk)
140 except ValidationError as e:
141 invalid_file_errors[file_path.name] = str(e)
142 continue
143
144 image = create_tiff_image_entry(tiff_file=tiff_file, pk=pk)
145
146 temp_file = TemporaryFile()
147 with open(tiff_file.path.absolute(), "rb") as open_file:
148 buffer = True
149 while buffer:
150 buffer = open_file.read(1024)
151 temp_file.write(buffer)
152
153 new_image_files.append(
154 ImageFile(
155 image=image,
156 image_type=ImageFile.IMAGE_TYPE_TIFF,
157 file=File(temp_file, name=f"{image.pk}.tif"),
158 )
159 )
160
161 temp_dzi_file = TemporaryFile()
162 with open(dzi_output + ".dzi", "rb") as open_file:
163 buffer = True
164 while buffer:
165 buffer = open_file.read(1024)
166 temp_dzi_file.write(buffer)
167
168 new_image_files.append(
169 ImageFile(
170 image=image,
171 image_type=ImageFile.IMAGE_TYPE_DZI,
172 file=File(temp_dzi_file, name=f"{image.pk}.dzi"),
173 )
174 )
175
176 dzi_folder_upload = FolderUpload(
177 folder=dzi_output + "_files", image=image
178 )
179 new_images.append(image)
180 consumed_files.add(tiff_file.path.name)
181 new_folder_upload.append(dzi_folder_upload)
182
183 return ImageBuilderResult(
184 consumed_files=consumed_files,
185 file_errors_map=invalid_file_errors,
186 new_images=new_images,
187 new_image_files=new_image_files,
188 new_folder_upload=new_folder_upload,
189 )
190
191
192 def create_tiff_image_entry(*, tiff_file: GrandChallengeTiffFile, pk) -> Image:
193 # Builds a new Image model item
194 return Image(
195 pk=pk,
196 name=tiff_file.path.name,
197 width=tiff_file.tags.image_width,
198 height=tiff_file.tags.image_height,
199 depth=1,
200 resolution_levels=tiff_file.tags.resolution_levels,
201 color_space=tiff_file.tags.color_space,
202 eye_choice=Image.EYE_UNKNOWN,
203 stereoscopic_choice=Image.STEREOSCOPIC_UNKNOWN,
204 field_of_view=Image.FOV_UNKNOWN,
205 )
206
207
208 def create_dzi_images(*, tiff_file: GrandChallengeTiffFile, pk) -> str:
209 # Creates a dzi file(out.dzi) and corresponding tiles in folder {pk}_files
210 dzi_output = str(tiff_file.path.parent / str(pk))
211 try:
212 image = pyvips.Image.new_from_file(
213 str(tiff_file.path.absolute()), access="sequential"
214 )
215
216 pyvips.Image.dzsave(
217 image, dzi_output, tile_size=settings.DZI_TILE_SIZE
218 )
219 except Exception as e:
220 raise ValidationError("Image can't be converted to dzi: " + str(e))
221
222 return dzi_output
223
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/grandchallenge/cases/image_builders/tiff.py b/app/grandchallenge/cases/image_builders/tiff.py
--- a/app/grandchallenge/cases/image_builders/tiff.py
+++ b/app/grandchallenge/cases/image_builders/tiff.py
@@ -133,14 +133,18 @@
for file_path in path.iterdir():
pk = uuid4()
-
+ dzi_output = None
try:
tiff_file = load_tiff_file(path=file_path)
- dzi_output = create_dzi_images(tiff_file=tiff_file, pk=pk)
except ValidationError as e:
invalid_file_errors[file_path.name] = str(e)
continue
+ try:
+ dzi_output = create_dzi_images(tiff_file=tiff_file, pk=pk)
+ except ValidationError as e:
+ invalid_file_errors[file_path.name] = str(e)
+
image = create_tiff_image_entry(tiff_file=tiff_file, pk=pk)
temp_file = TemporaryFile()
@@ -158,27 +162,29 @@
)
)
- temp_dzi_file = TemporaryFile()
- with open(dzi_output + ".dzi", "rb") as open_file:
- buffer = True
- while buffer:
- buffer = open_file.read(1024)
- temp_dzi_file.write(buffer)
+ if dzi_output:
+ temp_dzi_file = TemporaryFile()
+ with open(dzi_output + ".dzi", "rb") as open_file:
+ buffer = True
+ while buffer:
+ buffer = open_file.read(1024)
+ temp_dzi_file.write(buffer)
+
+ new_image_files.append(
+ ImageFile(
+ image=image,
+ image_type=ImageFile.IMAGE_TYPE_DZI,
+ file=File(temp_dzi_file, name=f"{image.pk}.dzi"),
+ )
+ )
- new_image_files.append(
- ImageFile(
- image=image,
- image_type=ImageFile.IMAGE_TYPE_DZI,
- file=File(temp_dzi_file, name=f"{image.pk}.dzi"),
+ dzi_folder_upload = FolderUpload(
+ folder=dzi_output + "_files", image=image
)
- )
+ new_folder_upload.append(dzi_folder_upload)
- dzi_folder_upload = FolderUpload(
- folder=dzi_output + "_files", image=image
- )
new_images.append(image)
consumed_files.add(tiff_file.path.name)
- new_folder_upload.append(dzi_folder_upload)
return ImageBuilderResult(
consumed_files=consumed_files,
| {"golden_diff": "diff --git a/app/grandchallenge/cases/image_builders/tiff.py b/app/grandchallenge/cases/image_builders/tiff.py\n--- a/app/grandchallenge/cases/image_builders/tiff.py\n+++ b/app/grandchallenge/cases/image_builders/tiff.py\n@@ -133,14 +133,18 @@\n \n for file_path in path.iterdir():\n pk = uuid4()\n-\n+ dzi_output = None\n try:\n tiff_file = load_tiff_file(path=file_path)\n- dzi_output = create_dzi_images(tiff_file=tiff_file, pk=pk)\n except ValidationError as e:\n invalid_file_errors[file_path.name] = str(e)\n continue\n \n+ try:\n+ dzi_output = create_dzi_images(tiff_file=tiff_file, pk=pk)\n+ except ValidationError as e:\n+ invalid_file_errors[file_path.name] = str(e)\n+\n image = create_tiff_image_entry(tiff_file=tiff_file, pk=pk)\n \n temp_file = TemporaryFile()\n@@ -158,27 +162,29 @@\n )\n )\n \n- temp_dzi_file = TemporaryFile()\n- with open(dzi_output + \".dzi\", \"rb\") as open_file:\n- buffer = True\n- while buffer:\n- buffer = open_file.read(1024)\n- temp_dzi_file.write(buffer)\n+ if dzi_output:\n+ temp_dzi_file = TemporaryFile()\n+ with open(dzi_output + \".dzi\", \"rb\") as open_file:\n+ buffer = True\n+ while buffer:\n+ buffer = open_file.read(1024)\n+ temp_dzi_file.write(buffer)\n+\n+ new_image_files.append(\n+ ImageFile(\n+ image=image,\n+ image_type=ImageFile.IMAGE_TYPE_DZI,\n+ file=File(temp_dzi_file, name=f\"{image.pk}.dzi\"),\n+ )\n+ )\n \n- new_image_files.append(\n- ImageFile(\n- image=image,\n- image_type=ImageFile.IMAGE_TYPE_DZI,\n- file=File(temp_dzi_file, name=f\"{image.pk}.dzi\"),\n+ dzi_folder_upload = FolderUpload(\n+ folder=dzi_output + \"_files\", image=image\n )\n- )\n+ new_folder_upload.append(dzi_folder_upload)\n \n- dzi_folder_upload = FolderUpload(\n- folder=dzi_output + \"_files\", image=image\n- )\n new_images.append(image)\n consumed_files.add(tiff_file.path.name)\n- new_folder_upload.append(dzi_folder_upload)\n \n return ImageBuilderResult(\n consumed_files=consumed_files,\n", "issue": "libvips chokes on dzi conversion with sparse inputs\nUpload a tif file which is sparse, in `tiff.py` line 212:\r\n\r\n```\r\nimage = pyvips.Image.new_from_file(\r\n str(tiff_file.path.absolute()), access=\"sequential\"\r\n )\r\n\r\n pyvips.Image.dzsave(\r\n image, dzi_output, tile_size=settings.DZI_TILE_SIZE\r\n )\r\n```\r\n\r\nThe `dzsave` will fail with empty tiles, eg:\r\n\r\n```\r\n[\"Image can't be converted to dzi: unable to call dzsave\\n TIFFFillTile: 0: Invalid tile byte count, tile 1\\nTIFFFillTile: 0: Invalid tile byte count, tile 3\\nTIFFFillTile: 0: Invalid tile byte count, tile 4\\nTIFFFillTile: 0: Invalid tile byte count, tile 5\\nTIFFFillTile: 0: Invalid tile byte count, tile 6\\nTIFFFillTile: 0: Invalid tile byte count, tile 7\\nTIFFFillTile: 0: Invalid tile byte count, tile 8\\nTIFFFillTile: 0: Invalid tile byte count, tile 9\\nTIFFFillTile: 0: Invalid tile byte count, tile 10\\nTIFFFillTile: 0: Invalid tile byte count, tile 11\\nTIFFFillTile: 0: Invalid tile byte count, tile 12\\nTIFFFillTile: 0: Invalid tile byte count, tile 13\\nTIFFFillTile: 0: Invalid tile byte count, tile 14\\nTIFFFillTile: 0: Invalid tile byte count, tile 15\\nTIFFFillTile: 0: Invalid tile byte count, tile 16\\nTIFFFillTile: 0: Invalid tile byte count, tile 17\\nTIFFFillTile: 0: Invalid tile byte count, tile 18\\nTIFFFillTile: 0: Invalid tile byte count, tile 19\\nTIFFFillTile: 0: Invalid tile byte count, tile 20\\nTIFFFillTile: 0: Invalid tile byte count, tile 23\\n\"]\r\n```\r\n\r\nTest image shared with Miriam on slack.\n", "before_files": [{"content": "from 
pathlib import Path\nfrom tempfile import TemporaryFile\nfrom typing import NamedTuple\nfrom uuid import uuid4\n\nimport pyvips\nimport tifffile\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.core.files import File\n\nfrom grandchallenge.cases.image_builders import ImageBuilderResult\nfrom grandchallenge.cases.models import FolderUpload, Image, ImageFile\n\n\nclass GrandChallengeTiffFileTags(NamedTuple):\n image_width: int\n image_height: int\n resolution_levels: int\n color_space: str\n\n\nclass GrandChallengeTiffFile(NamedTuple):\n path: Path\n tags: GrandChallengeTiffFileTags\n\n\ndef load_tiff_file(*, path: Path) -> GrandChallengeTiffFile:\n \"\"\"\n Loads and validates a file using tifffile\n :param path: The path to the potential tiff file\n :return: A tiff file that can be used in the rest of grand challenge\n \"\"\"\n try:\n file = tifffile.TiffFile(str(path.absolute()))\n except ValueError:\n raise ValidationError(\"Image isn't a TIFF file\")\n\n tags = _validate_tifffile(pages=file.pages)\n\n return GrandChallengeTiffFile(path=path, tags=tags)\n\n\ndef _validate_tifffile( # noqa: C901\n *, pages: tifffile.tifffile.TiffPages\n) -> GrandChallengeTiffFileTags:\n \"\"\"\n Validates a tiff file loaded with tifffile for use in grand challenge\n :param pages: The pages and tags from tiffile\n :return: The extracted tags that are needed by the rest of the framework\n \"\"\"\n required_tile_tags = (\"TileOffsets\", \"TileByteCounts\")\n\n forbidden_description_tags = (\"dicom\", \"xml\")\n\n tags = pages[0].tags\n\n # Checks if the image description exists,\n # if so, ensure there's no DICOM or XML data\n try:\n image_description = str(tags[\"ImageDescription\"].value).lower()\n for forbidden in forbidden_description_tags:\n if forbidden in image_description:\n raise ValidationError(\n \"Image contains unauthorized information\"\n )\n except KeyError:\n pass\n\n # Fails if the image doesn't have all required tile tags\n if not all(tag in tags for tag in required_tile_tags):\n raise ValidationError(\"Image has incomplete tile information\")\n\n # Fails if the image only has a single resolution page\n resolution_levels = len(pages)\n if resolution_levels == 1:\n raise ValidationError(\"Image only has a single resolution level\")\n\n # Fails if the image doesn't have the chunky format\n if str(tags[\"PlanarConfiguration\"].value) != \"PLANARCONFIG.CONTIG\":\n raise ValidationError(\n \"Image planar configuration isn't configured as 'Chunky' format\"\n )\n\n # Fails if the color space isn't supported\n try:\n color_space = get_color_space(\n str(tags[\"PhotometricInterpretation\"].value)\n )\n except KeyError:\n raise ValidationError(\"Image lacks color space information\")\n\n # Fails if the amount of bytes per sample doesn't correspond to the\n # colour space\n tif_color_channels = tags[\"SamplesPerPixel\"].value\n if Image.COLOR_SPACE_COMPONENTS[color_space] != tif_color_channels:\n raise ValidationError(\"Image contains invalid amount of channels.\")\n\n try:\n image_width = tags[\"ImageWidth\"].value\n image_height = tags[\"ImageLength\"].value\n except KeyError:\n raise ValidationError(\"Missing tags in tiff file\")\n\n return GrandChallengeTiffFileTags(\n image_width=image_width,\n image_height=image_height,\n color_space=color_space,\n resolution_levels=resolution_levels,\n )\n\n\ndef get_color_space(color_space_string) -> Image.COLOR_SPACES:\n color_space_string = color_space_string.split(\".\")[1].upper()\n\n if color_space_string 
== \"MINISBLACK\":\n color_space = Image.COLOR_SPACE_GRAY\n else:\n try:\n color_space = dict(Image.COLOR_SPACES)[color_space_string]\n except KeyError:\n raise ValidationError(\"Invalid color space\")\n\n return color_space\n\n\ndef image_builder_tiff(path: Path) -> ImageBuilderResult:\n new_images = []\n new_image_files = []\n consumed_files = set()\n invalid_file_errors = {}\n new_folder_upload = []\n\n for file_path in path.iterdir():\n pk = uuid4()\n\n try:\n tiff_file = load_tiff_file(path=file_path)\n dzi_output = create_dzi_images(tiff_file=tiff_file, pk=pk)\n except ValidationError as e:\n invalid_file_errors[file_path.name] = str(e)\n continue\n\n image = create_tiff_image_entry(tiff_file=tiff_file, pk=pk)\n\n temp_file = TemporaryFile()\n with open(tiff_file.path.absolute(), \"rb\") as open_file:\n buffer = True\n while buffer:\n buffer = open_file.read(1024)\n temp_file.write(buffer)\n\n new_image_files.append(\n ImageFile(\n image=image,\n image_type=ImageFile.IMAGE_TYPE_TIFF,\n file=File(temp_file, name=f\"{image.pk}.tif\"),\n )\n )\n\n temp_dzi_file = TemporaryFile()\n with open(dzi_output + \".dzi\", \"rb\") as open_file:\n buffer = True\n while buffer:\n buffer = open_file.read(1024)\n temp_dzi_file.write(buffer)\n\n new_image_files.append(\n ImageFile(\n image=image,\n image_type=ImageFile.IMAGE_TYPE_DZI,\n file=File(temp_dzi_file, name=f\"{image.pk}.dzi\"),\n )\n )\n\n dzi_folder_upload = FolderUpload(\n folder=dzi_output + \"_files\", image=image\n )\n new_images.append(image)\n consumed_files.add(tiff_file.path.name)\n new_folder_upload.append(dzi_folder_upload)\n\n return ImageBuilderResult(\n consumed_files=consumed_files,\n file_errors_map=invalid_file_errors,\n new_images=new_images,\n new_image_files=new_image_files,\n new_folder_upload=new_folder_upload,\n )\n\n\ndef create_tiff_image_entry(*, tiff_file: GrandChallengeTiffFile, pk) -> Image:\n # Builds a new Image model item\n return Image(\n pk=pk,\n name=tiff_file.path.name,\n width=tiff_file.tags.image_width,\n height=tiff_file.tags.image_height,\n depth=1,\n resolution_levels=tiff_file.tags.resolution_levels,\n color_space=tiff_file.tags.color_space,\n eye_choice=Image.EYE_UNKNOWN,\n stereoscopic_choice=Image.STEREOSCOPIC_UNKNOWN,\n field_of_view=Image.FOV_UNKNOWN,\n )\n\n\ndef create_dzi_images(*, tiff_file: GrandChallengeTiffFile, pk) -> str:\n # Creates a dzi file(out.dzi) and corresponding tiles in folder {pk}_files\n dzi_output = str(tiff_file.path.parent / str(pk))\n try:\n image = pyvips.Image.new_from_file(\n str(tiff_file.path.absolute()), access=\"sequential\"\n )\n\n pyvips.Image.dzsave(\n image, dzi_output, tile_size=settings.DZI_TILE_SIZE\n )\n except Exception as e:\n raise ValidationError(\"Image can't be converted to dzi: \" + str(e))\n\n return dzi_output\n", "path": "app/grandchallenge/cases/image_builders/tiff.py"}], "after_files": [{"content": "from pathlib import Path\nfrom tempfile import TemporaryFile\nfrom typing import NamedTuple\nfrom uuid import uuid4\n\nimport pyvips\nimport tifffile\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.core.files import File\n\nfrom grandchallenge.cases.image_builders import ImageBuilderResult\nfrom grandchallenge.cases.models import FolderUpload, Image, ImageFile\n\n\nclass GrandChallengeTiffFileTags(NamedTuple):\n image_width: int\n image_height: int\n resolution_levels: int\n color_space: str\n\n\nclass GrandChallengeTiffFile(NamedTuple):\n path: Path\n tags: GrandChallengeTiffFileTags\n\n\ndef 
load_tiff_file(*, path: Path) -> GrandChallengeTiffFile:\n \"\"\"\n Loads and validates a file using tifffile\n :param path: The path to the potential tiff file\n :return: A tiff file that can be used in the rest of grand challenge\n \"\"\"\n try:\n file = tifffile.TiffFile(str(path.absolute()))\n except ValueError:\n raise ValidationError(\"Image isn't a TIFF file\")\n\n tags = _validate_tifffile(pages=file.pages)\n\n return GrandChallengeTiffFile(path=path, tags=tags)\n\n\ndef _validate_tifffile( # noqa: C901\n *, pages: tifffile.tifffile.TiffPages\n) -> GrandChallengeTiffFileTags:\n \"\"\"\n Validates a tiff file loaded with tifffile for use in grand challenge\n :param pages: The pages and tags from tiffile\n :return: The extracted tags that are needed by the rest of the framework\n \"\"\"\n required_tile_tags = (\"TileOffsets\", \"TileByteCounts\")\n\n forbidden_description_tags = (\"dicom\", \"xml\")\n\n tags = pages[0].tags\n\n # Checks if the image description exists,\n # if so, ensure there's no DICOM or XML data\n try:\n image_description = str(tags[\"ImageDescription\"].value).lower()\n for forbidden in forbidden_description_tags:\n if forbidden in image_description:\n raise ValidationError(\n \"Image contains unauthorized information\"\n )\n except KeyError:\n pass\n\n # Fails if the image doesn't have all required tile tags\n if not all(tag in tags for tag in required_tile_tags):\n raise ValidationError(\"Image has incomplete tile information\")\n\n # Fails if the image only has a single resolution page\n resolution_levels = len(pages)\n if resolution_levels == 1:\n raise ValidationError(\"Image only has a single resolution level\")\n\n # Fails if the image doesn't have the chunky format\n if str(tags[\"PlanarConfiguration\"].value) != \"PLANARCONFIG.CONTIG\":\n raise ValidationError(\n \"Image planar configuration isn't configured as 'Chunky' format\"\n )\n\n # Fails if the color space isn't supported\n try:\n color_space = get_color_space(\n str(tags[\"PhotometricInterpretation\"].value)\n )\n except KeyError:\n raise ValidationError(\"Image lacks color space information\")\n\n # Fails if the amount of bytes per sample doesn't correspond to the\n # colour space\n tif_color_channels = tags[\"SamplesPerPixel\"].value\n if Image.COLOR_SPACE_COMPONENTS[color_space] != tif_color_channels:\n raise ValidationError(\"Image contains invalid amount of channels.\")\n\n try:\n image_width = tags[\"ImageWidth\"].value\n image_height = tags[\"ImageLength\"].value\n except KeyError:\n raise ValidationError(\"Missing tags in tiff file\")\n\n return GrandChallengeTiffFileTags(\n image_width=image_width,\n image_height=image_height,\n color_space=color_space,\n resolution_levels=resolution_levels,\n )\n\n\ndef get_color_space(color_space_string) -> Image.COLOR_SPACES:\n color_space_string = color_space_string.split(\".\")[1].upper()\n\n if color_space_string == \"MINISBLACK\":\n color_space = Image.COLOR_SPACE_GRAY\n else:\n try:\n color_space = dict(Image.COLOR_SPACES)[color_space_string]\n except KeyError:\n raise ValidationError(\"Invalid color space\")\n\n return color_space\n\n\ndef image_builder_tiff(path: Path) -> ImageBuilderResult:\n new_images = []\n new_image_files = []\n consumed_files = set()\n invalid_file_errors = {}\n new_folder_upload = []\n\n for file_path in path.iterdir():\n pk = uuid4()\n dzi_output = None\n try:\n tiff_file = load_tiff_file(path=file_path)\n except ValidationError as e:\n invalid_file_errors[file_path.name] = str(e)\n continue\n\n try:\n dzi_output = 
create_dzi_images(tiff_file=tiff_file, pk=pk)\n except ValidationError as e:\n invalid_file_errors[file_path.name] = str(e)\n\n image = create_tiff_image_entry(tiff_file=tiff_file, pk=pk)\n\n temp_file = TemporaryFile()\n with open(tiff_file.path.absolute(), \"rb\") as open_file:\n buffer = True\n while buffer:\n buffer = open_file.read(1024)\n temp_file.write(buffer)\n\n new_image_files.append(\n ImageFile(\n image=image,\n image_type=ImageFile.IMAGE_TYPE_TIFF,\n file=File(temp_file, name=f\"{image.pk}.tif\"),\n )\n )\n\n if dzi_output:\n temp_dzi_file = TemporaryFile()\n with open(dzi_output + \".dzi\", \"rb\") as open_file:\n buffer = True\n while buffer:\n buffer = open_file.read(1024)\n temp_dzi_file.write(buffer)\n\n new_image_files.append(\n ImageFile(\n image=image,\n image_type=ImageFile.IMAGE_TYPE_DZI,\n file=File(temp_dzi_file, name=f\"{image.pk}.dzi\"),\n )\n )\n\n dzi_folder_upload = FolderUpload(\n folder=dzi_output + \"_files\", image=image\n )\n new_folder_upload.append(dzi_folder_upload)\n\n new_images.append(image)\n consumed_files.add(tiff_file.path.name)\n\n return ImageBuilderResult(\n consumed_files=consumed_files,\n file_errors_map=invalid_file_errors,\n new_images=new_images,\n new_image_files=new_image_files,\n new_folder_upload=new_folder_upload,\n )\n\n\ndef create_tiff_image_entry(*, tiff_file: GrandChallengeTiffFile, pk) -> Image:\n # Builds a new Image model item\n return Image(\n pk=pk,\n name=tiff_file.path.name,\n width=tiff_file.tags.image_width,\n height=tiff_file.tags.image_height,\n depth=1,\n resolution_levels=tiff_file.tags.resolution_levels,\n color_space=tiff_file.tags.color_space,\n eye_choice=Image.EYE_UNKNOWN,\n stereoscopic_choice=Image.STEREOSCOPIC_UNKNOWN,\n field_of_view=Image.FOV_UNKNOWN,\n )\n\n\ndef create_dzi_images(*, tiff_file: GrandChallengeTiffFile, pk) -> str:\n # Creates a dzi file(out.dzi) and corresponding tiles in folder {pk}_files\n dzi_output = str(tiff_file.path.parent / str(pk))\n try:\n image = pyvips.Image.new_from_file(\n str(tiff_file.path.absolute()), access=\"sequential\"\n )\n\n pyvips.Image.dzsave(\n image, dzi_output, tile_size=settings.DZI_TILE_SIZE\n )\n except Exception as e:\n raise ValidationError(\"Image can't be converted to dzi: \" + str(e))\n\n return dzi_output\n", "path": "app/grandchallenge/cases/image_builders/tiff.py"}]} | 2,920 | 585 |
gh_patches_debug_8634 | rasdani/github-patches | git_diff | nipy__nipype-3216 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unknown version string with Freesurfer 7
### Summary
After updating to FreeSurfer 7, `import nipype.interfaces.freesurfer` raises a `RuntimeError: Unknown version string` error.
### Actual behavior
```
$ python
Python 3.8.2 (default, May 7 2020, 20:00:49)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from nipype.interfaces.freesurfer import Info
200518-18:18:10,4 nipype.utils WARNING:
Could not check for version updates:
Connection to server could not be made
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/envs/neuromet_dev/lib/python3.8/site-packages/nipype/interfaces/freesurfer/__init__.py", line 7, in <module>
from .preprocess import (
File "/opt/anaconda3/envs/neuromet_dev/lib/python3.8/site-packages/nipype/interfaces/freesurfer/preprocess.py", line 36, in <module>
FSVersion = Info.looseversion().vstring
File "/opt/anaconda3/envs/neuromet_dev/lib/python3.8/site-packages/nipype/interfaces/freesurfer/base.py", line 80, in looseversion
raise RuntimeError("Unknown version string: " + ver)
RuntimeError: Unknown version string: freesurfer-linux-centos7_x86_64-7.1.0-20200511-813297b
```
### How to replicate the behavior
1. Install FreeSurfer 7 as described [here](https://surfer.nmr.mgh.harvard.edu/fswiki//FS7_linux) with FreeSurfer from [here](https://surfer.nmr.mgh.harvard.edu/fswiki/rel7downloads) (file: `freesurfer-linux-centos7_x86_64-7.1.0.tar.gz`).
2. Create a fresh conda environment with
`conda create -n $name python=3.8 nipype`
3. Open Python and try to import `nipype.interfaces.freesurfer`
### Platform details:
```
$cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS"
```
<!-- Please run the following code from your shell and place the output between the triple ticks, below.
python -c "import nipype; from pprint import pprint; pprint(nipype.get_info())"
-->
```
$python -c "import nipype; from pprint import pprint; pprint(nipype.get_info())"
200518-18:29:34,82 nipype.utils WARNING:
Could not check for version updates:
Connection to server could not be made
{'commit_hash': '%h',
'commit_source': 'archive substitution',
'networkx_version': '2.4',
'nibabel_version': '3.1.0',
'nipype_version': '1.5.0-rc1',
'numpy_version': '1.18.1',
'pkg_path': '/opt/anaconda3/envs/neuromet_dev/lib/python3.8/site-packages/nipype',
'scipy_version': '1.4.1',
'sys_executable': '/opt/anaconda3/envs/neuromet_dev/bin/python',
'sys_platform': 'linux',
'sys_version': '3.8.2 (default, May 7 2020, 20:00:49) \n[GCC 7.3.0]',
'traits_version': '5.2.0'}
```
### Execution environment
Choose one
- My python environment outside container
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nipype/interfaces/freesurfer/base.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
3 # vi: set ft=python sts=4 ts=4 sw=4 et:
4 """The freesurfer module provides basic functions for interfacing with
5 freesurfer tools.
6
7 Currently these tools are supported:
8
9 * Dicom2Nifti: using mri_convert
10 * Resample: using mri_convert
11
12 Examples
13 --------
14 See the docstrings for the individual classes for 'working' examples.
15
16 """
17 import os
18
19 from ... import LooseVersion
20 from ...utils.filemanip import fname_presuffix
21 from ..base import (
22 CommandLine,
23 Directory,
24 CommandLineInputSpec,
25 isdefined,
26 traits,
27 TraitedSpec,
28 File,
29 PackageInfo,
30 )
31
32 __docformat__ = "restructuredtext"
33
34
35 class Info(PackageInfo):
36 """ Freesurfer subject directory and version information.
37
38 Examples
39 --------
40
41 >>> from nipype.interfaces.freesurfer import Info
42 >>> Info.version() # doctest: +SKIP
43 >>> Info.subjectsdir() # doctest: +SKIP
44
45 """
46
47 if os.getenv("FREESURFER_HOME"):
48 version_file = os.path.join(os.getenv("FREESURFER_HOME"), "build-stamp.txt")
49
50 @staticmethod
51 def parse_version(raw_info):
52 return raw_info.splitlines()[0]
53
54 @classmethod
55 def looseversion(cls):
56 """ Return a comparable version object
57
58 If no version found, use LooseVersion('0.0.0')
59 """
60 ver = cls.version()
61 if ver is None:
62 return LooseVersion("0.0.0")
63
64 vinfo = ver.rstrip().split("-")
65 try:
66 int(vinfo[-1], 16)
67 except ValueError:
68 githash = ""
69 else:
70 githash = "." + vinfo[-1]
71
72 # As of FreeSurfer v6.0.0, the final component is a githash
73 if githash:
74 if vinfo[3] == "dev":
75 # This will need updating when v6.0.1 comes out
76 vstr = "6.0.0-dev" + githash
77 elif vinfo[5][0] == "v":
78 vstr = vinfo[5][1:]
79 else:
80 raise RuntimeError("Unknown version string: " + ver)
81 # Retain pre-6.0.0 heuristics
82 elif "dev" in ver:
83 vstr = vinfo[-1] + "-dev"
84 else:
85 vstr = ver.rstrip().split("-v")[-1]
86
87 return LooseVersion(vstr)
88
89 @classmethod
90 def subjectsdir(cls):
91 """Check the global SUBJECTS_DIR
92
93 Parameters
94 ----------
95
96 subjects_dir : string
97 The system defined subjects directory
98
99 Returns
100 -------
101
102 subject_dir : string
103 Represents the current environment setting of SUBJECTS_DIR
104
105 """
106 if cls.version():
107 return os.environ["SUBJECTS_DIR"]
108 return None
109
110
111 class FSTraitedSpec(CommandLineInputSpec):
112 subjects_dir = Directory(exists=True, desc="subjects directory")
113
114
115 class FSCommand(CommandLine):
116 """General support for FreeSurfer commands.
117
118 Every FS command accepts 'subjects_dir' input.
119 """
120
121 input_spec = FSTraitedSpec
122
123 _subjects_dir = None
124
125 def __init__(self, **inputs):
126 super(FSCommand, self).__init__(**inputs)
127 self.inputs.on_trait_change(self._subjects_dir_update, "subjects_dir")
128 if not self._subjects_dir:
129 self._subjects_dir = Info.subjectsdir()
130 if not isdefined(self.inputs.subjects_dir) and self._subjects_dir:
131 self.inputs.subjects_dir = self._subjects_dir
132 self._subjects_dir_update()
133
134 def _subjects_dir_update(self):
135 if self.inputs.subjects_dir:
136 self.inputs.environ.update({"SUBJECTS_DIR": self.inputs.subjects_dir})
137
138 @classmethod
139 def set_default_subjects_dir(cls, subjects_dir):
140 cls._subjects_dir = subjects_dir
141
142 def run(self, **inputs):
143 if "subjects_dir" in inputs:
144 self.inputs.subjects_dir = inputs["subjects_dir"]
145 self._subjects_dir_update()
146 return super(FSCommand, self).run(**inputs)
147
148 def _gen_fname(self, basename, fname=None, cwd=None, suffix="_fs", use_ext=True):
149 """Define a generic mapping for a single outfile
150
151 The filename is potentially autogenerated by suffixing inputs.infile
152
153 Parameters
154 ----------
155 basename : string (required)
156 filename to base the new filename on
157 fname : string
158 if not None, just use this fname
159 cwd : string
160 prefix paths with cwd, otherwise os.getcwd()
161 suffix : string
162 default suffix
163 """
164 if basename == "":
165 msg = "Unable to generate filename for command %s. " % self.cmd
166 msg += "basename is not set!"
167 raise ValueError(msg)
168 if cwd is None:
169 cwd = os.getcwd()
170 fname = fname_presuffix(basename, suffix=suffix, use_ext=use_ext, newpath=cwd)
171 return fname
172
173 @property
174 def version(self):
175 ver = Info.looseversion()
176 if ver > LooseVersion("0.0.0"):
177 return ver.vstring
178
179
180 class FSSurfaceCommand(FSCommand):
181 """Support for FreeSurfer surface-related functions.
182 For some functions, if the output file is not specified starting with 'lh.'
183 or 'rh.', FreeSurfer prepends the prefix from the input file to the output
184 filename. Output out_file must be adjusted to accommodate this. By
185 including the full path in the filename, we can also avoid this behavior.
186 """
187
188 @staticmethod
189 def _associated_file(in_file, out_name):
190 """Based on MRIsBuildFileName in freesurfer/utils/mrisurf.c
191
192 If no path information is provided for out_name, use path and
193 hemisphere (if also unspecified) from in_file to determine the path
194 of the associated file.
195 Use in_file prefix to indicate hemisphere for out_name, rather than
196 inspecting the surface data structure.
197 """
198 path, base = os.path.split(out_name)
199 if path == "":
200 path, in_file = os.path.split(in_file)
201 hemis = ("lh.", "rh.")
202 if in_file[:3] in hemis and base[:3] not in hemis:
203 base = in_file[:3] + base
204 return os.path.join(path, base)
205
206
207 class FSScriptCommand(FSCommand):
208 """ Support for Freesurfer script commands with log terminal_output
209 """
210
211 _terminal_output = "file"
212 _always_run = False
213
214 def _list_outputs(self):
215 outputs = self._outputs().get()
216 outputs["log_file"] = os.path.abspath("output.nipype")
217 return outputs
218
219
220 class FSScriptOutputSpec(TraitedSpec):
221 log_file = File(
222 "output.nipype", usedefault=True, exists=True, desc="The output log"
223 )
224
225
226 class FSTraitedSpecOpenMP(FSTraitedSpec):
227 num_threads = traits.Int(desc="allows for specifying more threads")
228
229
230 class FSCommandOpenMP(FSCommand):
231 """Support for FS commands that utilize OpenMP
232
233 Sets the environment variable 'OMP_NUM_THREADS' to the number
234 of threads specified by the input num_threads.
235 """
236
237 input_spec = FSTraitedSpecOpenMP
238
239 _num_threads = None
240
241 def __init__(self, **inputs):
242 super(FSCommandOpenMP, self).__init__(**inputs)
243 self.inputs.on_trait_change(self._num_threads_update, "num_threads")
244 if not self._num_threads:
245 self._num_threads = os.environ.get("OMP_NUM_THREADS", None)
246 if not self._num_threads:
247 self._num_threads = os.environ.get("NSLOTS", None)
248 if not isdefined(self.inputs.num_threads) and self._num_threads:
249 self.inputs.num_threads = int(self._num_threads)
250 self._num_threads_update()
251
252 def _num_threads_update(self):
253 if self.inputs.num_threads:
254 self.inputs.environ.update(
255 {"OMP_NUM_THREADS": str(self.inputs.num_threads)}
256 )
257
258 def run(self, **inputs):
259 if "num_threads" in inputs:
260 self.inputs.num_threads = inputs["num_threads"]
261 self._num_threads_update()
262 return super(FSCommandOpenMP, self).run(**inputs)
263
264
265 def no_freesurfer():
266 """Checks if FreeSurfer is NOT installed
267 used with skipif to skip tests that will
268 fail if FreeSurfer is not installed"""
269
270 if Info.version() is None:
271 return True
272 else:
273 return False
274
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nipype/interfaces/freesurfer/base.py b/nipype/interfaces/freesurfer/base.py
--- a/nipype/interfaces/freesurfer/base.py
+++ b/nipype/interfaces/freesurfer/base.py
@@ -76,6 +76,9 @@
vstr = "6.0.0-dev" + githash
elif vinfo[5][0] == "v":
vstr = vinfo[5][1:]
+ elif len([1 for val in vinfo[3] if val == '.']) == 2:
+ "version string: freesurfer-linux-centos7_x86_64-7.1.0-20200511-813297b"
+ vstr = vinfo[3]
else:
raise RuntimeError("Unknown version string: " + ver)
# Retain pre-6.0.0 heuristics
| {"golden_diff": "diff --git a/nipype/interfaces/freesurfer/base.py b/nipype/interfaces/freesurfer/base.py\n--- a/nipype/interfaces/freesurfer/base.py\n+++ b/nipype/interfaces/freesurfer/base.py\n@@ -76,6 +76,9 @@\n vstr = \"6.0.0-dev\" + githash\n elif vinfo[5][0] == \"v\":\n vstr = vinfo[5][1:]\n+ elif len([1 for val in vinfo[3] if val == '.']) == 2:\n+ \"version string: freesurfer-linux-centos7_x86_64-7.1.0-20200511-813297b\"\n+ vstr = vinfo[3]\n else:\n raise RuntimeError(\"Unknown version string: \" + ver)\n # Retain pre-6.0.0 heuristics\n", "issue": "Unknown version string with Freesurfer 7\n### Summary\r\n\r\nAfter update to freesurfer 7 `import nipype.interfaces.freesurfer` returns `RuntimeError: Unknown version string` error.\r\n\r\n### Actual behavior\r\n```\r\n$ python\r\nPython 3.8.2 (default, May 7 2020, 20:00:49) \r\n[GCC 7.3.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from nipype.interfaces.freesurfer import Info\r\n200518-18:18:10,4 nipype.utils WARNING:\r\n\t Could not check for version updates: \r\nConnection to server could not be made\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/opt/anaconda3/envs/neuromet_dev/lib/python3.8/site-packages/nipype/interfaces/freesurfer/__init__.py\", line 7, in <module>\r\n from .preprocess import (\r\n File \"/opt/anaconda3/envs/neuromet_dev/lib/python3.8/site-packages/nipype/interfaces/freesurfer/preprocess.py\", line 36, in <module>\r\n FSVersion = Info.looseversion().vstring\r\n File \"/opt/anaconda3/envs/neuromet_dev/lib/python3.8/site-packages/nipype/interfaces/freesurfer/base.py\", line 80, in looseversion\r\n raise RuntimeError(\"Unknown version string: \" + ver)\r\nRuntimeError: Unknown version string: freesurfer-linux-centos7_x86_64-7.1.0-20200511-813297b\r\n```\r\n\r\n### How to replicate the behavior\r\n\r\n1. Install Freesurfer 7 as described [here](https://surfer.nmr.mgh.harvard.edu/fswiki//FS7_linux) with Freesurfer from [here](https://surfer.nmr.mgh.harvard.edu/fswiki/rel7downloads) (file: `freesurfer-linux-centos7_x86_64-7.1.0.tar.gz`.\r\n2. create a fresh new conda environment with \r\n`conda create -n $name python=3.8 nipype`\r\n3. 
open python and try to import 'nipype.interfaces.freesurfer`\r\n\r\n### Platform details:\r\n```\r\n$cat /etc/lsb-release \r\nDISTRIB_ID=Ubuntu\r\nDISTRIB_RELEASE=16.04\r\nDISTRIB_CODENAME=xenial\r\nDISTRIB_DESCRIPTION=\"Ubuntu 16.04.6 LTS\"\r\n```\r\n\r\n<!-- Please run the following code from your shell and place the output between the triple ticks, below.\r\npython -c \"import nipype; from pprint import pprint; pprint(nipype.get_info())\"\r\n-->\r\n\r\n```\r\n$python -c \"import nipype; from pprint import pprint; pprint(nipype.get_info())\"\r\n200518-18:29:34,82 nipype.utils WARNING:\r\n\t Could not check for version updates: \r\nConnection to server could not be made\r\n{'commit_hash': '%h',\r\n 'commit_source': 'archive substitution',\r\n 'networkx_version': '2.4',\r\n 'nibabel_version': '3.1.0',\r\n 'nipype_version': '1.5.0-rc1',\r\n 'numpy_version': '1.18.1',\r\n 'pkg_path': '/opt/anaconda3/envs/neuromet_dev/lib/python3.8/site-packages/nipype',\r\n 'scipy_version': '1.4.1',\r\n 'sys_executable': '/opt/anaconda3/envs/neuromet_dev/bin/python',\r\n 'sys_platform': 'linux',\r\n 'sys_version': '3.8.2 (default, May 7 2020, 20:00:49) \\n[GCC 7.3.0]',\r\n 'traits_version': '5.2.0'}\r\n```\r\n\r\n### Execution environment\r\n\r\nChoose one\r\n- My python environment outside container\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"The freesurfer module provides basic functions for interfacing with\nfreesurfer tools.\n\nCurrently these tools are supported:\n\n * Dicom2Nifti: using mri_convert\n * Resample: using mri_convert\n\nExamples\n--------\nSee the docstrings for the individual classes for 'working' examples.\n\n\"\"\"\nimport os\n\nfrom ... 
import LooseVersion\nfrom ...utils.filemanip import fname_presuffix\nfrom ..base import (\n CommandLine,\n Directory,\n CommandLineInputSpec,\n isdefined,\n traits,\n TraitedSpec,\n File,\n PackageInfo,\n)\n\n__docformat__ = \"restructuredtext\"\n\n\nclass Info(PackageInfo):\n \"\"\" Freesurfer subject directory and version information.\n\n Examples\n --------\n\n >>> from nipype.interfaces.freesurfer import Info\n >>> Info.version() # doctest: +SKIP\n >>> Info.subjectsdir() # doctest: +SKIP\n\n \"\"\"\n\n if os.getenv(\"FREESURFER_HOME\"):\n version_file = os.path.join(os.getenv(\"FREESURFER_HOME\"), \"build-stamp.txt\")\n\n @staticmethod\n def parse_version(raw_info):\n return raw_info.splitlines()[0]\n\n @classmethod\n def looseversion(cls):\n \"\"\" Return a comparable version object\n\n If no version found, use LooseVersion('0.0.0')\n \"\"\"\n ver = cls.version()\n if ver is None:\n return LooseVersion(\"0.0.0\")\n\n vinfo = ver.rstrip().split(\"-\")\n try:\n int(vinfo[-1], 16)\n except ValueError:\n githash = \"\"\n else:\n githash = \".\" + vinfo[-1]\n\n # As of FreeSurfer v6.0.0, the final component is a githash\n if githash:\n if vinfo[3] == \"dev\":\n # This will need updating when v6.0.1 comes out\n vstr = \"6.0.0-dev\" + githash\n elif vinfo[5][0] == \"v\":\n vstr = vinfo[5][1:]\n else:\n raise RuntimeError(\"Unknown version string: \" + ver)\n # Retain pre-6.0.0 heuristics\n elif \"dev\" in ver:\n vstr = vinfo[-1] + \"-dev\"\n else:\n vstr = ver.rstrip().split(\"-v\")[-1]\n\n return LooseVersion(vstr)\n\n @classmethod\n def subjectsdir(cls):\n \"\"\"Check the global SUBJECTS_DIR\n\n Parameters\n ----------\n\n subjects_dir : string\n The system defined subjects directory\n\n Returns\n -------\n\n subject_dir : string\n Represents the current environment setting of SUBJECTS_DIR\n\n \"\"\"\n if cls.version():\n return os.environ[\"SUBJECTS_DIR\"]\n return None\n\n\nclass FSTraitedSpec(CommandLineInputSpec):\n subjects_dir = Directory(exists=True, desc=\"subjects directory\")\n\n\nclass FSCommand(CommandLine):\n \"\"\"General support for FreeSurfer commands.\n\n Every FS command accepts 'subjects_dir' input.\n \"\"\"\n\n input_spec = FSTraitedSpec\n\n _subjects_dir = None\n\n def __init__(self, **inputs):\n super(FSCommand, self).__init__(**inputs)\n self.inputs.on_trait_change(self._subjects_dir_update, \"subjects_dir\")\n if not self._subjects_dir:\n self._subjects_dir = Info.subjectsdir()\n if not isdefined(self.inputs.subjects_dir) and self._subjects_dir:\n self.inputs.subjects_dir = self._subjects_dir\n self._subjects_dir_update()\n\n def _subjects_dir_update(self):\n if self.inputs.subjects_dir:\n self.inputs.environ.update({\"SUBJECTS_DIR\": self.inputs.subjects_dir})\n\n @classmethod\n def set_default_subjects_dir(cls, subjects_dir):\n cls._subjects_dir = subjects_dir\n\n def run(self, **inputs):\n if \"subjects_dir\" in inputs:\n self.inputs.subjects_dir = inputs[\"subjects_dir\"]\n self._subjects_dir_update()\n return super(FSCommand, self).run(**inputs)\n\n def _gen_fname(self, basename, fname=None, cwd=None, suffix=\"_fs\", use_ext=True):\n \"\"\"Define a generic mapping for a single outfile\n\n The filename is potentially autogenerated by suffixing inputs.infile\n\n Parameters\n ----------\n basename : string (required)\n filename to base the new filename on\n fname : string\n if not None, just use this fname\n cwd : string\n prefix paths with cwd, otherwise os.getcwd()\n suffix : string\n default suffix\n \"\"\"\n if basename == \"\":\n msg = \"Unable to generate 
filename for command %s. \" % self.cmd\n msg += \"basename is not set!\"\n raise ValueError(msg)\n if cwd is None:\n cwd = os.getcwd()\n fname = fname_presuffix(basename, suffix=suffix, use_ext=use_ext, newpath=cwd)\n return fname\n\n @property\n def version(self):\n ver = Info.looseversion()\n if ver > LooseVersion(\"0.0.0\"):\n return ver.vstring\n\n\nclass FSSurfaceCommand(FSCommand):\n \"\"\"Support for FreeSurfer surface-related functions.\n For some functions, if the output file is not specified starting with 'lh.'\n or 'rh.', FreeSurfer prepends the prefix from the input file to the output\n filename. Output out_file must be adjusted to accommodate this. By\n including the full path in the filename, we can also avoid this behavior.\n \"\"\"\n\n @staticmethod\n def _associated_file(in_file, out_name):\n \"\"\"Based on MRIsBuildFileName in freesurfer/utils/mrisurf.c\n\n If no path information is provided for out_name, use path and\n hemisphere (if also unspecified) from in_file to determine the path\n of the associated file.\n Use in_file prefix to indicate hemisphere for out_name, rather than\n inspecting the surface data structure.\n \"\"\"\n path, base = os.path.split(out_name)\n if path == \"\":\n path, in_file = os.path.split(in_file)\n hemis = (\"lh.\", \"rh.\")\n if in_file[:3] in hemis and base[:3] not in hemis:\n base = in_file[:3] + base\n return os.path.join(path, base)\n\n\nclass FSScriptCommand(FSCommand):\n \"\"\" Support for Freesurfer script commands with log terminal_output\n \"\"\"\n\n _terminal_output = \"file\"\n _always_run = False\n\n def _list_outputs(self):\n outputs = self._outputs().get()\n outputs[\"log_file\"] = os.path.abspath(\"output.nipype\")\n return outputs\n\n\nclass FSScriptOutputSpec(TraitedSpec):\n log_file = File(\n \"output.nipype\", usedefault=True, exists=True, desc=\"The output log\"\n )\n\n\nclass FSTraitedSpecOpenMP(FSTraitedSpec):\n num_threads = traits.Int(desc=\"allows for specifying more threads\")\n\n\nclass FSCommandOpenMP(FSCommand):\n \"\"\"Support for FS commands that utilize OpenMP\n\n Sets the environment variable 'OMP_NUM_THREADS' to the number\n of threads specified by the input num_threads.\n \"\"\"\n\n input_spec = FSTraitedSpecOpenMP\n\n _num_threads = None\n\n def __init__(self, **inputs):\n super(FSCommandOpenMP, self).__init__(**inputs)\n self.inputs.on_trait_change(self._num_threads_update, \"num_threads\")\n if not self._num_threads:\n self._num_threads = os.environ.get(\"OMP_NUM_THREADS\", None)\n if not self._num_threads:\n self._num_threads = os.environ.get(\"NSLOTS\", None)\n if not isdefined(self.inputs.num_threads) and self._num_threads:\n self.inputs.num_threads = int(self._num_threads)\n self._num_threads_update()\n\n def _num_threads_update(self):\n if self.inputs.num_threads:\n self.inputs.environ.update(\n {\"OMP_NUM_THREADS\": str(self.inputs.num_threads)}\n )\n\n def run(self, **inputs):\n if \"num_threads\" in inputs:\n self.inputs.num_threads = inputs[\"num_threads\"]\n self._num_threads_update()\n return super(FSCommandOpenMP, self).run(**inputs)\n\n\ndef no_freesurfer():\n \"\"\"Checks if FreeSurfer is NOT installed\n used with skipif to skip tests that will\n fail if FreeSurfer is not installed\"\"\"\n\n if Info.version() is None:\n return True\n else:\n return False\n", "path": "nipype/interfaces/freesurfer/base.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"The 
freesurfer module provides basic functions for interfacing with\nfreesurfer tools.\n\nCurrently these tools are supported:\n\n * Dicom2Nifti: using mri_convert\n * Resample: using mri_convert\n\nExamples\n--------\nSee the docstrings for the individual classes for 'working' examples.\n\n\"\"\"\nimport os\n\nfrom ... import LooseVersion\nfrom ...utils.filemanip import fname_presuffix\nfrom ..base import (\n CommandLine,\n Directory,\n CommandLineInputSpec,\n isdefined,\n traits,\n TraitedSpec,\n File,\n PackageInfo,\n)\n\n__docformat__ = \"restructuredtext\"\n\n\nclass Info(PackageInfo):\n \"\"\" Freesurfer subject directory and version information.\n\n Examples\n --------\n\n >>> from nipype.interfaces.freesurfer import Info\n >>> Info.version() # doctest: +SKIP\n >>> Info.subjectsdir() # doctest: +SKIP\n\n \"\"\"\n\n if os.getenv(\"FREESURFER_HOME\"):\n version_file = os.path.join(os.getenv(\"FREESURFER_HOME\"), \"build-stamp.txt\")\n\n @staticmethod\n def parse_version(raw_info):\n return raw_info.splitlines()[0]\n\n @classmethod\n def looseversion(cls):\n \"\"\" Return a comparable version object\n\n If no version found, use LooseVersion('0.0.0')\n \"\"\"\n ver = cls.version()\n if ver is None:\n return LooseVersion(\"0.0.0\")\n\n vinfo = ver.rstrip().split(\"-\")\n try:\n int(vinfo[-1], 16)\n except ValueError:\n githash = \"\"\n else:\n githash = \".\" + vinfo[-1]\n\n # As of FreeSurfer v6.0.0, the final component is a githash\n if githash:\n if vinfo[3] == \"dev\":\n # This will need updating when v6.0.1 comes out\n vstr = \"6.0.0-dev\" + githash\n elif vinfo[5][0] == \"v\":\n vstr = vinfo[5][1:]\n elif len([1 for val in vinfo[3] if val == '.']) == 2:\n \"version string: freesurfer-linux-centos7_x86_64-7.1.0-20200511-813297b\"\n vstr = vinfo[3]\n else:\n raise RuntimeError(\"Unknown version string: \" + ver)\n # Retain pre-6.0.0 heuristics\n elif \"dev\" in ver:\n vstr = vinfo[-1] + \"-dev\"\n else:\n vstr = ver.rstrip().split(\"-v\")[-1]\n\n return LooseVersion(vstr)\n\n @classmethod\n def subjectsdir(cls):\n \"\"\"Check the global SUBJECTS_DIR\n\n Parameters\n ----------\n\n subjects_dir : string\n The system defined subjects directory\n\n Returns\n -------\n\n subject_dir : string\n Represents the current environment setting of SUBJECTS_DIR\n\n \"\"\"\n if cls.version():\n return os.environ[\"SUBJECTS_DIR\"]\n return None\n\n\nclass FSTraitedSpec(CommandLineInputSpec):\n subjects_dir = Directory(exists=True, desc=\"subjects directory\")\n\n\nclass FSCommand(CommandLine):\n \"\"\"General support for FreeSurfer commands.\n\n Every FS command accepts 'subjects_dir' input.\n \"\"\"\n\n input_spec = FSTraitedSpec\n\n _subjects_dir = None\n\n def __init__(self, **inputs):\n super(FSCommand, self).__init__(**inputs)\n self.inputs.on_trait_change(self._subjects_dir_update, \"subjects_dir\")\n if not self._subjects_dir:\n self._subjects_dir = Info.subjectsdir()\n if not isdefined(self.inputs.subjects_dir) and self._subjects_dir:\n self.inputs.subjects_dir = self._subjects_dir\n self._subjects_dir_update()\n\n def _subjects_dir_update(self):\n if self.inputs.subjects_dir:\n self.inputs.environ.update({\"SUBJECTS_DIR\": self.inputs.subjects_dir})\n\n @classmethod\n def set_default_subjects_dir(cls, subjects_dir):\n cls._subjects_dir = subjects_dir\n\n def run(self, **inputs):\n if \"subjects_dir\" in inputs:\n self.inputs.subjects_dir = inputs[\"subjects_dir\"]\n self._subjects_dir_update()\n return super(FSCommand, self).run(**inputs)\n\n def _gen_fname(self, basename, fname=None, cwd=None, 
suffix=\"_fs\", use_ext=True):\n \"\"\"Define a generic mapping for a single outfile\n\n The filename is potentially autogenerated by suffixing inputs.infile\n\n Parameters\n ----------\n basename : string (required)\n filename to base the new filename on\n fname : string\n if not None, just use this fname\n cwd : string\n prefix paths with cwd, otherwise os.getcwd()\n suffix : string\n default suffix\n \"\"\"\n if basename == \"\":\n msg = \"Unable to generate filename for command %s. \" % self.cmd\n msg += \"basename is not set!\"\n raise ValueError(msg)\n if cwd is None:\n cwd = os.getcwd()\n fname = fname_presuffix(basename, suffix=suffix, use_ext=use_ext, newpath=cwd)\n return fname\n\n @property\n def version(self):\n ver = Info.looseversion()\n if ver > LooseVersion(\"0.0.0\"):\n return ver.vstring\n\n\nclass FSSurfaceCommand(FSCommand):\n \"\"\"Support for FreeSurfer surface-related functions.\n For some functions, if the output file is not specified starting with 'lh.'\n or 'rh.', FreeSurfer prepends the prefix from the input file to the output\n filename. Output out_file must be adjusted to accommodate this. By\n including the full path in the filename, we can also avoid this behavior.\n \"\"\"\n\n @staticmethod\n def _associated_file(in_file, out_name):\n \"\"\"Based on MRIsBuildFileName in freesurfer/utils/mrisurf.c\n\n If no path information is provided for out_name, use path and\n hemisphere (if also unspecified) from in_file to determine the path\n of the associated file.\n Use in_file prefix to indicate hemisphere for out_name, rather than\n inspecting the surface data structure.\n \"\"\"\n path, base = os.path.split(out_name)\n if path == \"\":\n path, in_file = os.path.split(in_file)\n hemis = (\"lh.\", \"rh.\")\n if in_file[:3] in hemis and base[:3] not in hemis:\n base = in_file[:3] + base\n return os.path.join(path, base)\n\n\nclass FSScriptCommand(FSCommand):\n \"\"\" Support for Freesurfer script commands with log terminal_output\n \"\"\"\n\n _terminal_output = \"file\"\n _always_run = False\n\n def _list_outputs(self):\n outputs = self._outputs().get()\n outputs[\"log_file\"] = os.path.abspath(\"output.nipype\")\n return outputs\n\n\nclass FSScriptOutputSpec(TraitedSpec):\n log_file = File(\n \"output.nipype\", usedefault=True, exists=True, desc=\"The output log\"\n )\n\n\nclass FSTraitedSpecOpenMP(FSTraitedSpec):\n num_threads = traits.Int(desc=\"allows for specifying more threads\")\n\n\nclass FSCommandOpenMP(FSCommand):\n \"\"\"Support for FS commands that utilize OpenMP\n\n Sets the environment variable 'OMP_NUM_THREADS' to the number\n of threads specified by the input num_threads.\n \"\"\"\n\n input_spec = FSTraitedSpecOpenMP\n\n _num_threads = None\n\n def __init__(self, **inputs):\n super(FSCommandOpenMP, self).__init__(**inputs)\n self.inputs.on_trait_change(self._num_threads_update, \"num_threads\")\n if not self._num_threads:\n self._num_threads = os.environ.get(\"OMP_NUM_THREADS\", None)\n if not self._num_threads:\n self._num_threads = os.environ.get(\"NSLOTS\", None)\n if not isdefined(self.inputs.num_threads) and self._num_threads:\n self.inputs.num_threads = int(self._num_threads)\n self._num_threads_update()\n\n def _num_threads_update(self):\n if self.inputs.num_threads:\n self.inputs.environ.update(\n {\"OMP_NUM_THREADS\": str(self.inputs.num_threads)}\n )\n\n def run(self, **inputs):\n if \"num_threads\" in inputs:\n self.inputs.num_threads = inputs[\"num_threads\"]\n self._num_threads_update()\n return super(FSCommandOpenMP, 
self).run(**inputs)\n\n\ndef no_freesurfer():\n \"\"\"Checks if FreeSurfer is NOT installed\n used with skipif to skip tests that will\n fail if FreeSurfer is not installed\"\"\"\n\n if Info.version() is None:\n return True\n else:\n return False\n", "path": "nipype/interfaces/freesurfer/base.py"}]} | 3,842 | 211 |
gh_patches_debug_32432 | rasdani/github-patches | git_diff | optuna__optuna-3087 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add missing sampler and pruner classes to a tutorial page
<!-- Please write a clear and concise description of what content in https://optuna.readthedocs.io/ is an issue. -->
The [Efficient Optimization Algorithms](https://optuna.readthedocs.io/en/latest/tutorial/10_key_features/003_efficient_optimization_algorithms.html) tutorial provides lists of the samplers and pruners implemented in Optuna. However, a few classes are missing from those lists. It would be great for the tutorial pages to cover them comprehensively for Optuna users.
More concretely, please add the following entries to the tutorial page.
## Samplers
- `optuna.samplers.PartialFixedSampler`
- `optuna.samplers.NSGAIISampler`
## Pruners
- `optuna.pruners.PatientPruner`
- `optuna.pruners.PercentilePruner`
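For reference, a rough sketch of how these classes could be shown in the same style as the tutorial's existing snippets (the constructor arguments below are only illustrative, not a proposal for the exact wording):

```python
import optuna

# Samplers currently missing from the tutorial list
study = optuna.create_study(
    directions=["minimize", "minimize"],  # NSGA-II targets multi-objective studies
    sampler=optuna.samplers.NSGAIISampler(),
)
print(f"Sampler is {study.sampler.__class__.__name__}")

study = optuna.create_study(
    sampler=optuna.samplers.PartialFixedSampler(
        {"x": 0.5}, optuna.samplers.TPESampler()  # fix "x", sample the rest with TPE
    )
)
print(f"Sampler is {study.sampler.__class__.__name__}")

# Pruners currently missing from the tutorial list
study = optuna.create_study(
    pruner=optuna.pruners.PatientPruner(optuna.pruners.MedianPruner(), patience=1)
)
study = optuna.create_study(pruner=optuna.pruners.PercentilePruner(25.0))
```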
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tutorial/10_key_features/003_efficient_optimization_algorithms.py`
Content:
```
1 """
2 .. _pruning:
3
4 Efficient Optimization Algorithms
5 =================================
6
7 Optuna enables efficient hyperparameter optimization by
8 adopting state-of-the-art algorithms for sampling hyperparameters and
9 pruning efficiently unpromising trials.
10
11 Sampling Algorithms
12 -------------------
13
14 Samplers basically continually narrow down the search space using the records of suggested parameter values and evaluated objective values,
15 leading to an optimal search space which giving off parameters leading to better objective values.
16 More detailed explanation of how samplers suggest parameters is in :class:`optuna.samplers.BaseSampler`.
17
18 Optuna provides the following sampling algorithms:
19
20 - Tree-structured Parzen Estimator algorithm implemented in :class:`optuna.samplers.TPESampler`
21
22 - CMA-ES based algorithm implemented in :class:`optuna.samplers.CmaEsSampler`
23
24 - Grid Search implemented in :class:`optuna.samplers.GridSampler`
25
26 - Random Search implemented in :class:`optuna.samplers.RandomSampler`
27
28 The default sampler is :class:`optuna.samplers.TPESampler`.
29
30 Switching Samplers
31 ------------------
32
33 """
34
35 import optuna
36
37
38 ###################################################################################################
39 # By default, Optuna uses :class:`~optuna.samplers.TPESampler` as follows.
40
41 study = optuna.create_study()
42 print(f"Sampler is {study.sampler.__class__.__name__}")
43
44 ###################################################################################################
45 # If you want to use different samplers for example :class:`~optuna.samplers.RandomSampler`
46 # and :class:`~optuna.samplers.CmaEsSampler`,
47
48 study = optuna.create_study(sampler=optuna.samplers.RandomSampler())
49 print(f"Sampler is {study.sampler.__class__.__name__}")
50
51 study = optuna.create_study(sampler=optuna.samplers.CmaEsSampler())
52 print(f"Sampler is {study.sampler.__class__.__name__}")
53
54
55 ###################################################################################################
56 # Pruning Algorithms
57 # ------------------
58 #
59 # ``Pruners`` automatically stop unpromising trials at the early stages of the training (a.k.a., automated early-stopping).
60 #
61 # Optuna provides the following pruning algorithms:
62 #
63 # - Asynchronous Successive Halving algorithm implemented in :class:`optuna.pruners.SuccessiveHalvingPruner`
64 #
65 # - Hyperband algorithm implemented in :class:`optuna.pruners.HyperbandPruner`
66 #
67 # - Median pruning algorithm implemented in :class:`optuna.pruners.MedianPruner`
68 #
69 # - Threshold pruning algorithm implemented in :class:`optuna.pruners.ThresholdPruner`
70 #
71 # We use :class:`optuna.pruners.MedianPruner` in most examples,
72 # though basically it is outperformed by :class:`optuna.pruners.SuccessiveHalvingPruner` and
73 # :class:`optuna.pruners.HyperbandPruner` as in `this benchmark result <https://github.com/optuna/optuna/wiki/Benchmarks-with-Kurobako>`_.
74 #
75 #
76 # Activating Pruners
77 # ------------------
78 # To turn on the pruning feature, you need to call :func:`~optuna.trial.Trial.report` and :func:`~optuna.trial.Trial.should_prune` after each step of the iterative training.
79 # :func:`~optuna.trial.Trial.report` periodically monitors the intermediate objective values.
80 # :func:`~optuna.trial.Trial.should_prune` decides termination of the trial that does not meet a predefined condition.
81 #
82 # We would recommend using integration modules for major machine learning frameworks.
83 # Exclusive list is :mod:`optuna.integration` and usecases are available in `optuna/examples <https://github.com/optuna/optuna-examples/>`_.
84
85
86 import logging
87 import sys
88
89 import sklearn.datasets
90 import sklearn.linear_model
91 import sklearn.model_selection
92
93
94 def objective(trial):
95 iris = sklearn.datasets.load_iris()
96 classes = list(set(iris.target))
97 train_x, valid_x, train_y, valid_y = sklearn.model_selection.train_test_split(
98 iris.data, iris.target, test_size=0.25, random_state=0
99 )
100
101 alpha = trial.suggest_float("alpha", 1e-5, 1e-1, log=True)
102 clf = sklearn.linear_model.SGDClassifier(alpha=alpha)
103
104 for step in range(100):
105 clf.partial_fit(train_x, train_y, classes=classes)
106
107 # Report intermediate objective value.
108 intermediate_value = 1.0 - clf.score(valid_x, valid_y)
109 trial.report(intermediate_value, step)
110
111 # Handle pruning based on the intermediate value.
112 if trial.should_prune():
113 raise optuna.TrialPruned()
114
115 return 1.0 - clf.score(valid_x, valid_y)
116
117
118 ###################################################################################################
119 # Set up the median stopping rule as the pruning condition.
120
121 # Add stream handler of stdout to show the messages
122 optuna.logging.get_logger("optuna").addHandler(logging.StreamHandler(sys.stdout))
123 study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
124 study.optimize(objective, n_trials=20)
125
126 ###################################################################################################
127 # As you can see, several trials were pruned (stopped) before they finished all of the iterations.
128 # The format of message is ``"Trial <Trial Number> pruned."``.
129
130 ###################################################################################################
131 # Which Sampler and Pruner Should be Used?
132 # ----------------------------------------
133 #
134 # From the benchmark results which are available at `optuna/optuna - wiki "Benchmarks with Kurobako" <https://github.com/optuna/optuna/wiki/Benchmarks-with-Kurobako>`_, at least for non-deep-learning tasks, we would say that
135 #
136 # * For :class:`optuna.samplers.RandomSampler`, :class:`optuna.pruners.MedianPruner` is the best.
137 # * For :class:`optuna.samplers.TPESampler`, :class:`optuna.pruners.HyperbandPruner` is the best.
138 #
139 # However, note that the benchmark does not cover deep learning.
140 # For deep learning tasks,
141 # consult the table below.
142 # This table is from the `Ozaki et al., Hyperparameter Optimization Methods: Overview and Characteristics, in IEICE Trans, Vol.J103-D No.9 pp.615-631, 2020 <https://doi.org/10.14923/transinfj.2019JDR0003>`_ paper,
143 # which is written in Japanese.
144 #
145 # +---------------------------+-----------------------------------------+---------------------------------------------------------------+
146 # | Parallel Compute Resource | Categorical/Conditional Hyperparameters | Recommended Algorithms |
147 # +===========================+=========================================+===============================================================+
148 # | Limited | No | TPE. GP-EI if search space is low-dimensional and continuous. |
149 # + +-----------------------------------------+---------------------------------------------------------------+
150 # | | Yes | TPE. GP-EI if search space is low-dimensional and continuous |
151 # +---------------------------+-----------------------------------------+---------------------------------------------------------------+
152 # | Sufficient | No | CMA-ES, Random Search |
153 # + +-----------------------------------------+---------------------------------------------------------------+
154 # | | Yes | Random Search or Genetic Algorithm |
155 # +---------------------------+-----------------------------------------+---------------------------------------------------------------+
156 #
157
158 ###################################################################################################
159 # Integration Modules for Pruning
160 # -------------------------------
161 # To implement the pruning mechanism in a much simpler form, Optuna provides integration modules for the following libraries.
162 #
163 # For the complete list of Optuna's integration modules, see :mod:`optuna.integration`.
164 #
165 # For example, :class:`~optuna.integration.XGBoostPruningCallback` introduces pruning without directly changing the logic of training iteration.
166 # (See also `example <https://github.com/optuna/optuna-examples/tree/main/xgboost/xgboost_integration.py>`_ for the entire script.)
167 #
168 # .. code-block:: python
169 #
170 # pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'validation-error')
171 # bst = xgb.train(param, dtrain, evals=[(dvalid, 'validation')], callbacks=[pruning_callback])
172
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tutorial/10_key_features/003_efficient_optimization_algorithms.py b/tutorial/10_key_features/003_efficient_optimization_algorithms.py
--- a/tutorial/10_key_features/003_efficient_optimization_algorithms.py
+++ b/tutorial/10_key_features/003_efficient_optimization_algorithms.py
@@ -17,13 +17,17 @@
Optuna provides the following sampling algorithms:
+- Grid Search implemented in :class:`optuna.samplers.GridSampler`
+
+- Random Search implemented in :class:`optuna.samplers.RandomSampler`
+
- Tree-structured Parzen Estimator algorithm implemented in :class:`optuna.samplers.TPESampler`
- CMA-ES based algorithm implemented in :class:`optuna.samplers.CmaEsSampler`
-- Grid Search implemented in :class:`optuna.samplers.GridSampler`
+- Algorithm to enable partial fixed parameters implemented in :class:`optuna.samplers.PartialFixedSampler`
-- Random Search implemented in :class:`optuna.samplers.RandomSampler`
+- Nondominated Sorting Genetic Algorithm II implemented in :class:`optuna.samplers.NSGAIISampler`
The default sampler is :class:`optuna.samplers.TPESampler`.
@@ -60,12 +64,18 @@
#
# Optuna provides the following pruning algorithms:
#
+# - Median pruning algorithm implemented in :class:`optuna.pruners.MedianPruner`
+#
+# - Non-pruning algorithm implementd in :class:`optuna.pruners.NopPruner`
+#
+# - Algorithm to operate pruner with tolerance implemented in :class:`optuna.pruners.PatientPruner`
+#
+# - Algorithm to prune specified percentile of trials implemented in :class:`optuna.pruners.PercentilePruner`
+#
# - Asynchronous Successive Halving algorithm implemented in :class:`optuna.pruners.SuccessiveHalvingPruner`
#
# - Hyperband algorithm implemented in :class:`optuna.pruners.HyperbandPruner`
#
-# - Median pruning algorithm implemented in :class:`optuna.pruners.MedianPruner`
-#
# - Threshold pruning algorithm implemented in :class:`optuna.pruners.ThresholdPruner`
#
# We use :class:`optuna.pruners.MedianPruner` in most examples,
| {"golden_diff": "diff --git a/tutorial/10_key_features/003_efficient_optimization_algorithms.py b/tutorial/10_key_features/003_efficient_optimization_algorithms.py\n--- a/tutorial/10_key_features/003_efficient_optimization_algorithms.py\n+++ b/tutorial/10_key_features/003_efficient_optimization_algorithms.py\n@@ -17,13 +17,17 @@\n \n Optuna provides the following sampling algorithms:\n \n+- Grid Search implemented in :class:`optuna.samplers.GridSampler`\n+\n+- Random Search implemented in :class:`optuna.samplers.RandomSampler`\n+\n - Tree-structured Parzen Estimator algorithm implemented in :class:`optuna.samplers.TPESampler`\n \n - CMA-ES based algorithm implemented in :class:`optuna.samplers.CmaEsSampler`\n \n-- Grid Search implemented in :class:`optuna.samplers.GridSampler`\n+- Algorithm to enable partial fixed parameters implemented in :class:`optuna.samplers.PartialFixedSampler`\n \n-- Random Search implemented in :class:`optuna.samplers.RandomSampler`\n+- Nondominated Sorting Genetic Algorithm II implemented in :class:`optuna.samplers.NSGAIISampler`\n \n The default sampler is :class:`optuna.samplers.TPESampler`.\n \n@@ -60,12 +64,18 @@\n #\n # Optuna provides the following pruning algorithms:\n #\n+# - Median pruning algorithm implemented in :class:`optuna.pruners.MedianPruner`\n+#\n+# - Non-pruning algorithm implementd in :class:`optuna.pruners.NopPruner`\n+#\n+# - Algorithm to operate pruner with tolerance implemented in :class:`optuna.pruners.PatientPruner`\n+#\n+# - Algorithm to prune specified percentile of trials implemented in :class:`optuna.pruners.PercentilePruner`\n+#\n # - Asynchronous Successive Halving algorithm implemented in :class:`optuna.pruners.SuccessiveHalvingPruner`\n #\n # - Hyperband algorithm implemented in :class:`optuna.pruners.HyperbandPruner`\n #\n-# - Median pruning algorithm implemented in :class:`optuna.pruners.MedianPruner`\n-#\n # - Threshold pruning algorithm implemented in :class:`optuna.pruners.ThresholdPruner`\n #\n # We use :class:`optuna.pruners.MedianPruner` in most examples,\n", "issue": "Add missing sampler and pruner classes to a tutorial page\n<!-- Please write a clear and concise description of what content in https://optuna.readthedocs.io/ is an issue. -->\r\n\r\n[Efficient Optimization Algorithms](https://optuna.readthedocs.io/en/latest/tutorial/10_key_features/003_efficient_optimization_algorithms.html) tutorial provides lists of samplers/pruners implemented in Optuna. However a few classes are not on the list. It would be great to mention them comprehensively from the tutorial pages for Optuna users. \r\n\r\n\r\nMore concretely, please add the following entries to the tutorial page.\r\n\r\n## Samplers\r\n\r\n- `optuna.samplers.PartialFixedSampler`\r\n- `optuna.samplers.NSGAIISampler`\r\n\r\n## Pruners\r\n- `optuna.pruners.PatientPruner`\r\n- `optuna.pruners.PercentilePruner`\r\n\n", "before_files": [{"content": "\"\"\"\n.. 
_pruning:\n\nEfficient Optimization Algorithms\n=================================\n\nOptuna enables efficient hyperparameter optimization by\nadopting state-of-the-art algorithms for sampling hyperparameters and\npruning efficiently unpromising trials.\n\nSampling Algorithms\n-------------------\n\nSamplers basically continually narrow down the search space using the records of suggested parameter values and evaluated objective values,\nleading to an optimal search space which giving off parameters leading to better objective values.\nMore detailed explanation of how samplers suggest parameters is in :class:`optuna.samplers.BaseSampler`.\n\nOptuna provides the following sampling algorithms:\n\n- Tree-structured Parzen Estimator algorithm implemented in :class:`optuna.samplers.TPESampler`\n\n- CMA-ES based algorithm implemented in :class:`optuna.samplers.CmaEsSampler`\n\n- Grid Search implemented in :class:`optuna.samplers.GridSampler`\n\n- Random Search implemented in :class:`optuna.samplers.RandomSampler`\n\nThe default sampler is :class:`optuna.samplers.TPESampler`.\n\nSwitching Samplers\n------------------\n\n\"\"\"\n\nimport optuna\n\n\n###################################################################################################\n# By default, Optuna uses :class:`~optuna.samplers.TPESampler` as follows.\n\nstudy = optuna.create_study()\nprint(f\"Sampler is {study.sampler.__class__.__name__}\")\n\n###################################################################################################\n# If you want to use different samplers for example :class:`~optuna.samplers.RandomSampler`\n# and :class:`~optuna.samplers.CmaEsSampler`,\n\nstudy = optuna.create_study(sampler=optuna.samplers.RandomSampler())\nprint(f\"Sampler is {study.sampler.__class__.__name__}\")\n\nstudy = optuna.create_study(sampler=optuna.samplers.CmaEsSampler())\nprint(f\"Sampler is {study.sampler.__class__.__name__}\")\n\n\n###################################################################################################\n# Pruning Algorithms\n# ------------------\n#\n# ``Pruners`` automatically stop unpromising trials at the early stages of the training (a.k.a., automated early-stopping).\n#\n# Optuna provides the following pruning algorithms:\n#\n# - Asynchronous Successive Halving algorithm implemented in :class:`optuna.pruners.SuccessiveHalvingPruner`\n#\n# - Hyperband algorithm implemented in :class:`optuna.pruners.HyperbandPruner`\n#\n# - Median pruning algorithm implemented in :class:`optuna.pruners.MedianPruner`\n#\n# - Threshold pruning algorithm implemented in :class:`optuna.pruners.ThresholdPruner`\n#\n# We use :class:`optuna.pruners.MedianPruner` in most examples,\n# though basically it is outperformed by :class:`optuna.pruners.SuccessiveHalvingPruner` and\n# :class:`optuna.pruners.HyperbandPruner` as in `this benchmark result <https://github.com/optuna/optuna/wiki/Benchmarks-with-Kurobako>`_.\n#\n#\n# Activating Pruners\n# ------------------\n# To turn on the pruning feature, you need to call :func:`~optuna.trial.Trial.report` and :func:`~optuna.trial.Trial.should_prune` after each step of the iterative training.\n# :func:`~optuna.trial.Trial.report` periodically monitors the intermediate objective values.\n# :func:`~optuna.trial.Trial.should_prune` decides termination of the trial that does not meet a predefined condition.\n#\n# We would recommend using integration modules for major machine learning frameworks.\n# Exclusive list is :mod:`optuna.integration` and usecases are available in 
`optuna/examples <https://github.com/optuna/optuna-examples/>`_.\n\n\nimport logging\nimport sys\n\nimport sklearn.datasets\nimport sklearn.linear_model\nimport sklearn.model_selection\n\n\ndef objective(trial):\n iris = sklearn.datasets.load_iris()\n classes = list(set(iris.target))\n train_x, valid_x, train_y, valid_y = sklearn.model_selection.train_test_split(\n iris.data, iris.target, test_size=0.25, random_state=0\n )\n\n alpha = trial.suggest_float(\"alpha\", 1e-5, 1e-1, log=True)\n clf = sklearn.linear_model.SGDClassifier(alpha=alpha)\n\n for step in range(100):\n clf.partial_fit(train_x, train_y, classes=classes)\n\n # Report intermediate objective value.\n intermediate_value = 1.0 - clf.score(valid_x, valid_y)\n trial.report(intermediate_value, step)\n\n # Handle pruning based on the intermediate value.\n if trial.should_prune():\n raise optuna.TrialPruned()\n\n return 1.0 - clf.score(valid_x, valid_y)\n\n\n###################################################################################################\n# Set up the median stopping rule as the pruning condition.\n\n# Add stream handler of stdout to show the messages\noptuna.logging.get_logger(\"optuna\").addHandler(logging.StreamHandler(sys.stdout))\nstudy = optuna.create_study(pruner=optuna.pruners.MedianPruner())\nstudy.optimize(objective, n_trials=20)\n\n###################################################################################################\n# As you can see, several trials were pruned (stopped) before they finished all of the iterations.\n# The format of message is ``\"Trial <Trial Number> pruned.\"``.\n\n###################################################################################################\n# Which Sampler and Pruner Should be Used?\n# ----------------------------------------\n#\n# From the benchmark results which are available at `optuna/optuna - wiki \"Benchmarks with Kurobako\" <https://github.com/optuna/optuna/wiki/Benchmarks-with-Kurobako>`_, at least for not deep learning tasks, we would say that\n#\n# * For :class:`optuna.samplers.RandomSampler`, :class:`optuna.pruners.MedianPruner` is the best.\n# * For :class:`optuna.samplers.TPESampler`, :class:`optuna.pruners.Hyperband` is the best.\n#\n# However, note that the benchmark is not deep learning.\n# For deep learning tasks,\n# consult the below table.\n# This table is from the `Ozaki et al., Hyperparameter Optimization Methods: Overview and Characteristics, in IEICE Trans, Vol.J103-D No.9 pp.615-631, 2020 <https://doi.org/10.14923/transinfj.2019JDR0003>`_ paper,\n# which is written in Japanese.\n#\n# +---------------------------+-----------------------------------------+---------------------------------------------------------------+\n# | Parallel Compute Resource | Categorical/Conditional Hyperparameters | Recommended Algorithms |\n# +===========================+=========================================+===============================================================+\n# | Limited | No | TPE. GP-EI if search space is low-dimensional and continuous. |\n# + +-----------------------------------------+---------------------------------------------------------------+\n# | | Yes | TPE. 
GP-EI if search space is low-dimensional and continuous |\n# +---------------------------+-----------------------------------------+---------------------------------------------------------------+\n# | Sufficient | No | CMA-ES, Random Search |\n# + +-----------------------------------------+---------------------------------------------------------------+\n# | | Yes | Random Search or Genetic Algorithm |\n# +---------------------------+-----------------------------------------+---------------------------------------------------------------+\n#\n\n###################################################################################################\n# Integration Modules for Pruning\n# -------------------------------\n# To implement pruning mechanism in much simpler forms, Optuna provides integration modules for the following libraries.\n#\n# For the complete list of Optuna's integration modules, see :mod:`optuna.integration`.\n#\n# For example, :class:`~optuna.integration.XGBoostPruningCallback` introduces pruning without directly changing the logic of training iteration.\n# (See also `example <https://github.com/optuna/optuna-examples/tree/main/xgboost/xgboost_integration.py>`_ for the entire script.)\n#\n# .. code-block:: python\n#\n# pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'validation-error')\n# bst = xgb.train(param, dtrain, evals=[(dvalid, 'validation')], callbacks=[pruning_callback])\n", "path": "tutorial/10_key_features/003_efficient_optimization_algorithms.py"}], "after_files": [{"content": "\"\"\"\n.. _pruning:\n\nEfficient Optimization Algorithms\n=================================\n\nOptuna enables efficient hyperparameter optimization by\nadopting state-of-the-art algorithms for sampling hyperparameters and\npruning efficiently unpromising trials.\n\nSampling Algorithms\n-------------------\n\nSamplers basically continually narrow down the search space using the records of suggested parameter values and evaluated objective values,\nleading to an optimal search space which giving off parameters leading to better objective values.\nMore detailed explanation of how samplers suggest parameters is in :class:`optuna.samplers.BaseSampler`.\n\nOptuna provides the following sampling algorithms:\n\n- Grid Search implemented in :class:`optuna.samplers.GridSampler`\n\n- Random Search implemented in :class:`optuna.samplers.RandomSampler`\n\n- Tree-structured Parzen Estimator algorithm implemented in :class:`optuna.samplers.TPESampler`\n\n- CMA-ES based algorithm implemented in :class:`optuna.samplers.CmaEsSampler`\n\n- Algorithm to enable partial fixed parameters implemented in :class:`optuna.samplers.PartialFixedSampler`\n\n- Nondominated Sorting Genetic Algorithm II implemented in :class:`optuna.samplers.NSGAIISampler`\n\nThe default sampler is :class:`optuna.samplers.TPESampler`.\n\nSwitching Samplers\n------------------\n\n\"\"\"\n\nimport optuna\n\n\n###################################################################################################\n# By default, Optuna uses :class:`~optuna.samplers.TPESampler` as follows.\n\nstudy = optuna.create_study()\nprint(f\"Sampler is {study.sampler.__class__.__name__}\")\n\n###################################################################################################\n# If you want to use different samplers for example :class:`~optuna.samplers.RandomSampler`\n# and :class:`~optuna.samplers.CmaEsSampler`,\n\nstudy = optuna.create_study(sampler=optuna.samplers.RandomSampler())\nprint(f\"Sampler is 
{study.sampler.__class__.__name__}\")\n\nstudy = optuna.create_study(sampler=optuna.samplers.CmaEsSampler())\nprint(f\"Sampler is {study.sampler.__class__.__name__}\")\n\n\n###################################################################################################\n# Pruning Algorithms\n# ------------------\n#\n# ``Pruners`` automatically stop unpromising trials at the early stages of the training (a.k.a., automated early-stopping).\n#\n# Optuna provides the following pruning algorithms:\n#\n# - Median pruning algorithm implemented in :class:`optuna.pruners.MedianPruner`\n#\n# - Non-pruning algorithm implementd in :class:`optuna.pruners.NopPruner`\n#\n# - Algorithm to operate pruner with tolerance implemented in :class:`optuna.pruners.PatientPruner`\n#\n# - Algorithm to prune specified percentile of trials implemented in :class:`optuna.pruners.PercentilePruner`\n#\n# - Asynchronous Successive Halving algorithm implemented in :class:`optuna.pruners.SuccessiveHalvingPruner`\n#\n# - Hyperband algorithm implemented in :class:`optuna.pruners.HyperbandPruner`\n#\n# - Threshold pruning algorithm implemented in :class:`optuna.pruners.ThresholdPruner`\n#\n# We use :class:`optuna.pruners.MedianPruner` in most examples,\n# though basically it is outperformed by :class:`optuna.pruners.SuccessiveHalvingPruner` and\n# :class:`optuna.pruners.HyperbandPruner` as in `this benchmark result <https://github.com/optuna/optuna/wiki/Benchmarks-with-Kurobako>`_.\n#\n#\n# Activating Pruners\n# ------------------\n# To turn on the pruning feature, you need to call :func:`~optuna.trial.Trial.report` and :func:`~optuna.trial.Trial.should_prune` after each step of the iterative training.\n# :func:`~optuna.trial.Trial.report` periodically monitors the intermediate objective values.\n# :func:`~optuna.trial.Trial.should_prune` decides termination of the trial that does not meet a predefined condition.\n#\n# We would recommend using integration modules for major machine learning frameworks.\n# Exclusive list is :mod:`optuna.integration` and usecases are available in `optuna/examples <https://github.com/optuna/optuna-examples/>`_.\n\n\nimport logging\nimport sys\n\nimport sklearn.datasets\nimport sklearn.linear_model\nimport sklearn.model_selection\n\n\ndef objective(trial):\n iris = sklearn.datasets.load_iris()\n classes = list(set(iris.target))\n train_x, valid_x, train_y, valid_y = sklearn.model_selection.train_test_split(\n iris.data, iris.target, test_size=0.25, random_state=0\n )\n\n alpha = trial.suggest_float(\"alpha\", 1e-5, 1e-1, log=True)\n clf = sklearn.linear_model.SGDClassifier(alpha=alpha)\n\n for step in range(100):\n clf.partial_fit(train_x, train_y, classes=classes)\n\n # Report intermediate objective value.\n intermediate_value = 1.0 - clf.score(valid_x, valid_y)\n trial.report(intermediate_value, step)\n\n # Handle pruning based on the intermediate value.\n if trial.should_prune():\n raise optuna.TrialPruned()\n\n return 1.0 - clf.score(valid_x, valid_y)\n\n\n###################################################################################################\n# Set up the median stopping rule as the pruning condition.\n\n# Add stream handler of stdout to show the messages\noptuna.logging.get_logger(\"optuna\").addHandler(logging.StreamHandler(sys.stdout))\nstudy = optuna.create_study(pruner=optuna.pruners.MedianPruner())\nstudy.optimize(objective, n_trials=20)\n\n###################################################################################################\n# As you can see, several trials 
were pruned (stopped) before they finished all of the iterations.\n# The format of message is ``\"Trial <Trial Number> pruned.\"``.\n\n###################################################################################################\n# Which Sampler and Pruner Should be Used?\n# ----------------------------------------\n#\n# From the benchmark results which are available at `optuna/optuna - wiki \"Benchmarks with Kurobako\" <https://github.com/optuna/optuna/wiki/Benchmarks-with-Kurobako>`_, at least for not deep learning tasks, we would say that\n#\n# * For :class:`optuna.samplers.RandomSampler`, :class:`optuna.pruners.MedianPruner` is the best.\n# * For :class:`optuna.samplers.TPESampler`, :class:`optuna.pruners.Hyperband` is the best.\n#\n# However, note that the benchmark is not deep learning.\n# For deep learning tasks,\n# consult the below table.\n# This table is from the `Ozaki et al., Hyperparameter Optimization Methods: Overview and Characteristics, in IEICE Trans, Vol.J103-D No.9 pp.615-631, 2020 <https://doi.org/10.14923/transinfj.2019JDR0003>`_ paper,\n# which is written in Japanese.\n#\n# +---------------------------+-----------------------------------------+---------------------------------------------------------------+\n# | Parallel Compute Resource | Categorical/Conditional Hyperparameters | Recommended Algorithms |\n# +===========================+=========================================+===============================================================+\n# | Limited | No | TPE. GP-EI if search space is low-dimensional and continuous. |\n# + +-----------------------------------------+---------------------------------------------------------------+\n# | | Yes | TPE. GP-EI if search space is low-dimensional and continuous |\n# +---------------------------+-----------------------------------------+---------------------------------------------------------------+\n# | Sufficient | No | CMA-ES, Random Search |\n# + +-----------------------------------------+---------------------------------------------------------------+\n# | | Yes | Random Search or Genetic Algorithm |\n# +---------------------------+-----------------------------------------+---------------------------------------------------------------+\n#\n\n###################################################################################################\n# Integration Modules for Pruning\n# -------------------------------\n# To implement pruning mechanism in much simpler forms, Optuna provides integration modules for the following libraries.\n#\n# For the complete list of Optuna's integration modules, see :mod:`optuna.integration`.\n#\n# For example, :class:`~optuna.integration.XGBoostPruningCallback` introduces pruning without directly changing the logic of training iteration.\n# (See also `example <https://github.com/optuna/optuna-examples/tree/main/xgboost/xgboost_integration.py>`_ for the entire script.)\n#\n# .. code-block:: python\n#\n# pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'validation-error')\n# bst = xgb.train(param, dtrain, evals=[(dvalid, 'validation')], callbacks=[pruning_callback])\n", "path": "tutorial/10_key_features/003_efficient_optimization_algorithms.py"}]} | 2,580 | 510 |
gh_patches_debug_44031 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-26 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support forward references
See: https://www.python.org/dev/peps/pep-0563/#forward-references
Right now the following code would break:
```python
from __future__ import annotations
import strawberry
import typing
@strawberry.type
class User:
name: str
friend: typing.Optional[User]
```
This is the error we get:
```
File "/Users/patrickarminio/Documents/personal/temp/stra/app.py", line 7, in <module>
from schema import schema
File "/Users/patrickarminio/Documents/personal/temp/stra/schema.py", line 10, in <module>
@strawberry.type
File "/Users/patrickarminio/.virtualenvs/stra-so-aNvo2/lib/python3.7/site-packages/strawberry/type.py", line 60, in type
return wrap()
File "/Users/patrickarminio/.virtualenvs/stra-so-aNvo2/lib/python3.7/site-packages/strawberry/type.py", line 55, in wrap
cls._fields = _get_fields(cls)
File "/Users/patrickarminio/.virtualenvs/stra-so-aNvo2/lib/python3.7/site-packages/strawberry/type.py", line 27, in _get_fields
cls_annotations = typing.get_type_hints(cls)
File "/Users/patrickarminio/.pyenv/versions/3.7.0/lib/python3.7/typing.py", line 973, in get_type_hints
value = _eval_type(value, base_globals, localns)
File "/Users/patrickarminio/.pyenv/versions/3.7.0/lib/python3.7/typing.py", line 260, in _eval_type
return t._evaluate(globalns, localns)
File "/Users/patrickarminio/.pyenv/versions/3.7.0/lib/python3.7/typing.py", line 464, in _evaluate
eval(self.__forward_code__, globalns, localns),
File "<string>", line 1, in <module>
NameError: name 'User' is not defined
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/type.py`
Content:
```
1 import typing
2
3 from dataclasses import dataclass
4 from graphql import GraphQLField, GraphQLObjectType
5 from graphql.utilities.schema_printer import print_type
6
7 from .constants import IS_STRAWBERRY_FIELD
8 from .type_converter import get_graphql_type_for_annotation
9
10
11 def _get_resolver(cls, field_name):
12 def _resolver(obj, info):
13 # TODO: can we make this nicer?
14 # does it work in all the cases?
15
16 field_resolver = getattr(cls(**(obj.__dict__ if obj else {})), field_name)
17
18 if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):
19 return field_resolver(obj, info)
20
21 return field_resolver
22
23 return _resolver
24
25
26 def _get_fields(cls):
27 cls_annotations = typing.get_type_hints(cls)
28
29 fields = {
30 key: GraphQLField(
31 get_graphql_type_for_annotation(value, field_name=key),
32 resolve=_get_resolver(cls, key),
33 )
34 for key, value in cls_annotations.items()
35 }
36
37 fields.update(
38 {
39 key: value.field
40 for key, value in cls.__dict__.items()
41 if getattr(value, IS_STRAWBERRY_FIELD, False)
42 }
43 )
44
45 return fields
46
47
48 def type(cls):
49 def wrap():
50 def repr_(self):
51 return print_type(self.field)
52
53 setattr(cls, "__repr__", repr_)
54
55 cls._fields = _get_fields(cls)
56 cls.field = GraphQLObjectType(name=cls.__name__, fields=cls._fields)
57
58 return dataclass(cls, repr=False)
59
60 return wrap()
61
```
Path: `strawberry/type_converter.py`
Content:
```
1 from graphql import (
2 GraphQLBoolean,
3 GraphQLFloat,
4 GraphQLID,
5 GraphQLInt,
6 GraphQLList,
7 GraphQLNonNull,
8 GraphQLString,
9 GraphQLUnionType,
10 )
11
12 from .scalars import ID
13
14
15 TYPE_MAP = {
16 str: GraphQLString,
17 int: GraphQLInt,
18 float: GraphQLFloat,
19 bool: GraphQLBoolean,
20 ID: GraphQLID,
21 }
22
23
24 # TODO: make so that we don't pass force optional
25 # we use that when trying to get the type for a
26 # option field (which can either be a scalar or an object type)
27 def get_graphql_type_for_annotation(
28 annotation, field_name: str, force_optional: bool = False
29 ):
30 # TODO: nice error
31
32 is_optional = False
33
34 # TODO: this might lead to issues with types that have a field value
35 if hasattr(annotation, "field"):
36 graphql_type = annotation.field
37 else:
38 annotation_name = getattr(annotation, "_name", None)
39
40 if annotation_name == "List":
41 list_of_type = get_graphql_type_for_annotation(
42 annotation.__args__[0], field_name
43 )
44
45 return GraphQLList(list_of_type)
46
47 # for some reason _name is None for Optional and Union types, so we check if we
48 # have __args__ populated, there might be some edge cases where __args__ is
49 # populated but the type is not an Union, like in the above case with Lists
50 if hasattr(annotation, "__args__"):
51 types = annotation.__args__
52 non_none_types = [x for x in types if x != type(None)] # noqa:E721
53
54 # optionals are represented as Union[type, None]
55 if len(non_none_types) == 1:
56 is_optional = True
57 graphql_type = get_graphql_type_for_annotation(
58 non_none_types[0], field_name, force_optional=True
59 )
60 else:
61 is_optional = type(None) in types
62
63 # TODO: union types don't work with scalar types
64 # so we want to return a nice error
65 # also we want to make sure we have been passed
66 # strawberry types
67 graphql_type = GraphQLUnionType(
68 field_name, [type.field for type in types]
69 )
70 else:
71 graphql_type = TYPE_MAP.get(annotation)
72
73 if not graphql_type:
74 raise ValueError(f"Unable to get GraphQL type for {annotation}")
75
76 if is_optional or force_optional:
77 return graphql_type
78
79 return GraphQLNonNull(graphql_type)
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/strawberry/type.py b/strawberry/type.py
--- a/strawberry/type.py
+++ b/strawberry/type.py
@@ -5,7 +5,7 @@
from graphql.utilities.schema_printer import print_type
from .constants import IS_STRAWBERRY_FIELD
-from .type_converter import get_graphql_type_for_annotation
+from .type_converter import REGISTRY, get_graphql_type_for_annotation
def _get_resolver(cls, field_name):
@@ -23,37 +23,39 @@
return _resolver
-def _get_fields(cls):
- cls_annotations = typing.get_type_hints(cls)
-
- fields = {
- key: GraphQLField(
- get_graphql_type_for_annotation(value, field_name=key),
- resolve=_get_resolver(cls, key),
- )
- for key, value in cls_annotations.items()
- }
-
- fields.update(
- {
- key: value.field
- for key, value in cls.__dict__.items()
- if getattr(value, IS_STRAWBERRY_FIELD, False)
- }
- )
-
- return fields
-
-
def type(cls):
def wrap():
+ name = cls.__name__
+ REGISTRY[name] = cls
+
def repr_(self):
return print_type(self.field)
setattr(cls, "__repr__", repr_)
- cls._fields = _get_fields(cls)
- cls.field = GraphQLObjectType(name=cls.__name__, fields=cls._fields)
+ annotations = typing.get_type_hints(cls, None, REGISTRY)
+
+ def _get_fields():
+
+ fields = {
+ key: GraphQLField(
+ get_graphql_type_for_annotation(value, key),
+ resolve=_get_resolver(cls, key),
+ )
+ for key, value in annotations.items()
+ }
+
+ fields.update(
+ {
+ key: value.field
+ for key, value in cls.__dict__.items()
+ if getattr(value, IS_STRAWBERRY_FIELD, False)
+ }
+ )
+
+ return fields
+
+ cls.field = GraphQLObjectType(name, lambda: _get_fields())
return dataclass(cls, repr=False)
diff --git a/strawberry/type_converter.py b/strawberry/type_converter.py
--- a/strawberry/type_converter.py
+++ b/strawberry/type_converter.py
@@ -12,7 +12,7 @@
from .scalars import ID
-TYPE_MAP = {
+REGISTRY = {
str: GraphQLString,
int: GraphQLInt,
float: GraphQLFloat,
@@ -27,11 +27,9 @@
def get_graphql_type_for_annotation(
annotation, field_name: str, force_optional: bool = False
):
- # TODO: nice error
-
- is_optional = False
-
# TODO: this might lead to issues with types that have a field value
+ is_optional = force_optional
+
if hasattr(annotation, "field"):
graphql_type = annotation.field
else:
@@ -49,7 +47,7 @@
# populated but the type is not an Union, like in the above case with Lists
if hasattr(annotation, "__args__"):
types = annotation.__args__
- non_none_types = [x for x in types if x != type(None)] # noqa:E721
+ non_none_types = [x for x in types if x != None.__class__] # noqa:E721
# optionals are represented as Union[type, None]
if len(non_none_types) == 1:
@@ -58,7 +56,7 @@
non_none_types[0], field_name, force_optional=True
)
else:
- is_optional = type(None) in types
+ is_optional = None.__class__ in types
# TODO: union types don't work with scalar types
# so we want to return a nice error
@@ -68,12 +66,12 @@
field_name, [type.field for type in types]
)
else:
- graphql_type = TYPE_MAP.get(annotation)
+ graphql_type = REGISTRY.get(annotation)
if not graphql_type:
raise ValueError(f"Unable to get GraphQL type for {annotation}")
- if is_optional or force_optional:
+ if is_optional:
return graphql_type
return GraphQLNonNull(graphql_type)
| {"golden_diff": "diff --git a/strawberry/type.py b/strawberry/type.py\n--- a/strawberry/type.py\n+++ b/strawberry/type.py\n@@ -5,7 +5,7 @@\n from graphql.utilities.schema_printer import print_type\n \n from .constants import IS_STRAWBERRY_FIELD\n-from .type_converter import get_graphql_type_for_annotation\n+from .type_converter import REGISTRY, get_graphql_type_for_annotation\n \n \n def _get_resolver(cls, field_name):\n@@ -23,37 +23,39 @@\n return _resolver\n \n \n-def _get_fields(cls):\n- cls_annotations = typing.get_type_hints(cls)\n-\n- fields = {\n- key: GraphQLField(\n- get_graphql_type_for_annotation(value, field_name=key),\n- resolve=_get_resolver(cls, key),\n- )\n- for key, value in cls_annotations.items()\n- }\n-\n- fields.update(\n- {\n- key: value.field\n- for key, value in cls.__dict__.items()\n- if getattr(value, IS_STRAWBERRY_FIELD, False)\n- }\n- )\n-\n- return fields\n-\n-\n def type(cls):\n def wrap():\n+ name = cls.__name__\n+ REGISTRY[name] = cls\n+\n def repr_(self):\n return print_type(self.field)\n \n setattr(cls, \"__repr__\", repr_)\n \n- cls._fields = _get_fields(cls)\n- cls.field = GraphQLObjectType(name=cls.__name__, fields=cls._fields)\n+ annotations = typing.get_type_hints(cls, None, REGISTRY)\n+\n+ def _get_fields():\n+\n+ fields = {\n+ key: GraphQLField(\n+ get_graphql_type_for_annotation(value, key),\n+ resolve=_get_resolver(cls, key),\n+ )\n+ for key, value in annotations.items()\n+ }\n+\n+ fields.update(\n+ {\n+ key: value.field\n+ for key, value in cls.__dict__.items()\n+ if getattr(value, IS_STRAWBERRY_FIELD, False)\n+ }\n+ )\n+\n+ return fields\n+\n+ cls.field = GraphQLObjectType(name, lambda: _get_fields())\n \n return dataclass(cls, repr=False)\n \ndiff --git a/strawberry/type_converter.py b/strawberry/type_converter.py\n--- a/strawberry/type_converter.py\n+++ b/strawberry/type_converter.py\n@@ -12,7 +12,7 @@\n from .scalars import ID\n \n \n-TYPE_MAP = {\n+REGISTRY = {\n str: GraphQLString,\n int: GraphQLInt,\n float: GraphQLFloat,\n@@ -27,11 +27,9 @@\n def get_graphql_type_for_annotation(\n annotation, field_name: str, force_optional: bool = False\n ):\n- # TODO: nice error\n-\n- is_optional = False\n-\n # TODO: this might lead to issues with types that have a field value\n+ is_optional = force_optional\n+\n if hasattr(annotation, \"field\"):\n graphql_type = annotation.field\n else:\n@@ -49,7 +47,7 @@\n # populated but the type is not an Union, like in the above case with Lists\n if hasattr(annotation, \"__args__\"):\n types = annotation.__args__\n- non_none_types = [x for x in types if x != type(None)] # noqa:E721\n+ non_none_types = [x for x in types if x != None.__class__] # noqa:E721\n \n # optionals are represented as Union[type, None]\n if len(non_none_types) == 1:\n@@ -58,7 +56,7 @@\n non_none_types[0], field_name, force_optional=True\n )\n else:\n- is_optional = type(None) in types\n+ is_optional = None.__class__ in types\n \n # TODO: union types don't work with scalar types\n # so we want to return a nice error\n@@ -68,12 +66,12 @@\n field_name, [type.field for type in types]\n )\n else:\n- graphql_type = TYPE_MAP.get(annotation)\n+ graphql_type = REGISTRY.get(annotation)\n \n if not graphql_type:\n raise ValueError(f\"Unable to get GraphQL type for {annotation}\")\n \n- if is_optional or force_optional:\n+ if is_optional:\n return graphql_type\n \n return GraphQLNonNull(graphql_type)\n", "issue": "Support forward references\nSee: https://www.python.org/dev/peps/pep-0563/#forward-references\r\n\r\nRight now the following code would 
break:\r\n\r\n```python\r\nfrom __future__ import annotations\r\n\r\nimport strawberry\r\nimport typing\r\n\r\[email protected]\r\nclass User:\r\n name: str\r\n friend: typing.Optional[User]\r\n```\r\n\r\nThis is the error we get:\r\n\r\n```\r\n File \"/Users/patrickarminio/Documents/personal/temp/stra/app.py\", line 7, in <module>\r\n from schema import schema\r\n File \"/Users/patrickarminio/Documents/personal/temp/stra/schema.py\", line 10, in <module>\r\n @strawberry.type\r\n File \"/Users/patrickarminio/.virtualenvs/stra-so-aNvo2/lib/python3.7/site-packages/strawberry/type.py\", line 60, in type\r\n return wrap()\r\n File \"/Users/patrickarminio/.virtualenvs/stra-so-aNvo2/lib/python3.7/site-packages/strawberry/type.py\", line 55, in wrap\r\n cls._fields = _get_fields(cls)\r\n File \"/Users/patrickarminio/.virtualenvs/stra-so-aNvo2/lib/python3.7/site-packages/strawberry/type.py\", line 27, in _get_fields\r\n cls_annotations = typing.get_type_hints(cls)\r\n File \"/Users/patrickarminio/.pyenv/versions/3.7.0/lib/python3.7/typing.py\", line 973, in get_type_hints\r\n value = _eval_type(value, base_globals, localns)\r\n File \"/Users/patrickarminio/.pyenv/versions/3.7.0/lib/python3.7/typing.py\", line 260, in _eval_type\r\n return t._evaluate(globalns, localns)\r\n File \"/Users/patrickarminio/.pyenv/versions/3.7.0/lib/python3.7/typing.py\", line 464, in _evaluate\r\n eval(self.__forward_code__, globalns, localns),\r\n File \"<string>\", line 1, in <module>\r\nNameError: name 'User' is not defined\r\n```\n", "before_files": [{"content": "import typing\n\nfrom dataclasses import dataclass\nfrom graphql import GraphQLField, GraphQLObjectType\nfrom graphql.utilities.schema_printer import print_type\n\nfrom .constants import IS_STRAWBERRY_FIELD\nfrom .type_converter import get_graphql_type_for_annotation\n\n\ndef _get_resolver(cls, field_name):\n def _resolver(obj, info):\n # TODO: can we make this nicer?\n # does it work in all the cases?\n\n field_resolver = getattr(cls(**(obj.__dict__ if obj else {})), field_name)\n\n if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):\n return field_resolver(obj, info)\n\n return field_resolver\n\n return _resolver\n\n\ndef _get_fields(cls):\n cls_annotations = typing.get_type_hints(cls)\n\n fields = {\n key: GraphQLField(\n get_graphql_type_for_annotation(value, field_name=key),\n resolve=_get_resolver(cls, key),\n )\n for key, value in cls_annotations.items()\n }\n\n fields.update(\n {\n key: value.field\n for key, value in cls.__dict__.items()\n if getattr(value, IS_STRAWBERRY_FIELD, False)\n }\n )\n\n return fields\n\n\ndef type(cls):\n def wrap():\n def repr_(self):\n return print_type(self.field)\n\n setattr(cls, \"__repr__\", repr_)\n\n cls._fields = _get_fields(cls)\n cls.field = GraphQLObjectType(name=cls.__name__, fields=cls._fields)\n\n return dataclass(cls, repr=False)\n\n return wrap()\n", "path": "strawberry/type.py"}, {"content": "from graphql import (\n GraphQLBoolean,\n GraphQLFloat,\n GraphQLID,\n GraphQLInt,\n GraphQLList,\n GraphQLNonNull,\n GraphQLString,\n GraphQLUnionType,\n)\n\nfrom .scalars import ID\n\n\nTYPE_MAP = {\n str: GraphQLString,\n int: GraphQLInt,\n float: GraphQLFloat,\n bool: GraphQLBoolean,\n ID: GraphQLID,\n}\n\n\n# TODO: make so that we don't pass force optional\n# we use that when trying to get the type for a\n# option field (which can either be a scalar or an object type)\ndef get_graphql_type_for_annotation(\n annotation, field_name: str, force_optional: bool = False\n):\n # TODO: nice error\n\n is_optional = 
False\n\n # TODO: this might lead to issues with types that have a field value\n if hasattr(annotation, \"field\"):\n graphql_type = annotation.field\n else:\n annotation_name = getattr(annotation, \"_name\", None)\n\n if annotation_name == \"List\":\n list_of_type = get_graphql_type_for_annotation(\n annotation.__args__[0], field_name\n )\n\n return GraphQLList(list_of_type)\n\n # for some reason _name is None for Optional and Union types, so we check if we\n # have __args__ populated, there might be some edge cases where __args__ is\n # populated but the type is not an Union, like in the above case with Lists\n if hasattr(annotation, \"__args__\"):\n types = annotation.__args__\n non_none_types = [x for x in types if x != type(None)] # noqa:E721\n\n # optionals are represented as Union[type, None]\n if len(non_none_types) == 1:\n is_optional = True\n graphql_type = get_graphql_type_for_annotation(\n non_none_types[0], field_name, force_optional=True\n )\n else:\n is_optional = type(None) in types\n\n # TODO: union types don't work with scalar types\n # so we want to return a nice error\n # also we want to make sure we have been passed\n # strawberry types\n graphql_type = GraphQLUnionType(\n field_name, [type.field for type in types]\n )\n else:\n graphql_type = TYPE_MAP.get(annotation)\n\n if not graphql_type:\n raise ValueError(f\"Unable to get GraphQL type for {annotation}\")\n\n if is_optional or force_optional:\n return graphql_type\n\n return GraphQLNonNull(graphql_type)\n", "path": "strawberry/type_converter.py"}], "after_files": [{"content": "import typing\n\nfrom dataclasses import dataclass\nfrom graphql import GraphQLField, GraphQLObjectType\nfrom graphql.utilities.schema_printer import print_type\n\nfrom .constants import IS_STRAWBERRY_FIELD\nfrom .type_converter import REGISTRY, get_graphql_type_for_annotation\n\n\ndef _get_resolver(cls, field_name):\n def _resolver(obj, info):\n # TODO: can we make this nicer?\n # does it work in all the cases?\n\n field_resolver = getattr(cls(**(obj.__dict__ if obj else {})), field_name)\n\n if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):\n return field_resolver(obj, info)\n\n return field_resolver\n\n return _resolver\n\n\ndef type(cls):\n def wrap():\n name = cls.__name__\n REGISTRY[name] = cls\n\n def repr_(self):\n return print_type(self.field)\n\n setattr(cls, \"__repr__\", repr_)\n\n annotations = typing.get_type_hints(cls, None, REGISTRY)\n\n def _get_fields():\n\n fields = {\n key: GraphQLField(\n get_graphql_type_for_annotation(value, key),\n resolve=_get_resolver(cls, key),\n )\n for key, value in annotations.items()\n }\n\n fields.update(\n {\n key: value.field\n for key, value in cls.__dict__.items()\n if getattr(value, IS_STRAWBERRY_FIELD, False)\n }\n )\n\n return fields\n\n cls.field = GraphQLObjectType(name, lambda: _get_fields())\n\n return dataclass(cls, repr=False)\n\n return wrap()\n", "path": "strawberry/type.py"}, {"content": "from graphql import (\n GraphQLBoolean,\n GraphQLFloat,\n GraphQLID,\n GraphQLInt,\n GraphQLList,\n GraphQLNonNull,\n GraphQLString,\n GraphQLUnionType,\n)\n\nfrom .scalars import ID\n\n\nREGISTRY = {\n str: GraphQLString,\n int: GraphQLInt,\n float: GraphQLFloat,\n bool: GraphQLBoolean,\n ID: GraphQLID,\n}\n\n\n# TODO: make so that we don't pass force optional\n# we use that when trying to get the type for a\n# option field (which can either be a scalar or an object type)\ndef get_graphql_type_for_annotation(\n annotation, field_name: str, force_optional: bool = False\n):\n # TODO: this 
might lead to issues with types that have a field value\n is_optional = force_optional\n\n if hasattr(annotation, \"field\"):\n graphql_type = annotation.field\n else:\n annotation_name = getattr(annotation, \"_name\", None)\n\n if annotation_name == \"List\":\n list_of_type = get_graphql_type_for_annotation(\n annotation.__args__[0], field_name\n )\n\n return GraphQLList(list_of_type)\n\n # for some reason _name is None for Optional and Union types, so we check if we\n # have __args__ populated, there might be some edge cases where __args__ is\n # populated but the type is not an Union, like in the above case with Lists\n if hasattr(annotation, \"__args__\"):\n types = annotation.__args__\n non_none_types = [x for x in types if x != None.__class__] # noqa:E721\n\n # optionals are represented as Union[type, None]\n if len(non_none_types) == 1:\n is_optional = True\n graphql_type = get_graphql_type_for_annotation(\n non_none_types[0], field_name, force_optional=True\n )\n else:\n is_optional = None.__class__ in types\n\n # TODO: union types don't work with scalar types\n # so we want to return a nice error\n # also we want to make sure we have been passed\n # strawberry types\n graphql_type = GraphQLUnionType(\n field_name, [type.field for type in types]\n )\n else:\n graphql_type = REGISTRY.get(annotation)\n\n if not graphql_type:\n raise ValueError(f\"Unable to get GraphQL type for {annotation}\")\n\n if is_optional:\n return graphql_type\n\n return GraphQLNonNull(graphql_type)\n", "path": "strawberry/type_converter.py"}]} | 1,924 | 982 |
gh_patches_debug_14469 | rasdani/github-patches | git_diff | openstates__openstates-scrapers-2343 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MO failing since at least 2018-05-24
MO has been failing since 2018-05-24
Based on automated runs it appears that MO has not run successfully in 2 days (2018-05-24).
```
06:09:50 INFO billy: Save committee on Government Oversight, Standing
06:09:50 INFO billy: Save committee on Tax Policy, Statutory
06:09:50 INFO billy: Save committee on Litigation Reform, Standing
06:09:50 INFO billy: Save committee Rules - Legislative Oversight, Standing
06:09:50 INFO billy: Save committee Investigative on Oversight, Standing
06:09:50 INFO billy: Save committee Agriculture Policy, Standing
06:09:50 INFO billy: Save committee Transportation, Infrastructure and Public Safety
06:09:50 INFO billy: Save committee Agriculture, Food Production, and Outdoor Resources
06:09:50 INFO billy: Save committee Professional Registration
06:09:50 INFO billy: Save committee on Employment Security, Standing
06:09:50 INFO billy: Save committee Fiscal Oversight
06:09:50 INFO billy: Save committee Workforce Development, Standing
06:09:50 INFO billy: Save committee General Laws, Standing
06:09:50 INFO billy: Save committee Subcommittee on Appropriations - Public Safety, Corrections, Transportation, and Revenue, Subcommittee
06:09:50 INFO billy: Save committee Subcommittee on Mass Transit Security, Subcommittee
06:09:50 INFO billy: Save committee Subcommittee on Second Amendment Preservation, Subcommittee
06:09:50 INFO billy: Save committee on Homeland Security, Standing
06:09:50 INFO billy: Save committee Ways and Means
06:09:50 INFO billy: Save committee Judiciary, Standing
06:09:50 INFO billy: Save committee Subcommittee on Ports, Subcommittee
06:09:50 INFO billy: Save committee Corrections and Public Institutions, Standing
06:09:50 INFO billy: Save bill lower 2018S1: HR 2
06:09:50 WARNING billy: Value u'2018S1' for field '<obj>.session' is not in the enumeration: ['2012', '2013', '2014', '2015', '2016', '2017', '2017S1', '2017S2', '2018']
06:09:50 INFO billy: Save bill lower 2018S1: HR 1
06:09:50 WARNING billy: Value u'2018S1' for field '<obj>.session' is not in the enumeration: ['2012', '2013', '2014', '2015', '2016', '2017', '2017S1', '2017S2', '2018']
06:09:50 INFO billy: Save vote 2018S1 lower: HR 2 'Reported Do Pass (H) - '
06:09:50 WARNING billy: Value None for field '<obj>.yes_count' is not of type integer
06:09:52 INFO billy: billy-update abbr=mo
actions=import,report
types=bills,legislators,votes,committees,alldata
sessions=2018
terms=2017-2018
06:09:52 INFO billy: Finished importing 194 legislator files.
06:09:53 INFO billy: imported 1 vote files
Traceback (most recent call last):
File "/opt/openstates/venv-billy//bin/billy-update", line 11, in <module>
load_entry_point('billy', 'console_scripts', 'billy-update')()
File "/opt/openstates/venv-billy/src/billy/billy/bin/update.py", line 413, in main
import_report = _do_imports(abbrev, args)
File "/opt/openstates/venv-billy/src/billy/billy/bin/update.py", line 152, in _do_imports
report['bills'] = import_bills(abbrev, settings.BILLY_DATA_DIR)
File "/opt/openstates/venv-billy/src/billy/billy/importers/bills.py", line 413, in import_bills
ret = import_bill(data, votes, categorizer)
File "/opt/openstates/venv-billy/src/billy/billy/importers/bills.py", line 231, in import_bill
match_sponsor_ids(abbr, data)
File "/opt/openstates/venv-billy/src/billy/billy/importers/bills.py", line 37, in match_sponsor_ids
sponsor['name'])
File "/opt/openstates/venv-billy/src/billy/billy/importers/names.py", line 26, in get_legislator_id
raise Exception("bad session: " + session)
Exception: bad session: 2018S1
```
Visit http://bobsled.openstates.org for more info.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `billy_metadata/mo.py`
Content:
```
1 import datetime
2
3 metadata = dict(
4 name='Missouri',
5 abbreviation='mo',
6 legislature_name='Missouri General Assembly',
7 legislature_url='http://www.moga.mo.gov/',
8 capitol_timezone='America/Chicago',
9 chambers={
10 'upper': {
11 'name': 'Senate',
12 'title': 'Senator'
13 },
14 'lower': {
15 'name': 'House',
16 'title': 'Representative'
17 },
18 },
19 terms=[
20 {
21 'name': '2011-2012',
22 'sessions': ['2012'],
23 'start_year': 2011,
24 'end_year': 2012,
25 },
26 {
27 'name': '2013-2014',
28 'sessions': ['2013', '2014'],
29 'start_year': 2013,
30 'end_year': 2014,
31 },
32 {
33 'name': '2015-2016',
34 'sessions': ['2015', '2016'],
35 'start_year': 2015,
36 'end_year': 2016,
37 },
38 {
39 'name': '2017-2018',
40 'sessions': ['2017', '2017S1', '2017S2', '2018'],
41 'start_year': 2017,
42 'end_year': 2018,
43 },
44 ],
45 # General Assembly sessions convene the Wed. following the first Mon.
46 # of January and adjourn May 30.
47 # http://www.house.mo.gov/content.aspx?info=/info/howbill.htm
48 session_details={
49 '2012': {
50 'type': 'primary',
51 'start_date': datetime.date(2012,1,4),
52 'end_date': datetime.date(2012,5,30),
53 'display_name': '2012 Regular Session',
54 '_scraped_name': '2012 - 96th General Assembly - 2nd Regular Session',
55 },
56 '2013': {
57 'type': 'primary',
58 'start_date': datetime.date(2013,1,9),
59 'end_date': datetime.date(2013,5,30),
60 'display_name': '2013 Regular Session',
61 '_scraped_name': '2013 - 97th General Assembly - 1st Regular Session',
62 },
63 '2014': {
64 'type': 'primary',
65 'start_date': datetime.date(2014,1,8),
66 'end_date': datetime.date(2014,5,30),
67 'display_name': '2014 Regular Session',
68 '_scraped_name': '2014 - 97th General Assembly - 2nd Regular Session',
69 },
70 '2015': {
71 'type': 'primary',
72 'start_date': datetime.date(2015,1,7),
73 'end_date': datetime.date(2015,5,30),
74 'display_name': '2015 Regular Session',
75 '_scraped_name': '2015 - 98th General Assembly - 1st Regular Session',
76 },
77 '2016': {
78 'type': 'primary',
79 'start_date': datetime.date(2016,1,6),
80 'end_date': datetime.date(2016,5,30),
81 'display_name': '2016 Regular Session',
82 },
83 '2017': {
84 'type': 'primary',
85 'start_date': datetime.date(2017,1,4),
86 'end_date': datetime.date(2017,5,12),
87 'display_name': '2017 Regular Session',
88 },
89 '2017S1': {
90 'type': 'special',
91 'display_name': '2017 First Extraordinary Session',
92 },
93 '2017S2': {
94 'type': 'special',
95 'display_name': '2017 Second Extraordinary Session',
96 },
97 '2018': {
98 'type': 'primary',
99 'display_name': '2018 Regular Session',
100 },
101 '2018S1': {
102 'type': 'special',
103 'display_name': '2018 First Extraordinary Session',
104 },
105 },
106 feature_flags=['subjects', 'influenceexplorer'],
107 _ignored_scraped_sessions=[
108 '2014 - 97th General Assembly - 2nd Regular Session',
109 '2013 - 97th General Assembly - 1st Regular Session',
110 '2012 - 96th General Assembly - 2nd Regular Session',
111 '2011 - 96th General Assembly - 1st Regular Session',
112 '2010 - 95th General Assembly - 2nd Regular Session',
113 '2009 - 95th General Assembly - 1st Regular Session',
114 '2008 - 94th General Assembly - 2nd Regular Session',
115 '2007 - 94th General Assembly - 1st Regular Session',
116 '2006 - 93rd General Assembly - 2nd Regular Session',
117 '2005 - 93rd General Assembly - 1st Regular Session',
118 '2004 - 92nd General Assembly - 2nd Regular Session',
119 '2003 - 92nd General Assembly - 1st Regular Session',
120 '2002 - 91st General Assembly - 2nd Regular Session',
121 '2001 - 91st General Assembly - 1st Regular Session',
122 '2000 - 90th General Assembly - 2nd Regular Session',
123 '1999 - 90th General Assembly - 1st Regular Session',
124 '1998 - 89th General Assembly - 2nd Regular Session',
125 '1997 - 89th General Assembly - 1st Regular Session',
126 '1996 - 88th General Assembly - 2nd Regular Session',
127 '1995 - 88th General Assembly - 1st Regular Session'
128 ]
129 )
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/billy_metadata/mo.py b/billy_metadata/mo.py
--- a/billy_metadata/mo.py
+++ b/billy_metadata/mo.py
@@ -37,7 +37,7 @@
},
{
'name': '2017-2018',
- 'sessions': ['2017', '2017S1', '2017S2', '2018'],
+ 'sessions': ['2017', '2017S1', '2017S2', '2018', '2018S1'],
'start_year': 2017,
'end_year': 2018,
},
@@ -98,7 +98,7 @@
'type': 'primary',
'display_name': '2018 Regular Session',
},
- '2018S2': {
+ '2018S1': {
'type': 'special',
'display_name': '2018 First Extraordinary Session',
},
| {"golden_diff": "diff --git a/billy_metadata/mo.py b/billy_metadata/mo.py\n--- a/billy_metadata/mo.py\n+++ b/billy_metadata/mo.py\n@@ -37,7 +37,7 @@\n },\n {\n 'name': '2017-2018',\n- 'sessions': ['2017', '2017S1', '2017S2', '2018'],\n+ 'sessions': ['2017', '2017S1', '2017S2', '2018', '2018S1'],\n 'start_year': 2017,\n 'end_year': 2018,\n },\n@@ -98,7 +98,7 @@\n 'type': 'primary',\n 'display_name': '2018 Regular Session',\n },\n- '2018S2': {\n+ '2018S1': {\n 'type': 'special',\n 'display_name': '2018 First Extraordinary Session',\n },\n", "issue": "MO failing since at least 2018-05-24\nMO has been failing since 2018-05-24\n\nBased on automated runs it appears that MO has not run successfully in 2 days (2018-05-24).\n\n\n```\n 06:09:50 INFO billy: Save committee on Government Oversight, Standing\n06:09:50 INFO billy: Save committee on Tax Policy, Statutory\n06:09:50 INFO billy: Save committee on Litigation Reform, Standing\n06:09:50 INFO billy: Save committee Rules - Legislative Oversight, Standing\n06:09:50 INFO billy: Save committee Investigative on Oversight, Standing\n06:09:50 INFO billy: Save committee Agriculture Policy, Standing\n06:09:50 INFO billy: Save committee Transportation, Infrastructure and Public Safety\n06:09:50 INFO billy: Save committee Agriculture, Food Production, and Outdoor Resources\n06:09:50 INFO billy: Save committee Professional Registration\n06:09:50 INFO billy: Save committee on Employment Security, Standing\n06:09:50 INFO billy: Save committee Fiscal Oversight\n06:09:50 INFO billy: Save committee Workforce Development, Standing\n06:09:50 INFO billy: Save committee General Laws, Standing\n06:09:50 INFO billy: Save committee Subcommittee on Appropriations - Public Safety, Corrections, Transportation, and Revenue, Subcommittee\n06:09:50 INFO billy: Save committee Subcommittee on Mass Transit Security, Subcommittee\n06:09:50 INFO billy: Save committee Subcommittee on Second Amendment Preservation, Subcommittee\n06:09:50 INFO billy: Save committee on Homeland Security, Standing\n06:09:50 INFO billy: Save committee Ways and Means\n06:09:50 INFO billy: Save committee Judiciary, Standing\n06:09:50 INFO billy: Save committee Subcommittee on Ports, Subcommittee\n06:09:50 INFO billy: Save committee Corrections and Public Institutions, Standing\n06:09:50 INFO billy: Save bill lower 2018S1: HR 2\n06:09:50 WARNING billy: Value u'2018S1' for field '<obj>.session' is not in the enumeration: ['2012', '2013', '2014', '2015', '2016', '2017', '2017S1', '2017S2', '2018']\n06:09:50 INFO billy: Save bill lower 2018S1: HR 1\n06:09:50 WARNING billy: Value u'2018S1' for field '<obj>.session' is not in the enumeration: ['2012', '2013', '2014', '2015', '2016', '2017', '2017S1', '2017S2', '2018']\n06:09:50 INFO billy: Save vote 2018S1 lower: HR 2 'Reported Do Pass (H) - '\n06:09:50 WARNING billy: Value None for field '<obj>.yes_count' is not of type integer\n06:09:52 INFO billy: billy-update abbr=mo\n actions=import,report\n types=bills,legislators,votes,committees,alldata\n sessions=2018\n terms=2017-2018\n06:09:52 INFO billy: Finished importing 194 legislator files.\n06:09:53 INFO billy: imported 1 vote files\nTraceback (most recent call last):\n File \"/opt/openstates/venv-billy//bin/billy-update\", line 11, in <module>\n load_entry_point('billy', 'console_scripts', 'billy-update')()\n File \"/opt/openstates/venv-billy/src/billy/billy/bin/update.py\", line 413, in main\n import_report = _do_imports(abbrev, args)\n File \"/opt/openstates/venv-billy/src/billy/billy/bin/update.py\", 
line 152, in _do_imports\n report['bills'] = import_bills(abbrev, settings.BILLY_DATA_DIR)\n File \"/opt/openstates/venv-billy/src/billy/billy/importers/bills.py\", line 413, in import_bills\n ret = import_bill(data, votes, categorizer)\n File \"/opt/openstates/venv-billy/src/billy/billy/importers/bills.py\", line 231, in import_bill\n match_sponsor_ids(abbr, data)\n File \"/opt/openstates/venv-billy/src/billy/billy/importers/bills.py\", line 37, in match_sponsor_ids\n sponsor['name'])\n File \"/opt/openstates/venv-billy/src/billy/billy/importers/names.py\", line 26, in get_legislator_id\n raise Exception(\"bad session: \" + session)\nException: bad session: 2018S1\n```\n\nVisit http://bobsled.openstates.org for more info.\n\n", "before_files": [{"content": "import datetime\n\nmetadata = dict(\n name='Missouri',\n abbreviation='mo',\n legislature_name='Missouri General Assembly',\n legislature_url='http://www.moga.mo.gov/',\n capitol_timezone='America/Chicago',\n chambers={\n 'upper': {\n 'name': 'Senate',\n 'title': 'Senator'\n },\n 'lower': {\n 'name': 'House',\n 'title': 'Representative'\n },\n },\n terms=[\n {\n 'name': '2011-2012',\n 'sessions': ['2012'],\n 'start_year': 2011,\n 'end_year': 2012,\n },\n {\n 'name': '2013-2014',\n 'sessions': ['2013', '2014'],\n 'start_year': 2013,\n 'end_year': 2014,\n },\n {\n 'name': '2015-2016',\n 'sessions': ['2015', '2016'],\n 'start_year': 2015,\n 'end_year': 2016,\n },\n {\n 'name': '2017-2018',\n 'sessions': ['2017', '2017S1', '2017S2', '2018'],\n 'start_year': 2017,\n 'end_year': 2018,\n },\n ],\n # General Assembly sessions convene the Wed. following the first Mon.\n # of January and adjourn May 30.\n # http://www.house.mo.gov/content.aspx?info=/info/howbill.htm\n session_details={\n '2012': {\n 'type': 'primary',\n 'start_date': datetime.date(2012,1,4),\n 'end_date': datetime.date(2012,5,30),\n 'display_name': '2012 Regular Session',\n '_scraped_name': '2012 - 96th General Assembly - 2nd Regular Session',\n },\n '2013': {\n 'type': 'primary',\n 'start_date': datetime.date(2013,1,9),\n 'end_date': datetime.date(2013,5,30),\n 'display_name': '2013 Regular Session',\n '_scraped_name': '2013 - 97th General Assembly - 1st Regular Session',\n },\n '2014': {\n 'type': 'primary',\n 'start_date': datetime.date(2014,1,8),\n 'end_date': datetime.date(2014,5,30),\n 'display_name': '2014 Regular Session',\n '_scraped_name': '2014 - 97th General Assembly - 2nd Regular Session',\n },\n '2015': {\n 'type': 'primary',\n 'start_date': datetime.date(2015,1,7),\n 'end_date': datetime.date(2015,5,30),\n 'display_name': '2015 Regular Session',\n '_scraped_name': '2015 - 98th General Assembly - 1st Regular Session',\n },\n '2016': {\n 'type': 'primary',\n 'start_date': datetime.date(2016,1,6),\n 'end_date': datetime.date(2016,5,30),\n 'display_name': '2016 Regular Session',\n },\n '2017': {\n 'type': 'primary',\n 'start_date': datetime.date(2017,1,4),\n 'end_date': datetime.date(2017,5,12),\n 'display_name': '2017 Regular Session',\n },\n '2017S1': {\n 'type': 'special',\n 'display_name': '2017 First Extraordinary Session',\n },\n '2017S2': {\n 'type': 'special',\n 'display_name': '2017 Second Extraordinary Session',\n },\n '2018': {\n 'type': 'primary',\n 'display_name': '2018 Regular Session',\n },\n '2018S1': {\n 'type': 'special',\n 'display_name': '2018 First Extraordinary Session',\n },\n },\n feature_flags=['subjects', 'influenceexplorer'],\n _ignored_scraped_sessions=[\n '2014 - 97th General Assembly - 2nd Regular Session',\n '2013 - 97th General Assembly 
- 1st Regular Session',\n '2012 - 96th General Assembly - 2nd Regular Session',\n '2011 - 96th General Assembly - 1st Regular Session',\n '2010 - 95th General Assembly - 2nd Regular Session',\n '2009 - 95th General Assembly - 1st Regular Session',\n '2008 - 94th General Assembly - 2nd Regular Session',\n '2007 - 94th General Assembly - 1st Regular Session',\n '2006 - 93rd General Assembly - 2nd Regular Session',\n '2005 - 93rd General Assembly - 1st Regular Session',\n '2004 - 92nd General Assembly - 2nd Regular Session',\n '2003 - 92nd General Assembly - 1st Regular Session',\n '2002 - 91st General Assembly - 2nd Regular Session',\n '2001 - 91st General Assembly - 1st Regular Session',\n '2000 - 90th General Assembly - 2nd Regular Session',\n '1999 - 90th General Assembly - 1st Regular Session',\n '1998 - 89th General Assembly - 2nd Regular Session',\n '1997 - 89th General Assembly - 1st Regular Session',\n '1996 - 88th General Assembly - 2nd Regular Session',\n '1995 - 88th General Assembly - 1st Regular Session'\n ]\n)\n", "path": "billy_metadata/mo.py"}], "after_files": [{"content": "import datetime\n\nmetadata = dict(\n name='Missouri',\n abbreviation='mo',\n legislature_name='Missouri General Assembly',\n legislature_url='http://www.moga.mo.gov/',\n capitol_timezone='America/Chicago',\n chambers={\n 'upper': {\n 'name': 'Senate',\n 'title': 'Senator'\n },\n 'lower': {\n 'name': 'House',\n 'title': 'Representative'\n },\n },\n terms=[\n {\n 'name': '2011-2012',\n 'sessions': ['2012'],\n 'start_year': 2011,\n 'end_year': 2012,\n },\n {\n 'name': '2013-2014',\n 'sessions': ['2013', '2014'],\n 'start_year': 2013,\n 'end_year': 2014,\n },\n {\n 'name': '2015-2016',\n 'sessions': ['2015', '2016'],\n 'start_year': 2015,\n 'end_year': 2016,\n },\n {\n 'name': '2017-2018',\n 'sessions': ['2017', '2017S1', '2017S2', '2018', '2018S1'],\n 'start_year': 2017,\n 'end_year': 2018,\n },\n ],\n # General Assembly sessions convene the Wed. 
following the first Mon.\n # of January and adjourn May 30.\n # http://www.house.mo.gov/content.aspx?info=/info/howbill.htm\n session_details={\n '2012': {\n 'type': 'primary',\n 'start_date': datetime.date(2012,1,4),\n 'end_date': datetime.date(2012,5,30),\n 'display_name': '2012 Regular Session',\n '_scraped_name': '2012 - 96th General Assembly - 2nd Regular Session',\n },\n '2013': {\n 'type': 'primary',\n 'start_date': datetime.date(2013,1,9),\n 'end_date': datetime.date(2013,5,30),\n 'display_name': '2013 Regular Session',\n '_scraped_name': '2013 - 97th General Assembly - 1st Regular Session',\n },\n '2014': {\n 'type': 'primary',\n 'start_date': datetime.date(2014,1,8),\n 'end_date': datetime.date(2014,5,30),\n 'display_name': '2014 Regular Session',\n '_scraped_name': '2014 - 97th General Assembly - 2nd Regular Session',\n },\n '2015': {\n 'type': 'primary',\n 'start_date': datetime.date(2015,1,7),\n 'end_date': datetime.date(2015,5,30),\n 'display_name': '2015 Regular Session',\n '_scraped_name': '2015 - 98th General Assembly - 1st Regular Session',\n },\n '2016': {\n 'type': 'primary',\n 'start_date': datetime.date(2016,1,6),\n 'end_date': datetime.date(2016,5,30),\n 'display_name': '2016 Regular Session',\n },\n '2017': {\n 'type': 'primary',\n 'start_date': datetime.date(2017,1,4),\n 'end_date': datetime.date(2017,5,12),\n 'display_name': '2017 Regular Session',\n },\n '2017S1': {\n 'type': 'special',\n 'display_name': '2017 First Extraordinary Session',\n },\n '2017S2': {\n 'type': 'special',\n 'display_name': '2017 Second Extraordinary Session',\n },\n '2018': {\n 'type': 'primary',\n 'display_name': '2018 Regular Session',\n },\n '2018S1': {\n 'type': 'special',\n 'display_name': '2018 First Extraordinary Session',\n },\n },\n feature_flags=['subjects', 'influenceexplorer'],\n _ignored_scraped_sessions=[\n '2014 - 97th General Assembly - 2nd Regular Session',\n '2013 - 97th General Assembly - 1st Regular Session',\n '2012 - 96th General Assembly - 2nd Regular Session',\n '2011 - 96th General Assembly - 1st Regular Session',\n '2010 - 95th General Assembly - 2nd Regular Session',\n '2009 - 95th General Assembly - 1st Regular Session',\n '2008 - 94th General Assembly - 2nd Regular Session',\n '2007 - 94th General Assembly - 1st Regular Session',\n '2006 - 93rd General Assembly - 2nd Regular Session',\n '2005 - 93rd General Assembly - 1st Regular Session',\n '2004 - 92nd General Assembly - 2nd Regular Session',\n '2003 - 92nd General Assembly - 1st Regular Session',\n '2002 - 91st General Assembly - 2nd Regular Session',\n '2001 - 91st General Assembly - 1st Regular Session',\n '2000 - 90th General Assembly - 2nd Regular Session',\n '1999 - 90th General Assembly - 1st Regular Session',\n '1998 - 89th General Assembly - 2nd Regular Session',\n '1997 - 89th General Assembly - 1st Regular Session',\n '1996 - 88th General Assembly - 2nd Regular Session',\n '1995 - 88th General Assembly - 1st Regular Session'\n ]\n)\n", "path": "billy_metadata/mo.py"}]} | 3,238 | 245 |
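The failure in the record above comes down to an enumeration check: the scraper produced session '2018S1', but the 2017-2018 term metadata only listed '2017', '2017S1', '2017S2' and '2018', so the import step raised `bad session: 2018S1`. The sketch below is a minimal, self-contained reconstruction of that check — the trimmed `terms` list and `validate_session` are invented for illustration and are not billy's actual internals — showing why appending '2018S1' to the term's session list (the gist of the golden diff) resolves the error.

```python
# Minimal reconstruction of the enumeration failure; `terms` is trimmed to one
# entry and `validate_session` is a hypothetical stand-in for billy's lookup.

terms = [
    {"name": "2017-2018", "sessions": ["2017", "2017S1", "2017S2", "2018"]},
]


def validate_session(session: str) -> str:
    """Return the term containing `session`, or fail like the import step did."""
    for term in terms:
        if session in term["sessions"]:
            return term["name"]
    raise Exception("bad session: " + session)


print(validate_session("2018"))           # -> 2017-2018
try:
    validate_session("2018S1")            # not yet enumerated
except Exception as exc:
    print(exc)                            # bad session: 2018S1

terms[0]["sessions"].append("2018S1")     # the gist of the golden diff
print(validate_session("2018S1"))         # -> 2017-2018
```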
gh_patches_debug_1657 | rasdani/github-patches | git_diff | kubeflow__pipelines-5054 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError occurs in gcp/automl/create_dataset_for_tables component
### What steps did you take:
[A clear and concise description of what the bug is.]
[gcp/automl/create_dataset_for_tables component](https://github.com/kubeflow/pipelines/tree/master/components/gcp/automl/create_dataset_for_tables)'s `create_time` output is declared as a string:
https://github.com/kubeflow/pipelines/blob/ecb14f40bb819c0678589b6458892ece5369fa71/components/gcp/automl/create_dataset_for_tables/component.yaml#L15
however, `google.protobuf.timestamp_pb2.Timestamp` is returned in actual fact:
https://github.com/kubeflow/pipelines/blob/ecb14f40bb819c0678589b6458892ece5369fa71/components/gcp/automl/create_dataset_for_tables/component.py#L54
FYI: The `dataset` object is an instance of `google.cloud.automl_v1beta1.types.Dataset` class and its [document](https://googleapis.dev/python/automl/0.4.0/gapic/v1beta1/types.html#google.cloud.automl_v1beta1.types.Dataset.create_time) says:
> **create_time**
> Output only. Timestamp when this dataset was created.
### What happened:
`TypeError` occurs

### What did you expect to happen:
Work.
### Environment:
<!-- Please fill in those that seem relevant. -->
How did you deploy Kubeflow Pipelines (KFP)? AI Platform Pipelines
<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->
KFP version: 1.0.4 <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. -->
KFP SDK version: 1.3.0 <!-- Please attach the output of this shell command: $pip list | grep kfp -->
### Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
/kind bug
<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->
<!--
// /area frontend
// /area backend
// /area sdk
// /area testing
// /area engprod
-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `components/gcp/automl/create_dataset_for_tables/component.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import NamedTuple
16
17
18 def automl_create_dataset_for_tables(
19 gcp_project_id: str,
20 gcp_region: str,
21 display_name: str,
22 description: str = None,
23 tables_dataset_metadata: dict = {},
24 retry=None, #=google.api_core.gapic_v1.method.DEFAULT,
25 timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,
26 metadata: dict = None,
27 ) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str), ('dataset_url', 'URI')]):
28 '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables
29 '''
30 import google
31 from google.cloud import automl
32 client = automl.AutoMlClient()
33
34 location_path = client.location_path(gcp_project_id, gcp_region)
35 dataset_dict = {
36 'display_name': display_name,
37 'description': description,
38 'tables_dataset_metadata': tables_dataset_metadata,
39 }
40 dataset = client.create_dataset(
41 location_path,
42 dataset_dict,
43 retry or google.api_core.gapic_v1.method.DEFAULT,
44 timeout or google.api_core.gapic_v1.method.DEFAULT,
45 metadata,
46 )
47 print(dataset)
48 dataset_id = dataset.name.rsplit('/', 1)[-1]
49 dataset_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id}/schemav2?project={project_id}'.format(
50 project_id=gcp_project_id,
51 region=gcp_region,
52 dataset_id=dataset_id,
53 )
54 return (dataset.name, dataset.create_time, dataset_id, dataset_url)
55
56
57 if __name__ == '__main__':
58 import kfp
59 kfp.components.func_to_container_op(
60 automl_create_dataset_for_tables,
61 output_component_file='component.yaml',
62 base_image='python:3.7',
63 packages_to_install=['google-cloud-automl==0.4.0']
64 )
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/components/gcp/automl/create_dataset_for_tables/component.py b/components/gcp/automl/create_dataset_for_tables/component.py
--- a/components/gcp/automl/create_dataset_for_tables/component.py
+++ b/components/gcp/automl/create_dataset_for_tables/component.py
@@ -51,7 +51,7 @@
region=gcp_region,
dataset_id=dataset_id,
)
- return (dataset.name, dataset.create_time, dataset_id, dataset_url)
+ return (dataset.name, str(dataset.create_time), dataset_id, dataset_url)
if __name__ == '__main__':
| {"golden_diff": "diff --git a/components/gcp/automl/create_dataset_for_tables/component.py b/components/gcp/automl/create_dataset_for_tables/component.py\n--- a/components/gcp/automl/create_dataset_for_tables/component.py\n+++ b/components/gcp/automl/create_dataset_for_tables/component.py\n@@ -51,7 +51,7 @@\n region=gcp_region,\n dataset_id=dataset_id,\n )\n- return (dataset.name, dataset.create_time, dataset_id, dataset_url)\n+ return (dataset.name, str(dataset.create_time), dataset_id, dataset_url)\n \n \n if __name__ == '__main__':\n", "issue": "TypeErro occurs in gcp/automl/create_dataset_for_tables component\n### What steps did you take:\r\n[A clear and concise description of what the bug is.]\r\n\r\n[gcp/automl/create_dataset_for_tables component](https://github.com/kubeflow/pipelines/tree/master/components/gcp/automl/create_dataset_for_tables)'s `create_time` output is declared as a string:\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/ecb14f40bb819c0678589b6458892ece5369fa71/components/gcp/automl/create_dataset_for_tables/component.yaml#L15\r\n\r\nhowever, `google.protobuf.timestamp_pb2.Timestamp` is returned in actual fact:\r\n\r\nhttps://github.com/kubeflow/pipelines/blob/ecb14f40bb819c0678589b6458892ece5369fa71/components/gcp/automl/create_dataset_for_tables/component.py#L54\r\n\r\nFYI: The `dataset` object is an instance of `google.cloud.automl_v1beta1.types.Dataset` class and its [document](https://googleapis.dev/python/automl/0.4.0/gapic/v1beta1/types.html#google.cloud.automl_v1beta1.types.Dataset.create_time) says:\r\n\r\n> **create_time**\r\n> Output only. Timestamp when this dataset was created.\r\n\r\n### What happened:\r\n\r\n`TypeError` occurs\r\n\r\n\r\n\r\n### What did you expect to happen:\r\n\r\nWork.\r\n\r\n### Environment:\r\n<!-- Please fill in those that seem relevant. -->\r\n\r\nHow did you deploy Kubeflow Pipelines (KFP)? AI Platform Pipelines\r\n<!-- If you are not sure, here's [an introduction of all options](https://www.kubeflow.org/docs/pipelines/installation/overview/). -->\r\n\r\nKFP version: 1.0.4 <!-- If you are not sure, build commit shows on bottom of KFP UI left sidenav. 
-->\r\n\r\nKFP SDK version: 1.3.0 <!-- Please attach the output of this shell command: $pip list | grep kfp -->\r\n\r\n\r\n### Anything else you would like to add:\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n/kind bug\r\n<!-- Please include labels by uncommenting them to help us better triage issues, choose from the following -->\r\n<!--\r\n// /area frontend\r\n// /area backend\r\n// /area sdk\r\n// /area testing\r\n// /area engprod\r\n-->\r\n\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import NamedTuple\n\n\ndef automl_create_dataset_for_tables(\n gcp_project_id: str,\n gcp_region: str,\n display_name: str,\n description: str = None,\n tables_dataset_metadata: dict = {},\n retry=None, #=google.api_core.gapic_v1.method.DEFAULT,\n timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,\n metadata: dict = None,\n) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str), ('dataset_url', 'URI')]):\n '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables\n '''\n import google\n from google.cloud import automl\n client = automl.AutoMlClient()\n\n location_path = client.location_path(gcp_project_id, gcp_region)\n dataset_dict = {\n 'display_name': display_name,\n 'description': description,\n 'tables_dataset_metadata': tables_dataset_metadata,\n }\n dataset = client.create_dataset(\n location_path,\n dataset_dict,\n retry or google.api_core.gapic_v1.method.DEFAULT,\n timeout or google.api_core.gapic_v1.method.DEFAULT,\n metadata,\n )\n print(dataset)\n dataset_id = dataset.name.rsplit('/', 1)[-1]\n dataset_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id}/schemav2?project={project_id}'.format(\n project_id=gcp_project_id,\n region=gcp_region,\n dataset_id=dataset_id,\n )\n return (dataset.name, dataset.create_time, dataset_id, dataset_url)\n\n\nif __name__ == '__main__':\n import kfp\n kfp.components.func_to_container_op(\n automl_create_dataset_for_tables,\n output_component_file='component.yaml',\n base_image='python:3.7',\n packages_to_install=['google-cloud-automl==0.4.0']\n )\n", "path": "components/gcp/automl/create_dataset_for_tables/component.py"}], "after_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import NamedTuple\n\n\ndef automl_create_dataset_for_tables(\n gcp_project_id: str,\n gcp_region: 
str,\n display_name: str,\n description: str = None,\n tables_dataset_metadata: dict = {},\n retry=None, #=google.api_core.gapic_v1.method.DEFAULT,\n timeout: float = None, #=google.api_core.gapic_v1.method.DEFAULT,\n metadata: dict = None,\n) -> NamedTuple('Outputs', [('dataset_path', str), ('create_time', str), ('dataset_id', str), ('dataset_url', 'URI')]):\n '''automl_create_dataset_for_tables creates an empty Dataset for AutoML tables\n '''\n import google\n from google.cloud import automl\n client = automl.AutoMlClient()\n\n location_path = client.location_path(gcp_project_id, gcp_region)\n dataset_dict = {\n 'display_name': display_name,\n 'description': description,\n 'tables_dataset_metadata': tables_dataset_metadata,\n }\n dataset = client.create_dataset(\n location_path,\n dataset_dict,\n retry or google.api_core.gapic_v1.method.DEFAULT,\n timeout or google.api_core.gapic_v1.method.DEFAULT,\n metadata,\n )\n print(dataset)\n dataset_id = dataset.name.rsplit('/', 1)[-1]\n dataset_url = 'https://console.cloud.google.com/automl-tables/locations/{region}/datasets/{dataset_id}/schemav2?project={project_id}'.format(\n project_id=gcp_project_id,\n region=gcp_region,\n dataset_id=dataset_id,\n )\n return (dataset.name, str(dataset.create_time), dataset_id, dataset_url)\n\n\nif __name__ == '__main__':\n import kfp\n kfp.components.func_to_container_op(\n automl_create_dataset_for_tables,\n output_component_file='component.yaml',\n base_image='python:3.7',\n packages_to_install=['google-cloud-automl==0.4.0']\n )\n", "path": "components/gcp/automl/create_dataset_for_tables/component.py"}]} | 1,543 | 131 |
gh_patches_debug_35912 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-531 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
output_dir for python API invocation + parametrize output_dir tests
Need this for a cookiecutter project that's tested using [python api](http://cookiecutter.readthedocs.org/en/latest/advanced_usage.html#calling-cookiecutter-functions-from-python).
Also parametrize output_dir tests and add testcase for cookiecutter python object.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cookiecutter/cli.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.cli
6 -----------------
7
8 Main `cookiecutter` CLI.
9 """
10
11 import os
12 import sys
13 import logging
14
15 import click
16
17 from cookiecutter import __version__
18 from cookiecutter.main import cookiecutter
19 from cookiecutter.exceptions import (
20 OutputDirExistsException, InvalidModeException
21 )
22
23 logger = logging.getLogger(__name__)
24
25
26 def version_msg():
27 python_version = sys.version[:3]
28 location = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
29 message = u'Cookiecutter %(version)s from {} (Python {})'
30 return message.format(location, python_version)
31
32
33 @click.command()
34 @click.version_option(__version__, u'-V', u'--version', message=version_msg())
35 @click.argument(u'template')
36 @click.option(
37 u'--no-input', is_flag=True,
38 help=u'Do not prompt for parameters and only use cookiecutter.json '
39 u'file content',
40 )
41 @click.option(
42 u'-c', u'--checkout',
43 help=u'branch, tag or commit to checkout after git clone',
44 )
45 @click.option(
46 '-v', '--verbose',
47 is_flag=True, help='Print debug information', default=False
48 )
49 @click.option(
50 u'--replay', is_flag=True,
51 help=u'Do not prompt for parameters and only use information entered '
52 u'previously',
53 )
54 @click.option(
55 u'-f', u'--overwrite-if-exists', is_flag=True,
56 help=u'Overwrite the contents of the output directory if it already exists'
57 )
58 def main(template, no_input, checkout, verbose, replay, overwrite_if_exists):
59 """Create a project from a Cookiecutter project template (TEMPLATE)."""
60 if verbose:
61 logging.basicConfig(
62 format=u'%(levelname)s %(filename)s: %(message)s',
63 level=logging.DEBUG
64 )
65 else:
66 # Log info and above to console
67 logging.basicConfig(
68 format=u'%(levelname)s: %(message)s',
69 level=logging.INFO
70 )
71
72 try:
73 cookiecutter(template, checkout, no_input, replay=replay,
74 overwrite_if_exists=overwrite_if_exists)
75 except (OutputDirExistsException, InvalidModeException) as e:
76 click.echo(e)
77 sys.exit(1)
78
79 if __name__ == "__main__":
80 main()
81
```
Path: `cookiecutter/main.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 """
5 cookiecutter.main
6 -----------------
7
8 Main entry point for the `cookiecutter` command.
9
10 The code in this module is also a good example of how to use Cookiecutter as a
11 library rather than a script.
12 """
13
14 from __future__ import unicode_literals
15 import logging
16 import os
17 import re
18
19 from .config import get_user_config
20 from .exceptions import InvalidModeException
21 from .prompt import prompt_for_config
22 from .generate import generate_context, generate_files
23 from .vcs import clone
24 from .replay import dump, load
25
26 logger = logging.getLogger(__name__)
27
28 builtin_abbreviations = {
29 'gh': 'https://github.com/{0}.git',
30 'bb': 'https://bitbucket.org/{0}',
31 }
32
33 REPO_REGEX = """
34 (
35 ((git|ssh|https|http):(//)?) # something like git:// ssh:// etc.
36 | # or
37 (\w+@[\w\.]+) # something like user@...
38 )
39 .*
40 """
41
42
43 def is_repo_url(value):
44 """Return True if value is a repository URL."""
45 return bool(re.match(REPO_REGEX, value, re.VERBOSE))
46
47
48 def expand_abbreviations(template, config_dict):
49 """
50 Expand abbreviations in a template name.
51
52 :param template: The project template name.
53 :param config_dict: The user config, which will contain abbreviation
54 definitions.
55 """
56
57 abbreviations = builtin_abbreviations.copy()
58 abbreviations.update(config_dict.get('abbreviations', {}))
59
60 if template in abbreviations:
61 return abbreviations[template]
62
63 # Split on colon. If there is no colon, rest will be empty
64 # and prefix will be the whole template
65 prefix, sep, rest = template.partition(':')
66 if prefix in abbreviations:
67 return abbreviations[prefix].format(rest)
68
69 return template
70
71
72 def cookiecutter(
73 template, checkout=None, no_input=False, extra_context=None,
74 replay=False, overwrite_if_exists=False):
75 """
76 API equivalent to using Cookiecutter at the command line.
77
78 :param template: A directory containing a project template directory,
79 or a URL to a git repository.
80 :param checkout: The branch, tag or commit ID to checkout after clone.
81 :param no_input: Prompt the user at command line for manual configuration?
82 :param extra_context: A dictionary of context that overrides default
83 and user configuration.
84 :param: overwrite_if_exists: Overwrite the contents of output directory
85 if it exists
86 """
87 if replay and ((no_input is not False) or (extra_context is not None)):
88 err_msg = (
89 "You can not use both replay and no_input or extra_context "
90 "at the same time."
91 )
92 raise InvalidModeException(err_msg)
93
94 # Get user config from ~/.cookiecutterrc or equivalent
95 # If no config file, sensible defaults from config.DEFAULT_CONFIG are used
96 config_dict = get_user_config()
97
98 template = expand_abbreviations(template, config_dict)
99
100 if is_repo_url(template):
101 repo_dir = clone(
102 repo_url=template,
103 checkout=checkout,
104 clone_to_dir=config_dict['cookiecutters_dir'],
105 no_input=no_input
106 )
107 else:
108 # If it's a local repo, no need to clone or copy to your
109 # cookiecutters_dir
110 repo_dir = template
111
112 template_name = os.path.basename(template)
113
114 if replay:
115 context = load(template_name)
116 else:
117 context_file = os.path.join(repo_dir, 'cookiecutter.json')
118 logging.debug('context_file is {0}'.format(context_file))
119
120 context = generate_context(
121 context_file=context_file,
122 default_context=config_dict['default_context'],
123 extra_context=extra_context,
124 )
125
126 # prompt the user to manually configure at the command line.
127 # except when 'no-input' flag is set
128 context['cookiecutter'] = prompt_for_config(context, no_input)
129
130 dump(template_name, context)
131
132 # Create project from local context and project template.
133 generate_files(
134 repo_dir=repo_dir,
135 context=context,
136 overwrite_if_exists=overwrite_if_exists
137 )
138
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cookiecutter/cli.py b/cookiecutter/cli.py
--- a/cookiecutter/cli.py
+++ b/cookiecutter/cli.py
@@ -55,7 +55,12 @@
u'-f', u'--overwrite-if-exists', is_flag=True,
help=u'Overwrite the contents of the output directory if it already exists'
)
-def main(template, no_input, checkout, verbose, replay, overwrite_if_exists):
+@click.option(
+ u'-o', u'--output-dir', default='.', type=click.Path(),
+ help=u'Where to output the generated project dir into'
+)
+def main(template, no_input, checkout, verbose, replay, overwrite_if_exists,
+ output_dir):
"""Create a project from a Cookiecutter project template (TEMPLATE)."""
if verbose:
logging.basicConfig(
@@ -70,8 +75,12 @@
)
try:
- cookiecutter(template, checkout, no_input, replay=replay,
- overwrite_if_exists=overwrite_if_exists)
+ cookiecutter(
+ template, checkout, no_input,
+ replay=replay,
+ overwrite_if_exists=overwrite_if_exists,
+ output_dir=output_dir
+ )
except (OutputDirExistsException, InvalidModeException) as e:
click.echo(e)
sys.exit(1)
diff --git a/cookiecutter/main.py b/cookiecutter/main.py
--- a/cookiecutter/main.py
+++ b/cookiecutter/main.py
@@ -71,7 +71,7 @@
def cookiecutter(
template, checkout=None, no_input=False, extra_context=None,
- replay=False, overwrite_if_exists=False):
+ replay=False, overwrite_if_exists=False, output_dir='.'):
"""
API equivalent to using Cookiecutter at the command line.
@@ -83,6 +83,7 @@
and user configuration.
:param: overwrite_if_exists: Overwrite the contents of output directory
if it exists
+ :param output_dir: Where to output the generated project dir into.
"""
if replay and ((no_input is not False) or (extra_context is not None)):
err_msg = (
@@ -133,5 +134,6 @@
generate_files(
repo_dir=repo_dir,
context=context,
- overwrite_if_exists=overwrite_if_exists
+ overwrite_if_exists=overwrite_if_exists,
+ output_dir=output_dir
)
| {"golden_diff": "diff --git a/cookiecutter/cli.py b/cookiecutter/cli.py\n--- a/cookiecutter/cli.py\n+++ b/cookiecutter/cli.py\n@@ -55,7 +55,12 @@\n u'-f', u'--overwrite-if-exists', is_flag=True,\n help=u'Overwrite the contents of the output directory if it already exists'\n )\n-def main(template, no_input, checkout, verbose, replay, overwrite_if_exists):\[email protected](\n+ u'-o', u'--output-dir', default='.', type=click.Path(),\n+ help=u'Where to output the generated project dir into'\n+)\n+def main(template, no_input, checkout, verbose, replay, overwrite_if_exists,\n+ output_dir):\n \"\"\"Create a project from a Cookiecutter project template (TEMPLATE).\"\"\"\n if verbose:\n logging.basicConfig(\n@@ -70,8 +75,12 @@\n )\n \n try:\n- cookiecutter(template, checkout, no_input, replay=replay,\n- overwrite_if_exists=overwrite_if_exists)\n+ cookiecutter(\n+ template, checkout, no_input,\n+ replay=replay,\n+ overwrite_if_exists=overwrite_if_exists,\n+ output_dir=output_dir\n+ )\n except (OutputDirExistsException, InvalidModeException) as e:\n click.echo(e)\n sys.exit(1)\ndiff --git a/cookiecutter/main.py b/cookiecutter/main.py\n--- a/cookiecutter/main.py\n+++ b/cookiecutter/main.py\n@@ -71,7 +71,7 @@\n \n def cookiecutter(\n template, checkout=None, no_input=False, extra_context=None,\n- replay=False, overwrite_if_exists=False):\n+ replay=False, overwrite_if_exists=False, output_dir='.'):\n \"\"\"\n API equivalent to using Cookiecutter at the command line.\n \n@@ -83,6 +83,7 @@\n and user configuration.\n :param: overwrite_if_exists: Overwrite the contents of output directory\n if it exists\n+ :param output_dir: Where to output the generated project dir into.\n \"\"\"\n if replay and ((no_input is not False) or (extra_context is not None)):\n err_msg = (\n@@ -133,5 +134,6 @@\n generate_files(\n repo_dir=repo_dir,\n context=context,\n- overwrite_if_exists=overwrite_if_exists\n+ overwrite_if_exists=overwrite_if_exists,\n+ output_dir=output_dir\n )\n", "issue": "output_dir for python API invocation + parametrize output_dir tests\nNeed this for a cookiecutter project that's tested using [python api](http://cookiecutter.readthedocs.org/en/latest/advanced_usage.html#calling-cookiecutter-functions-from-python).\n\nAlso parametrize output_dir tests and add testcase for cookiecutter python object.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.cli\n-----------------\n\nMain `cookiecutter` CLI.\n\"\"\"\n\nimport os\nimport sys\nimport logging\n\nimport click\n\nfrom cookiecutter import __version__\nfrom cookiecutter.main import cookiecutter\nfrom cookiecutter.exceptions import (\n OutputDirExistsException, InvalidModeException\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef version_msg():\n python_version = sys.version[:3]\n location = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n message = u'Cookiecutter %(version)s from {} (Python {})'\n return message.format(location, python_version)\n\n\[email protected]()\[email protected]_option(__version__, u'-V', u'--version', message=version_msg())\[email protected](u'template')\[email protected](\n u'--no-input', is_flag=True,\n help=u'Do not prompt for parameters and only use cookiecutter.json '\n u'file content',\n)\[email protected](\n u'-c', u'--checkout',\n help=u'branch, tag or commit to checkout after git clone',\n)\[email protected](\n '-v', '--verbose',\n is_flag=True, help='Print debug information', default=False\n)\[email protected](\n u'--replay', is_flag=True,\n 
help=u'Do not prompt for parameters and only use information entered '\n u'previously',\n)\[email protected](\n u'-f', u'--overwrite-if-exists', is_flag=True,\n help=u'Overwrite the contents of the output directory if it already exists'\n)\ndef main(template, no_input, checkout, verbose, replay, overwrite_if_exists):\n \"\"\"Create a project from a Cookiecutter project template (TEMPLATE).\"\"\"\n if verbose:\n logging.basicConfig(\n format=u'%(levelname)s %(filename)s: %(message)s',\n level=logging.DEBUG\n )\n else:\n # Log info and above to console\n logging.basicConfig(\n format=u'%(levelname)s: %(message)s',\n level=logging.INFO\n )\n\n try:\n cookiecutter(template, checkout, no_input, replay=replay,\n overwrite_if_exists=overwrite_if_exists)\n except (OutputDirExistsException, InvalidModeException) as e:\n click.echo(e)\n sys.exit(1)\n\nif __name__ == \"__main__\":\n main()\n", "path": "cookiecutter/cli.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.main\n-----------------\n\nMain entry point for the `cookiecutter` command.\n\nThe code in this module is also a good example of how to use Cookiecutter as a\nlibrary rather than a script.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport logging\nimport os\nimport re\n\nfrom .config import get_user_config\nfrom .exceptions import InvalidModeException\nfrom .prompt import prompt_for_config\nfrom .generate import generate_context, generate_files\nfrom .vcs import clone\nfrom .replay import dump, load\n\nlogger = logging.getLogger(__name__)\n\nbuiltin_abbreviations = {\n 'gh': 'https://github.com/{0}.git',\n 'bb': 'https://bitbucket.org/{0}',\n}\n\nREPO_REGEX = \"\"\"\n(\n((git|ssh|https|http):(//)?) # something like git:// ssh:// etc.\n | # or\n (\\w+@[\\w\\.]+) # something like user@...\n)\n.*\n\"\"\"\n\n\ndef is_repo_url(value):\n \"\"\"Return True if value is a repository URL.\"\"\"\n return bool(re.match(REPO_REGEX, value, re.VERBOSE))\n\n\ndef expand_abbreviations(template, config_dict):\n \"\"\"\n Expand abbreviations in a template name.\n\n :param template: The project template name.\n :param config_dict: The user config, which will contain abbreviation\n definitions.\n \"\"\"\n\n abbreviations = builtin_abbreviations.copy()\n abbreviations.update(config_dict.get('abbreviations', {}))\n\n if template in abbreviations:\n return abbreviations[template]\n\n # Split on colon. 
If there is no colon, rest will be empty\n # and prefix will be the whole template\n prefix, sep, rest = template.partition(':')\n if prefix in abbreviations:\n return abbreviations[prefix].format(rest)\n\n return template\n\n\ndef cookiecutter(\n template, checkout=None, no_input=False, extra_context=None,\n replay=False, overwrite_if_exists=False):\n \"\"\"\n API equivalent to using Cookiecutter at the command line.\n\n :param template: A directory containing a project template directory,\n or a URL to a git repository.\n :param checkout: The branch, tag or commit ID to checkout after clone.\n :param no_input: Prompt the user at command line for manual configuration?\n :param extra_context: A dictionary of context that overrides default\n and user configuration.\n :param: overwrite_if_exists: Overwrite the contents of output directory\n if it exists\n \"\"\"\n if replay and ((no_input is not False) or (extra_context is not None)):\n err_msg = (\n \"You can not use both replay and no_input or extra_context \"\n \"at the same time.\"\n )\n raise InvalidModeException(err_msg)\n\n # Get user config from ~/.cookiecutterrc or equivalent\n # If no config file, sensible defaults from config.DEFAULT_CONFIG are used\n config_dict = get_user_config()\n\n template = expand_abbreviations(template, config_dict)\n\n if is_repo_url(template):\n repo_dir = clone(\n repo_url=template,\n checkout=checkout,\n clone_to_dir=config_dict['cookiecutters_dir'],\n no_input=no_input\n )\n else:\n # If it's a local repo, no need to clone or copy to your\n # cookiecutters_dir\n repo_dir = template\n\n template_name = os.path.basename(template)\n\n if replay:\n context = load(template_name)\n else:\n context_file = os.path.join(repo_dir, 'cookiecutter.json')\n logging.debug('context_file is {0}'.format(context_file))\n\n context = generate_context(\n context_file=context_file,\n default_context=config_dict['default_context'],\n extra_context=extra_context,\n )\n\n # prompt the user to manually configure at the command line.\n # except when 'no-input' flag is set\n context['cookiecutter'] = prompt_for_config(context, no_input)\n\n dump(template_name, context)\n\n # Create project from local context and project template.\n generate_files(\n repo_dir=repo_dir,\n context=context,\n overwrite_if_exists=overwrite_if_exists\n )\n", "path": "cookiecutter/main.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.cli\n-----------------\n\nMain `cookiecutter` CLI.\n\"\"\"\n\nimport os\nimport sys\nimport logging\n\nimport click\n\nfrom cookiecutter import __version__\nfrom cookiecutter.main import cookiecutter\nfrom cookiecutter.exceptions import (\n OutputDirExistsException, InvalidModeException\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef version_msg():\n python_version = sys.version[:3]\n location = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n message = u'Cookiecutter %(version)s from {} (Python {})'\n return message.format(location, python_version)\n\n\[email protected]()\[email protected]_option(__version__, u'-V', u'--version', message=version_msg())\[email protected](u'template')\[email protected](\n u'--no-input', is_flag=True,\n help=u'Do not prompt for parameters and only use cookiecutter.json '\n u'file content',\n)\[email protected](\n u'-c', u'--checkout',\n help=u'branch, tag or commit to checkout after git clone',\n)\[email protected](\n '-v', '--verbose',\n is_flag=True, help='Print debug information', default=False\n)\[email 
protected](\n u'--replay', is_flag=True,\n help=u'Do not prompt for parameters and only use information entered '\n u'previously',\n)\[email protected](\n u'-f', u'--overwrite-if-exists', is_flag=True,\n help=u'Overwrite the contents of the output directory if it already exists'\n)\[email protected](\n u'-o', u'--output-dir', default='.', type=click.Path(),\n help=u'Where to output the generated project dir into'\n)\ndef main(template, no_input, checkout, verbose, replay, overwrite_if_exists,\n output_dir):\n \"\"\"Create a project from a Cookiecutter project template (TEMPLATE).\"\"\"\n if verbose:\n logging.basicConfig(\n format=u'%(levelname)s %(filename)s: %(message)s',\n level=logging.DEBUG\n )\n else:\n # Log info and above to console\n logging.basicConfig(\n format=u'%(levelname)s: %(message)s',\n level=logging.INFO\n )\n\n try:\n cookiecutter(\n template, checkout, no_input,\n replay=replay,\n overwrite_if_exists=overwrite_if_exists,\n output_dir=output_dir\n )\n except (OutputDirExistsException, InvalidModeException) as e:\n click.echo(e)\n sys.exit(1)\n\nif __name__ == \"__main__\":\n main()\n", "path": "cookiecutter/cli.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\ncookiecutter.main\n-----------------\n\nMain entry point for the `cookiecutter` command.\n\nThe code in this module is also a good example of how to use Cookiecutter as a\nlibrary rather than a script.\n\"\"\"\n\nfrom __future__ import unicode_literals\nimport logging\nimport os\nimport re\n\nfrom .config import get_user_config\nfrom .exceptions import InvalidModeException\nfrom .prompt import prompt_for_config\nfrom .generate import generate_context, generate_files\nfrom .vcs import clone\nfrom .replay import dump, load\n\nlogger = logging.getLogger(__name__)\n\nbuiltin_abbreviations = {\n 'gh': 'https://github.com/{0}.git',\n 'bb': 'https://bitbucket.org/{0}',\n}\n\nREPO_REGEX = \"\"\"\n(\n((git|ssh|https|http):(//)?) # something like git:// ssh:// etc.\n | # or\n (\\w+@[\\w\\.]+) # something like user@...\n)\n.*\n\"\"\"\n\n\ndef is_repo_url(value):\n \"\"\"Return True if value is a repository URL.\"\"\"\n return bool(re.match(REPO_REGEX, value, re.VERBOSE))\n\n\ndef expand_abbreviations(template, config_dict):\n \"\"\"\n Expand abbreviations in a template name.\n\n :param template: The project template name.\n :param config_dict: The user config, which will contain abbreviation\n definitions.\n \"\"\"\n\n abbreviations = builtin_abbreviations.copy()\n abbreviations.update(config_dict.get('abbreviations', {}))\n\n if template in abbreviations:\n return abbreviations[template]\n\n # Split on colon. 
If there is no colon, rest will be empty\n # and prefix will be the whole template\n prefix, sep, rest = template.partition(':')\n if prefix in abbreviations:\n return abbreviations[prefix].format(rest)\n\n return template\n\n\ndef cookiecutter(\n template, checkout=None, no_input=False, extra_context=None,\n replay=False, overwrite_if_exists=False, output_dir='.'):\n \"\"\"\n API equivalent to using Cookiecutter at the command line.\n\n :param template: A directory containing a project template directory,\n or a URL to a git repository.\n :param checkout: The branch, tag or commit ID to checkout after clone.\n :param no_input: Prompt the user at command line for manual configuration?\n :param extra_context: A dictionary of context that overrides default\n and user configuration.\n :param: overwrite_if_exists: Overwrite the contents of output directory\n if it exists\n :param output_dir: Where to output the generated project dir into.\n \"\"\"\n if replay and ((no_input is not False) or (extra_context is not None)):\n err_msg = (\n \"You can not use both replay and no_input or extra_context \"\n \"at the same time.\"\n )\n raise InvalidModeException(err_msg)\n\n # Get user config from ~/.cookiecutterrc or equivalent\n # If no config file, sensible defaults from config.DEFAULT_CONFIG are used\n config_dict = get_user_config()\n\n template = expand_abbreviations(template, config_dict)\n\n if is_repo_url(template):\n repo_dir = clone(\n repo_url=template,\n checkout=checkout,\n clone_to_dir=config_dict['cookiecutters_dir'],\n no_input=no_input\n )\n else:\n # If it's a local repo, no need to clone or copy to your\n # cookiecutters_dir\n repo_dir = template\n\n template_name = os.path.basename(template)\n\n if replay:\n context = load(template_name)\n else:\n context_file = os.path.join(repo_dir, 'cookiecutter.json')\n logging.debug('context_file is {0}'.format(context_file))\n\n context = generate_context(\n context_file=context_file,\n default_context=config_dict['default_context'],\n extra_context=extra_context,\n )\n\n # prompt the user to manually configure at the command line.\n # except when 'no-input' flag is set\n context['cookiecutter'] = prompt_for_config(context, no_input)\n\n dump(template_name, context)\n\n # Create project from local context and project template.\n generate_files(\n repo_dir=repo_dir,\n context=context,\n overwrite_if_exists=overwrite_if_exists,\n output_dir=output_dir\n )\n", "path": "cookiecutter/main.py"}]} | 2,245 | 547 |
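With the golden diff above applied, the Python API accepts an `output_dir` keyword and the CLI gains a matching `-o/--output-dir` flag. A minimal usage sketch follows; the template path and output directory are placeholders, not values taken from the record.

```python
# Usage sketch for the patched API; paths below are placeholders.

from cookiecutter.main import cookiecutter

cookiecutter(
    "path/to/your-template/",    # a local template dir, or e.g. "gh:audreyr/cookiecutter-pypackage"
    no_input=True,               # take defaults from cookiecutter.json
    output_dir="build/output",   # new keyword introduced by the golden diff (default: '.')
)

# CLI equivalent after the patch:
#   cookiecutter -o build/output path/to/your-template/
```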
gh_patches_debug_25391 | rasdani/github-patches | git_diff | jdb78__pytorch-forecasting-300 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Default trainer kwargs in temporal_fusion_transformer/tuning.py cannot be overwritten
```python
[I 2021-01-27 09:42:03,226] A new study created in memory with name: no-name-8f54548f-a847-4450-9a1c-93acfdc5609e
[W 2021-01-27 09:42:03,229] Trial 0 failed because of the following error: TypeError("ABCMeta object got multiple values for keyword argument 'progress_bar_refresh_rate'",)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/optuna/_optimize.py", line 198, in _run_trial
value_or_values = func(trial)
File "/usr/local/lib/python3.6/dist-packages/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py", line 145, in objective
**trainer_kwargs,
TypeError: ABCMeta object got multiple values for keyword argument 'progress_bar_refresh_rate'
```
--- END ISSUE ---
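Before the repository files below, note that the traceback above is the generic Python error raised when a keyword is supplied both explicitly and through `**kwargs`. The following dependency-free sketch reproduces it — `Trainer` here is a minimal stand-in class, not `pytorch_lightning.Trainer` — and shows the merge-then-unpack pattern that resolves it, which is what the eventual fix in this record does.

```python
# Dependency-free reproduction; `Trainer` is a stand-in, not pytorch_lightning.Trainer.

class Trainer:
    def __init__(self, **kwargs):
        self.kwargs = kwargs


def build_trainer_broken(trainer_kwargs):
    # Old pattern: explicit keyword + **trainer_kwargs can collide.
    return Trainer(progress_bar_refresh_rate=0, **trainer_kwargs)


def build_trainer_fixed(trainer_kwargs):
    # Fixed pattern: merge defaults first, then unpack once; user values win.
    default_trainer_kwargs = dict(progress_bar_refresh_rate=0)
    default_trainer_kwargs.update(trainer_kwargs)
    return Trainer(**default_trainer_kwargs)


overrides = {"progress_bar_refresh_rate": 20}
try:
    build_trainer_broken(overrides)
except TypeError as exc:
    print(exc)   # ... got multiple values for keyword argument 'progress_bar_refresh_rate'

print(build_trainer_fixed(overrides).kwargs)   # {'progress_bar_refresh_rate': 20}
```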
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_forecasting/models/temporal_fusion_transformer/tuning.py`
Content:
```
1 """
2 Hyperparameters can be efficiently tuned with `optuna <https://optuna.readthedocs.io/>`_.
3 """
4 import copy
5 import logging
6 import os
7 from typing import Any, Dict, Tuple, Union
8
9 import numpy as np
10 import optuna
11 from optuna.integration import PyTorchLightningPruningCallback, TensorBoardCallback
12 import optuna.logging
13 import pytorch_lightning as pl
14 from pytorch_lightning import Callback
15 from pytorch_lightning.callbacks import LearningRateMonitor
16 from pytorch_lightning.loggers import TensorBoardLogger
17 import statsmodels.api as sm
18 import torch
19 from torch.utils.data import DataLoader
20
21 from pytorch_forecasting import TemporalFusionTransformer
22 from pytorch_forecasting.data import TimeSeriesDataSet
23 from pytorch_forecasting.metrics import QuantileLoss
24
25 optuna_logger = logging.getLogger("optuna")
26
27
28 class MetricsCallback(Callback):
29 """PyTorch Lightning metric callback."""
30
31 def __init__(self):
32 super().__init__()
33 self.metrics = []
34
35 def on_validation_end(self, trainer, pl_module):
36 self.metrics.append(trainer.callback_metrics)
37
38
39 def optimize_hyperparameters(
40 train_dataloader: DataLoader,
41 val_dataloader: DataLoader,
42 model_path: str,
43 max_epochs: int = 20,
44 n_trials: int = 100,
45 timeout: float = 3600 * 8.0, # 8 hours
46 gradient_clip_val_range: Tuple[float, float] = (0.01, 100.0),
47 hidden_size_range: Tuple[int, int] = (16, 265),
48 hidden_continuous_size_range: Tuple[int, int] = (8, 64),
49 attention_head_size_range: Tuple[int, int] = (1, 4),
50 dropout_range: Tuple[float, float] = (0.1, 0.3),
51 learning_rate_range: Tuple[float, float] = (1e-5, 1.0),
52 use_learning_rate_finder: bool = True,
53 trainer_kwargs: Dict[str, Any] = {},
54 log_dir: str = "lightning_logs",
55 study: optuna.Study = None,
56 verbose: Union[int, bool] = None,
57 **kwargs,
58 ) -> optuna.Study:
59 """
60 Optimize Temporal Fusion Transformer hyperparameters.
61
62 Run hyperparameter optimization. Learning rate for is determined with
63 the PyTorch Lightning learning rate finder.
64
65 Args:
66 train_dataloader (DataLoader): dataloader for training model
67 val_dataloader (DataLoader): dataloader for validating model
68 model_path (str): folder to which model checkpoints are saved
69 max_epochs (int, optional): Maximum number of epochs to run training. Defaults to 20.
70 n_trials (int, optional): Number of hyperparameter trials to run. Defaults to 100.
71 timeout (float, optional): Time in seconds after which training is stopped regardless of number of epochs
72 or validation metric. Defaults to 3600*8.0.
73 hidden_size_range (Tuple[int, int], optional): Minimum and maximum of ``hidden_size`` hyperparameter. Defaults
74 to (16, 265).
75 hidden_continuous_size_range (Tuple[int, int], optional): Minimum and maximum of ``hidden_continuous_size``
76 hyperparameter. Defaults to (8, 64).
77 attention_head_size_range (Tuple[int, int], optional): Minimum and maximum of ``attention_head_size``
78 hyperparameter. Defaults to (1, 4).
79 dropout_range (Tuple[float, float], optional): Minimum and maximum of ``dropout`` hyperparameter. Defaults to
80 (0.1, 0.3).
81 learning_rate_range (Tuple[float, float], optional): Learning rate range. Defaults to (1e-5, 1.0).
82 use_learning_rate_finder (bool): If to use learning rate finder or optimize as part of hyperparameters.
83 Defaults to True.
84 trainer_kwargs (Dict[str, Any], optional): Additional arguments to the
85 `PyTorch Lightning trainer <https://pytorch-lightning.readthedocs.io/en/latest/trainer.html>`_ such
86 as ``limit_train_batches``. Defaults to {}.
87 log_dir (str, optional): Folder into which to log results for tensorboard. Defaults to "lightning_logs".
88 study (optuna.Study, optional): study to resume. Will create new study by default.
89 verbose (Union[int, bool]): level of verbosity.
90 * None: no change in verbosity level (equivalent to verbose=1 by optuna-set default).
91 * 0 or False: log only warnings.
92 * 1 or True: log pruning events.
93 * 2: optuna logging level at debug level.
94 Defaults to None.
95
96 **kwargs: Additional arguments for the :py:class:`~TemporalFusionTransformer`.
97
98 Returns:
99 optuna.Study: optuna study results
100 """
101 assert isinstance(train_dataloader.dataset, TimeSeriesDataSet) and isinstance(
102 val_dataloader.dataset, TimeSeriesDataSet
103 ), "dataloaders must be built from timeseriesdataset"
104
105 logging_level = {
106 None: optuna.logging.get_verbosity(),
107 0: optuna.logging.WARNING,
108 1: optuna.logging.INFO,
109 2: optuna.logging.DEBUG,
110 }
111 optuna_verbose = logging_level[verbose]
112 optuna.logging.set_verbosity(optuna_verbose)
113
114 loss = kwargs.get(
115 "loss", QuantileLoss()
116 ) # need a deepcopy of loss as it will otherwise propagate from one trial to the next
117
118 # create objective function
119 def objective(trial: optuna.Trial) -> float:
120 # Filenames for each trial must be made unique in order to access each checkpoint.
121 checkpoint_callback = pl.callbacks.ModelCheckpoint(
122 dirpath=os.path.join(model_path, "trial_{}".format(trial.number)), filename="{epoch}", monitor="val_loss"
123 )
124
125 # The default logger in PyTorch Lightning writes to event files to be consumed by
126 # TensorBoard. We don't use any logger here as it requires us to implement several abstract
127 # methods. Instead we setup a simple callback, that saves metrics from each validation step.
128 metrics_callback = MetricsCallback()
129 learning_rate_callback = LearningRateMonitor()
130 logger = TensorBoardLogger(log_dir, name="optuna", version=trial.number)
131 gradient_clip_val = trial.suggest_loguniform("gradient_clip_val", *gradient_clip_val_range)
132 trainer_kwargs.setdefault("gpus", [0] if torch.cuda.is_available() else None)
133 trainer = pl.Trainer(
134 max_epochs=max_epochs,
135 gradient_clip_val=gradient_clip_val,
136 callbacks=[
137 metrics_callback,
138 learning_rate_callback,
139 checkpoint_callback,
140 PyTorchLightningPruningCallback(trial, monitor="val_loss"),
141 ],
142 logger=logger,
143 progress_bar_refresh_rate=[0, 1][optuna_verbose < optuna.logging.INFO],
144 weights_summary=[None, "top"][optuna_verbose < optuna.logging.INFO],
145 **trainer_kwargs,
146 )
147
148 # create model
149 hidden_size = trial.suggest_int("hidden_size", *hidden_size_range, log=True)
150 kwargs["loss"] = copy.deepcopy(loss)
151 model = TemporalFusionTransformer.from_dataset(
152 train_dataloader.dataset,
153 dropout=trial.suggest_uniform("dropout", *dropout_range),
154 hidden_size=hidden_size,
155 hidden_continuous_size=trial.suggest_int(
156 "hidden_continuous_size",
157 hidden_continuous_size_range[0],
158 min(hidden_continuous_size_range[1], hidden_size),
159 log=True,
160 ),
161 attention_head_size=trial.suggest_int("attention_head_size", *attention_head_size_range),
162 log_interval=-1,
163 **kwargs,
164 )
165 # find good learning rate
166 if use_learning_rate_finder:
167 lr_trainer = pl.Trainer(
168 gradient_clip_val=gradient_clip_val,
169 gpus=[0] if torch.cuda.is_available() else None,
170 logger=False,
171 progress_bar_refresh_rate=0,
172 weights_summary=None,
173 )
174 res = lr_trainer.tuner.lr_find(
175 model,
176 train_dataloader=train_dataloader,
177 val_dataloaders=val_dataloader,
178 early_stop_threshold=10000,
179 min_lr=learning_rate_range[0],
180 num_training=100,
181 max_lr=learning_rate_range[1],
182 )
183
184 loss_finite = np.isfinite(res.results["loss"])
185 if loss_finite.sum() > 3: # at least 3 valid values required for learning rate finder
186 lr_smoothed, loss_smoothed = sm.nonparametric.lowess(
187 np.asarray(res.results["loss"])[loss_finite],
188 np.asarray(res.results["lr"])[loss_finite],
189 frac=1.0 / 10.0,
190 )[min(loss_finite.sum() - 3, 10) : -1].T
191 optimal_idx = np.gradient(loss_smoothed).argmin()
192 optimal_lr = lr_smoothed[optimal_idx]
193 else:
194 optimal_idx = np.asarray(res.results["loss"]).argmin()
195 optimal_lr = res.results["lr"][optimal_idx]
196 optuna_logger.info(f"Using learning rate of {optimal_lr:.3g}")
197 # add learning rate artificially
198 model.hparams.learning_rate = trial.suggest_uniform("learning_rate", optimal_lr, optimal_lr)
199 else:
200 model.hparams.learning_rate = trial.suggest_loguniform("learning_rate", *learning_rate_range)
201
202 # fit
203 trainer.fit(model, train_dataloader=train_dataloader, val_dataloaders=val_dataloader)
204
205 # report result
206 return metrics_callback.metrics[-1]["val_loss"].item()
207
208 # setup optuna and run
209 pruner = optuna.pruners.SuccessiveHalvingPruner()
210 if study is None:
211 study = optuna.create_study(direction="minimize", pruner=pruner)
212 study.optimize(objective, n_trials=n_trials, timeout=timeout)
213 return study
214
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py b/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py
--- a/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py
+++ b/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py
@@ -129,8 +129,8 @@
learning_rate_callback = LearningRateMonitor()
logger = TensorBoardLogger(log_dir, name="optuna", version=trial.number)
gradient_clip_val = trial.suggest_loguniform("gradient_clip_val", *gradient_clip_val_range)
- trainer_kwargs.setdefault("gpus", [0] if torch.cuda.is_available() else None)
- trainer = pl.Trainer(
+ default_trainer_kwargs = dict(
+ gpus=[0] if torch.cuda.is_available() else None,
max_epochs=max_epochs,
gradient_clip_val=gradient_clip_val,
callbacks=[
@@ -142,7 +142,10 @@
logger=logger,
progress_bar_refresh_rate=[0, 1][optuna_verbose < optuna.logging.INFO],
weights_summary=[None, "top"][optuna_verbose < optuna.logging.INFO],
- **trainer_kwargs,
+ )
+ default_trainer_kwargs.update(trainer_kwargs)
+ trainer = pl.Trainer(
+ **default_trainer_kwargs,
)
# create model
| {"golden_diff": "diff --git a/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py b/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py\n--- a/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py\n+++ b/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py\n@@ -129,8 +129,8 @@\n learning_rate_callback = LearningRateMonitor()\n logger = TensorBoardLogger(log_dir, name=\"optuna\", version=trial.number)\n gradient_clip_val = trial.suggest_loguniform(\"gradient_clip_val\", *gradient_clip_val_range)\n- trainer_kwargs.setdefault(\"gpus\", [0] if torch.cuda.is_available() else None)\n- trainer = pl.Trainer(\n+ default_trainer_kwargs = dict(\n+ gpus=[0] if torch.cuda.is_available() else None,\n max_epochs=max_epochs,\n gradient_clip_val=gradient_clip_val,\n callbacks=[\n@@ -142,7 +142,10 @@\n logger=logger,\n progress_bar_refresh_rate=[0, 1][optuna_verbose < optuna.logging.INFO],\n weights_summary=[None, \"top\"][optuna_verbose < optuna.logging.INFO],\n- **trainer_kwargs,\n+ )\n+ default_trainer_kwargs.update(trainer_kwargs)\n+ trainer = pl.Trainer(\n+ **default_trainer_kwargs,\n )\n \n # create model\n", "issue": "Default trainer kwargs in temporal_fusion_transformer/tuning.py cannot be overwritten\n```python\r\n[I 2021-01-27 09:42:03,226] A new study created in memory with name: no-name-8f54548f-a847-4450-9a1c-93acfdc5609e\r\n[W 2021-01-27 09:42:03,229] Trial 0 failed because of the following error: TypeError(\"ABCMeta object got multiple values for keyword argument 'progress_bar_refresh_rate'\",)\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/optuna/_optimize.py\", line 198, in _run_trial\r\n value_or_values = func(trial)\r\n File \"/usr/local/lib/python3.6/dist-packages/pytorch_forecasting/models/temporal_fusion_transformer/tuning.py\", line 145, in objective\r\n **trainer_kwargs,\r\nTypeError: ABCMeta object got multiple values for keyword argument 'progress_bar_refresh_rate'\r\n```\n", "before_files": [{"content": "\"\"\"\nHyperparameters can be efficiently tuned with `optuna <https://optuna.readthedocs.io/>`_.\n\"\"\"\nimport copy\nimport logging\nimport os\nfrom typing import Any, Dict, Tuple, Union\n\nimport numpy as np\nimport optuna\nfrom optuna.integration import PyTorchLightningPruningCallback, TensorBoardCallback\nimport optuna.logging\nimport pytorch_lightning as pl\nfrom pytorch_lightning import Callback\nfrom pytorch_lightning.callbacks import LearningRateMonitor\nfrom pytorch_lightning.loggers import TensorBoardLogger\nimport statsmodels.api as sm\nimport torch\nfrom torch.utils.data import DataLoader\n\nfrom pytorch_forecasting import TemporalFusionTransformer\nfrom pytorch_forecasting.data import TimeSeriesDataSet\nfrom pytorch_forecasting.metrics import QuantileLoss\n\noptuna_logger = logging.getLogger(\"optuna\")\n\n\nclass MetricsCallback(Callback):\n \"\"\"PyTorch Lightning metric callback.\"\"\"\n\n def __init__(self):\n super().__init__()\n self.metrics = []\n\n def on_validation_end(self, trainer, pl_module):\n self.metrics.append(trainer.callback_metrics)\n\n\ndef optimize_hyperparameters(\n train_dataloader: DataLoader,\n val_dataloader: DataLoader,\n model_path: str,\n max_epochs: int = 20,\n n_trials: int = 100,\n timeout: float = 3600 * 8.0, # 8 hours\n gradient_clip_val_range: Tuple[float, float] = (0.01, 100.0),\n hidden_size_range: Tuple[int, int] = (16, 265),\n hidden_continuous_size_range: Tuple[int, int] = (8, 64),\n attention_head_size_range: Tuple[int, int] = (1, 
4),\n dropout_range: Tuple[float, float] = (0.1, 0.3),\n learning_rate_range: Tuple[float, float] = (1e-5, 1.0),\n use_learning_rate_finder: bool = True,\n trainer_kwargs: Dict[str, Any] = {},\n log_dir: str = \"lightning_logs\",\n study: optuna.Study = None,\n verbose: Union[int, bool] = None,\n **kwargs,\n) -> optuna.Study:\n \"\"\"\n Optimize Temporal Fusion Transformer hyperparameters.\n\n Run hyperparameter optimization. Learning rate for is determined with\n the PyTorch Lightning learning rate finder.\n\n Args:\n train_dataloader (DataLoader): dataloader for training model\n val_dataloader (DataLoader): dataloader for validating model\n model_path (str): folder to which model checkpoints are saved\n max_epochs (int, optional): Maximum number of epochs to run training. Defaults to 20.\n n_trials (int, optional): Number of hyperparameter trials to run. Defaults to 100.\n timeout (float, optional): Time in seconds after which training is stopped regardless of number of epochs\n or validation metric. Defaults to 3600*8.0.\n hidden_size_range (Tuple[int, int], optional): Minimum and maximum of ``hidden_size`` hyperparameter. Defaults\n to (16, 265).\n hidden_continuous_size_range (Tuple[int, int], optional): Minimum and maximum of ``hidden_continuous_size``\n hyperparameter. Defaults to (8, 64).\n attention_head_size_range (Tuple[int, int], optional): Minimum and maximum of ``attention_head_size``\n hyperparameter. Defaults to (1, 4).\n dropout_range (Tuple[float, float], optional): Minimum and maximum of ``dropout`` hyperparameter. Defaults to\n (0.1, 0.3).\n learning_rate_range (Tuple[float, float], optional): Learning rate range. Defaults to (1e-5, 1.0).\n use_learning_rate_finder (bool): If to use learning rate finder or optimize as part of hyperparameters.\n Defaults to True.\n trainer_kwargs (Dict[str, Any], optional): Additional arguments to the\n `PyTorch Lightning trainer <https://pytorch-lightning.readthedocs.io/en/latest/trainer.html>`_ such\n as ``limit_train_batches``. Defaults to {}.\n log_dir (str, optional): Folder into which to log results for tensorboard. Defaults to \"lightning_logs\".\n study (optuna.Study, optional): study to resume. 
Will create new study by default.\n verbose (Union[int, bool]): level of verbosity.\n * None: no change in verbosity level (equivalent to verbose=1 by optuna-set default).\n * 0 or False: log only warnings.\n * 1 or True: log pruning events.\n * 2: optuna logging level at debug level.\n Defaults to None.\n\n **kwargs: Additional arguments for the :py:class:`~TemporalFusionTransformer`.\n\n Returns:\n optuna.Study: optuna study results\n \"\"\"\n assert isinstance(train_dataloader.dataset, TimeSeriesDataSet) and isinstance(\n val_dataloader.dataset, TimeSeriesDataSet\n ), \"dataloaders must be built from timeseriesdataset\"\n\n logging_level = {\n None: optuna.logging.get_verbosity(),\n 0: optuna.logging.WARNING,\n 1: optuna.logging.INFO,\n 2: optuna.logging.DEBUG,\n }\n optuna_verbose = logging_level[verbose]\n optuna.logging.set_verbosity(optuna_verbose)\n\n loss = kwargs.get(\n \"loss\", QuantileLoss()\n ) # need a deepcopy of loss as it will otherwise propagate from one trial to the next\n\n # create objective function\n def objective(trial: optuna.Trial) -> float:\n # Filenames for each trial must be made unique in order to access each checkpoint.\n checkpoint_callback = pl.callbacks.ModelCheckpoint(\n dirpath=os.path.join(model_path, \"trial_{}\".format(trial.number)), filename=\"{epoch}\", monitor=\"val_loss\"\n )\n\n # The default logger in PyTorch Lightning writes to event files to be consumed by\n # TensorBoard. We don't use any logger here as it requires us to implement several abstract\n # methods. Instead we setup a simple callback, that saves metrics from each validation step.\n metrics_callback = MetricsCallback()\n learning_rate_callback = LearningRateMonitor()\n logger = TensorBoardLogger(log_dir, name=\"optuna\", version=trial.number)\n gradient_clip_val = trial.suggest_loguniform(\"gradient_clip_val\", *gradient_clip_val_range)\n trainer_kwargs.setdefault(\"gpus\", [0] if torch.cuda.is_available() else None)\n trainer = pl.Trainer(\n max_epochs=max_epochs,\n gradient_clip_val=gradient_clip_val,\n callbacks=[\n metrics_callback,\n learning_rate_callback,\n checkpoint_callback,\n PyTorchLightningPruningCallback(trial, monitor=\"val_loss\"),\n ],\n logger=logger,\n progress_bar_refresh_rate=[0, 1][optuna_verbose < optuna.logging.INFO],\n weights_summary=[None, \"top\"][optuna_verbose < optuna.logging.INFO],\n **trainer_kwargs,\n )\n\n # create model\n hidden_size = trial.suggest_int(\"hidden_size\", *hidden_size_range, log=True)\n kwargs[\"loss\"] = copy.deepcopy(loss)\n model = TemporalFusionTransformer.from_dataset(\n train_dataloader.dataset,\n dropout=trial.suggest_uniform(\"dropout\", *dropout_range),\n hidden_size=hidden_size,\n hidden_continuous_size=trial.suggest_int(\n \"hidden_continuous_size\",\n hidden_continuous_size_range[0],\n min(hidden_continuous_size_range[1], hidden_size),\n log=True,\n ),\n attention_head_size=trial.suggest_int(\"attention_head_size\", *attention_head_size_range),\n log_interval=-1,\n **kwargs,\n )\n # find good learning rate\n if use_learning_rate_finder:\n lr_trainer = pl.Trainer(\n gradient_clip_val=gradient_clip_val,\n gpus=[0] if torch.cuda.is_available() else None,\n logger=False,\n progress_bar_refresh_rate=0,\n weights_summary=None,\n )\n res = lr_trainer.tuner.lr_find(\n model,\n train_dataloader=train_dataloader,\n val_dataloaders=val_dataloader,\n early_stop_threshold=10000,\n min_lr=learning_rate_range[0],\n num_training=100,\n max_lr=learning_rate_range[1],\n )\n\n loss_finite = np.isfinite(res.results[\"loss\"])\n if 
loss_finite.sum() > 3: # at least 3 valid values required for learning rate finder\n lr_smoothed, loss_smoothed = sm.nonparametric.lowess(\n np.asarray(res.results[\"loss\"])[loss_finite],\n np.asarray(res.results[\"lr\"])[loss_finite],\n frac=1.0 / 10.0,\n )[min(loss_finite.sum() - 3, 10) : -1].T\n optimal_idx = np.gradient(loss_smoothed).argmin()\n optimal_lr = lr_smoothed[optimal_idx]\n else:\n optimal_idx = np.asarray(res.results[\"loss\"]).argmin()\n optimal_lr = res.results[\"lr\"][optimal_idx]\n optuna_logger.info(f\"Using learning rate of {optimal_lr:.3g}\")\n # add learning rate artificially\n model.hparams.learning_rate = trial.suggest_uniform(\"learning_rate\", optimal_lr, optimal_lr)\n else:\n model.hparams.learning_rate = trial.suggest_loguniform(\"learning_rate\", *learning_rate_range)\n\n # fit\n trainer.fit(model, train_dataloader=train_dataloader, val_dataloaders=val_dataloader)\n\n # report result\n return metrics_callback.metrics[-1][\"val_loss\"].item()\n\n # setup optuna and run\n pruner = optuna.pruners.SuccessiveHalvingPruner()\n if study is None:\n study = optuna.create_study(direction=\"minimize\", pruner=pruner)\n study.optimize(objective, n_trials=n_trials, timeout=timeout)\n return study\n", "path": "pytorch_forecasting/models/temporal_fusion_transformer/tuning.py"}], "after_files": [{"content": "\"\"\"\nHyperparameters can be efficiently tuned with `optuna <https://optuna.readthedocs.io/>`_.\n\"\"\"\nimport copy\nimport logging\nimport os\nfrom typing import Any, Dict, Tuple, Union\n\nimport numpy as np\nimport optuna\nfrom optuna.integration import PyTorchLightningPruningCallback, TensorBoardCallback\nimport optuna.logging\nimport pytorch_lightning as pl\nfrom pytorch_lightning import Callback\nfrom pytorch_lightning.callbacks import LearningRateMonitor\nfrom pytorch_lightning.loggers import TensorBoardLogger\nimport statsmodels.api as sm\nimport torch\nfrom torch.utils.data import DataLoader\n\nfrom pytorch_forecasting import TemporalFusionTransformer\nfrom pytorch_forecasting.data import TimeSeriesDataSet\nfrom pytorch_forecasting.metrics import QuantileLoss\n\noptuna_logger = logging.getLogger(\"optuna\")\n\n\nclass MetricsCallback(Callback):\n \"\"\"PyTorch Lightning metric callback.\"\"\"\n\n def __init__(self):\n super().__init__()\n self.metrics = []\n\n def on_validation_end(self, trainer, pl_module):\n self.metrics.append(trainer.callback_metrics)\n\n\ndef optimize_hyperparameters(\n train_dataloader: DataLoader,\n val_dataloader: DataLoader,\n model_path: str,\n max_epochs: int = 20,\n n_trials: int = 100,\n timeout: float = 3600 * 8.0, # 8 hours\n gradient_clip_val_range: Tuple[float, float] = (0.01, 100.0),\n hidden_size_range: Tuple[int, int] = (16, 265),\n hidden_continuous_size_range: Tuple[int, int] = (8, 64),\n attention_head_size_range: Tuple[int, int] = (1, 4),\n dropout_range: Tuple[float, float] = (0.1, 0.3),\n learning_rate_range: Tuple[float, float] = (1e-5, 1.0),\n use_learning_rate_finder: bool = True,\n trainer_kwargs: Dict[str, Any] = {},\n log_dir: str = \"lightning_logs\",\n study: optuna.Study = None,\n verbose: Union[int, bool] = None,\n **kwargs,\n) -> optuna.Study:\n \"\"\"\n Optimize Temporal Fusion Transformer hyperparameters.\n\n Run hyperparameter optimization. 
Learning rate for is determined with\n the PyTorch Lightning learning rate finder.\n\n Args:\n train_dataloader (DataLoader): dataloader for training model\n val_dataloader (DataLoader): dataloader for validating model\n model_path (str): folder to which model checkpoints are saved\n max_epochs (int, optional): Maximum number of epochs to run training. Defaults to 20.\n n_trials (int, optional): Number of hyperparameter trials to run. Defaults to 100.\n timeout (float, optional): Time in seconds after which training is stopped regardless of number of epochs\n or validation metric. Defaults to 3600*8.0.\n hidden_size_range (Tuple[int, int], optional): Minimum and maximum of ``hidden_size`` hyperparameter. Defaults\n to (16, 265).\n hidden_continuous_size_range (Tuple[int, int], optional): Minimum and maximum of ``hidden_continuous_size``\n hyperparameter. Defaults to (8, 64).\n attention_head_size_range (Tuple[int, int], optional): Minimum and maximum of ``attention_head_size``\n hyperparameter. Defaults to (1, 4).\n dropout_range (Tuple[float, float], optional): Minimum and maximum of ``dropout`` hyperparameter. Defaults to\n (0.1, 0.3).\n learning_rate_range (Tuple[float, float], optional): Learning rate range. Defaults to (1e-5, 1.0).\n use_learning_rate_finder (bool): If to use learning rate finder or optimize as part of hyperparameters.\n Defaults to True.\n trainer_kwargs (Dict[str, Any], optional): Additional arguments to the\n `PyTorch Lightning trainer <https://pytorch-lightning.readthedocs.io/en/latest/trainer.html>`_ such\n as ``limit_train_batches``. Defaults to {}.\n log_dir (str, optional): Folder into which to log results for tensorboard. Defaults to \"lightning_logs\".\n study (optuna.Study, optional): study to resume. Will create new study by default.\n verbose (Union[int, bool]): level of verbosity.\n * None: no change in verbosity level (equivalent to verbose=1 by optuna-set default).\n * 0 or False: log only warnings.\n * 1 or True: log pruning events.\n * 2: optuna logging level at debug level.\n Defaults to None.\n\n **kwargs: Additional arguments for the :py:class:`~TemporalFusionTransformer`.\n\n Returns:\n optuna.Study: optuna study results\n \"\"\"\n assert isinstance(train_dataloader.dataset, TimeSeriesDataSet) and isinstance(\n val_dataloader.dataset, TimeSeriesDataSet\n ), \"dataloaders must be built from timeseriesdataset\"\n\n logging_level = {\n None: optuna.logging.get_verbosity(),\n 0: optuna.logging.WARNING,\n 1: optuna.logging.INFO,\n 2: optuna.logging.DEBUG,\n }\n optuna_verbose = logging_level[verbose]\n optuna.logging.set_verbosity(optuna_verbose)\n\n loss = kwargs.get(\n \"loss\", QuantileLoss()\n ) # need a deepcopy of loss as it will otherwise propagate from one trial to the next\n\n # create objective function\n def objective(trial: optuna.Trial) -> float:\n # Filenames for each trial must be made unique in order to access each checkpoint.\n checkpoint_callback = pl.callbacks.ModelCheckpoint(\n dirpath=os.path.join(model_path, \"trial_{}\".format(trial.number)), filename=\"{epoch}\", monitor=\"val_loss\"\n )\n\n # The default logger in PyTorch Lightning writes to event files to be consumed by\n # TensorBoard. We don't use any logger here as it requires us to implement several abstract\n # methods. 
Instead we setup a simple callback, that saves metrics from each validation step.\n metrics_callback = MetricsCallback()\n learning_rate_callback = LearningRateMonitor()\n logger = TensorBoardLogger(log_dir, name=\"optuna\", version=trial.number)\n gradient_clip_val = trial.suggest_loguniform(\"gradient_clip_val\", *gradient_clip_val_range)\n default_trainer_kwargs = dict(\n gpus=[0] if torch.cuda.is_available() else None,\n max_epochs=max_epochs,\n gradient_clip_val=gradient_clip_val,\n callbacks=[\n metrics_callback,\n learning_rate_callback,\n checkpoint_callback,\n PyTorchLightningPruningCallback(trial, monitor=\"val_loss\"),\n ],\n logger=logger,\n progress_bar_refresh_rate=[0, 1][optuna_verbose < optuna.logging.INFO],\n weights_summary=[None, \"top\"][optuna_verbose < optuna.logging.INFO],\n )\n default_trainer_kwargs.update(trainer_kwargs)\n trainer = pl.Trainer(\n **default_trainer_kwargs,\n )\n\n # create model\n hidden_size = trial.suggest_int(\"hidden_size\", *hidden_size_range, log=True)\n kwargs[\"loss\"] = copy.deepcopy(loss)\n model = TemporalFusionTransformer.from_dataset(\n train_dataloader.dataset,\n dropout=trial.suggest_uniform(\"dropout\", *dropout_range),\n hidden_size=hidden_size,\n hidden_continuous_size=trial.suggest_int(\n \"hidden_continuous_size\",\n hidden_continuous_size_range[0],\n min(hidden_continuous_size_range[1], hidden_size),\n log=True,\n ),\n attention_head_size=trial.suggest_int(\"attention_head_size\", *attention_head_size_range),\n log_interval=-1,\n **kwargs,\n )\n # find good learning rate\n if use_learning_rate_finder:\n lr_trainer = pl.Trainer(\n gradient_clip_val=gradient_clip_val,\n gpus=[0] if torch.cuda.is_available() else None,\n logger=False,\n progress_bar_refresh_rate=0,\n weights_summary=None,\n )\n res = lr_trainer.tuner.lr_find(\n model,\n train_dataloader=train_dataloader,\n val_dataloaders=val_dataloader,\n early_stop_threshold=10000,\n min_lr=learning_rate_range[0],\n num_training=100,\n max_lr=learning_rate_range[1],\n )\n\n loss_finite = np.isfinite(res.results[\"loss\"])\n if loss_finite.sum() > 3: # at least 3 valid values required for learning rate finder\n lr_smoothed, loss_smoothed = sm.nonparametric.lowess(\n np.asarray(res.results[\"loss\"])[loss_finite],\n np.asarray(res.results[\"lr\"])[loss_finite],\n frac=1.0 / 10.0,\n )[min(loss_finite.sum() - 3, 10) : -1].T\n optimal_idx = np.gradient(loss_smoothed).argmin()\n optimal_lr = lr_smoothed[optimal_idx]\n else:\n optimal_idx = np.asarray(res.results[\"loss\"]).argmin()\n optimal_lr = res.results[\"lr\"][optimal_idx]\n optuna_logger.info(f\"Using learning rate of {optimal_lr:.3g}\")\n # add learning rate artificially\n model.hparams.learning_rate = trial.suggest_uniform(\"learning_rate\", optimal_lr, optimal_lr)\n else:\n model.hparams.learning_rate = trial.suggest_loguniform(\"learning_rate\", *learning_rate_range)\n\n # fit\n trainer.fit(model, train_dataloader=train_dataloader, val_dataloaders=val_dataloader)\n\n # report result\n return metrics_callback.metrics[-1][\"val_loss\"].item()\n\n # setup optuna and run\n pruner = optuna.pruners.SuccessiveHalvingPruner()\n if study is None:\n study = optuna.create_study(direction=\"minimize\", pruner=pruner)\n study.optimize(objective, n_trials=n_trials, timeout=timeout)\n return study\n", "path": "pytorch_forecasting/models/temporal_fusion_transformer/tuning.py"}]} | 3,226 | 305 |
gh_patches_debug_13674 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-2717 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnicodeDecodeError for invalid headers in ASGI middleware
When the ASGI middleware receives headers with invalid encoding, for example as the result of somebody fuzzing an endpoint (as is the case here), it introduces a Unicode decode error into the HTTP pipeline. It shouldn't be doing this.
### Which version of dd-trace-py are you using?
0.46.0
### Which version of the libraries are you using?
What I believe are the potentially relevant libraries are listed below.
```
datadog==0.40.1
...
ddtrace==0.46.0
...
fastapi==0.63.0
...
httpcore==0.12.3
httptools==0.1.1
...
starlette==0.13.6
...
uvicorn==0.13.4
uvloop==0.15.2
```
### How can we reproduce your problem?
```python
from ddtrace.contrib.asgi.middleware import _extract_headers
def test_ddtrace_invalid_header_name_encoding():
assert _extract_headers({"headers": [((bytes.fromhex("c0")), "test")]}) == {}
def test_ddtrace_invalid_header_value_encoding():
assert _extract_headers({"headers": [("test", (bytes.fromhex("c0")))]}) == {}
```
### What is the result that you get?
The middleware blows up with a `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc0 in position 0: invalid start byte` exception.
### What is the result that you expected?
This is trace middleware, so it should not introduce exceptions into request processing. For a header it can't decode, I'd expect it to either:
1. Omit the invalid header from the result and continue attempting to decode the remainder, or
2. Substitute a sentinel value for the unprocessable header to indicate an error
I've suggested (1) in the repro above, as that seems like it'd be the least likely to cause any problems.
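To make option (1) concrete, here is a minimal sketch of a tolerant extraction — only an illustration of the suggestion above (reusing the `bytes_to_str` helper from `middleware.py`), not the library's actual fix:
```python
def _extract_headers_tolerant(scope):
    """Sketch: decode ASGI headers, silently skipping pairs that are not valid UTF-8."""
    headers = {}
    for key, value in scope.get("headers") or []:
        try:
            headers[bytes_to_str(key)] = bytes_to_str(value)
        except UnicodeDecodeError:
            # Option (1): omit the undecodable header and keep processing the rest.
            continue
    return headers
```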
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/contrib/asgi/middleware.py`
Content:
```
1 import sys
2
3 import ddtrace
4 from ddtrace import config
5 from ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY
6 from ddtrace.ext import SpanTypes
7 from ddtrace.ext import http
8
9 from .. import trace_utils
10 from ...internal.compat import reraise
11 from ...internal.logger import get_logger
12 from .utils import guarantee_single_callable
13
14
15 log = get_logger(__name__)
16
17 config._add(
18 "asgi",
19 dict(service_name=config._get_service(default="asgi"), request_span_name="asgi.request", distributed_tracing=True),
20 )
21
22 ASGI_VERSION = "asgi.version"
23 ASGI_SPEC_VERSION = "asgi.spec_version"
24
25
26 def bytes_to_str(str_or_bytes):
27 return str_or_bytes.decode() if isinstance(str_or_bytes, bytes) else str_or_bytes
28
29
30 def _extract_versions_from_scope(scope, integration_config):
31 tags = {}
32
33 http_version = scope.get("http_version")
34 if http_version:
35 tags[http.VERSION] = http_version
36
37 scope_asgi = scope.get("asgi")
38
39 if scope_asgi and "version" in scope_asgi:
40 tags[ASGI_VERSION] = scope_asgi["version"]
41
42 if scope_asgi and "spec_version" in scope_asgi:
43 tags[ASGI_SPEC_VERSION] = scope_asgi["spec_version"]
44
45 return tags
46
47
48 def _extract_headers(scope):
49 headers = scope.get("headers")
50 if headers:
51 # headers: (Iterable[[byte string, byte string]])
52 return dict((bytes_to_str(k), bytes_to_str(v)) for (k, v) in headers)
53 return {}
54
55
56 def _default_handle_exception_span(exc, span):
57 """Default handler for exception for span"""
58 span.set_tag(http.STATUS_CODE, 500)
59
60
61 class TraceMiddleware:
62 """
63 ASGI application middleware that traces the requests.
64
65 Args:
66 app: The ASGI application.
67 tracer: Custom tracer. Defaults to the global tracer.
68 """
69
70 def __init__(
71 self,
72 app,
73 tracer=None,
74 integration_config=config.asgi,
75 handle_exception_span=_default_handle_exception_span,
76 span_modifier=None,
77 ):
78 self.app = guarantee_single_callable(app)
79 self.tracer = tracer or ddtrace.tracer
80 self.integration_config = integration_config
81 self.handle_exception_span = handle_exception_span
82 self.span_modifier = span_modifier
83
84 async def __call__(self, scope, receive, send):
85 if scope["type"] != "http":
86 return await self.app(scope, receive, send)
87
88 headers = _extract_headers(scope)
89
90 trace_utils.activate_distributed_headers(
91 self.tracer, int_config=self.integration_config, request_headers=headers
92 )
93
94 resource = "{} {}".format(scope["method"], scope["path"])
95
96 span = self.tracer.trace(
97 name=self.integration_config.get("request_span_name", "asgi.request"),
98 service=trace_utils.int_service(None, self.integration_config),
99 resource=resource,
100 span_type=SpanTypes.WEB,
101 )
102
103 if self.span_modifier:
104 self.span_modifier(span, scope)
105
106 sample_rate = self.integration_config.get_analytics_sample_rate(use_global_config=True)
107 if sample_rate is not None:
108 span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)
109
110 method = scope.get("method")
111 server = scope.get("server")
112 if server and len(server) == 2:
113 port = server[1]
114 server_host = server[0] + (":" + str(port) if port is not None and port != 80 else "")
115 full_path = scope.get("root_path", "") + scope.get("path", "")
116 url = scope.get("scheme", "http") + "://" + server_host + full_path
117 else:
118 url = None
119
120 if self.integration_config.trace_query_string:
121 query_string = scope.get("query_string")
122 if len(query_string) > 0:
123 query_string = bytes_to_str(query_string)
124 else:
125 query_string = None
126
127 trace_utils.set_http_meta(
128 span, self.integration_config, method=method, url=url, query=query_string, request_headers=headers
129 )
130
131 tags = _extract_versions_from_scope(scope, self.integration_config)
132 span.set_tags(tags)
133
134 async def wrapped_send(message):
135 if span and message.get("type") == "http.response.start" and "status" in message:
136 status_code = message["status"]
137 else:
138 status_code = None
139
140 if "headers" in message:
141 response_headers = message["headers"]
142 else:
143 response_headers = None
144
145 trace_utils.set_http_meta(
146 span, self.integration_config, status_code=status_code, response_headers=response_headers
147 )
148
149 return await send(message)
150
151 try:
152 return await self.app(scope, receive, wrapped_send)
153 except Exception as exc:
154 (exc_type, exc_val, exc_tb) = sys.exc_info()
155 span.set_exc_info(exc_type, exc_val, exc_tb)
156 self.handle_exception_span(exc, span)
157 reraise(exc_type, exc_val, exc_tb)
158 finally:
159 span.finish()
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ddtrace/contrib/asgi/middleware.py b/ddtrace/contrib/asgi/middleware.py
--- a/ddtrace/contrib/asgi/middleware.py
+++ b/ddtrace/contrib/asgi/middleware.py
@@ -85,11 +85,15 @@
if scope["type"] != "http":
return await self.app(scope, receive, send)
- headers = _extract_headers(scope)
-
- trace_utils.activate_distributed_headers(
- self.tracer, int_config=self.integration_config, request_headers=headers
- )
+ try:
+ headers = _extract_headers(scope)
+ except Exception:
+ log.warning("failed to decode headers for distributed tracing", exc_info=True)
+ headers = {}
+ else:
+ trace_utils.activate_distributed_headers(
+ self.tracer, int_config=self.integration_config, request_headers=headers
+ )
resource = "{} {}".format(scope["method"], scope["path"])
| {"golden_diff": "diff --git a/ddtrace/contrib/asgi/middleware.py b/ddtrace/contrib/asgi/middleware.py\n--- a/ddtrace/contrib/asgi/middleware.py\n+++ b/ddtrace/contrib/asgi/middleware.py\n@@ -85,11 +85,15 @@\n if scope[\"type\"] != \"http\":\n return await self.app(scope, receive, send)\n \n- headers = _extract_headers(scope)\n-\n- trace_utils.activate_distributed_headers(\n- self.tracer, int_config=self.integration_config, request_headers=headers\n- )\n+ try:\n+ headers = _extract_headers(scope)\n+ except Exception:\n+ log.warning(\"failed to decode headers for distributed tracing\", exc_info=True)\n+ headers = {}\n+ else:\n+ trace_utils.activate_distributed_headers(\n+ self.tracer, int_config=self.integration_config, request_headers=headers\n+ )\n \n resource = \"{} {}\".format(scope[\"method\"], scope[\"path\"])\n", "issue": "UnicodeDecodeError for invalid headers in ASGI middleware\nWhen the ASGI middleware receives headers with invalid encoding, for example as the result of somebody fuzzing an endpoint as is the case here, it introduces a Unicode decode error into the HTTP pipeline. It shouldn't be doing this.\r\n\r\n### Which version of dd-trace-py are you using?\r\n\r\n0.46.0\r\n\r\n### Which version of the libraries are you using?\r\n\r\nWhat I believe are the potentially relevant libraries are listed below.\r\n\r\n```\r\ndatadog==0.40.1\r\n...\r\nddtrace==0.46.0\r\n...\r\nfastapi==0.63.0\r\n...\r\nhttpcore==0.12.3\r\nhttptools==0.1.1\r\n...\r\nstarlette==0.13.6\r\n...\r\nuvicorn==0.13.4\r\nuvloop==0.15.2\r\n```\r\n\r\n### How can we reproduce your problem?\r\n\r\n```python\r\nfrom ddtrace.contrib.asgi.middleware import _extract_headers\r\n\r\n\r\ndef test_ddtrace_invalid_header_name_encoding():\r\n assert _extract_headers({\"headers\": [((bytes.fromhex(\"c0\")), \"test\")]}) == {}\r\n\r\n\r\ndef test_ddtrace_invalid_header_value_encoding():\r\n assert _extract_headers({\"headers\": [(\"test\", (bytes.fromhex(\"c0\")))]}) == {}\r\n```\r\n\r\n### What is the result that you get?\r\n\r\nThe middleware blows up with a `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc0 in position 0: invalid start byte` exception.\r\n\r\n### What is the result that you expected?\r\n\r\nThis is trace middleware so it should not introduce exceptions into request processing. In the case of a header it can't decode then I'd expect it to either:\r\n\r\n1. Omit the invalid header from the result and continue attempting to decode the remainder, or\r\n2. Substitute a sentinel value for the unprocessable header to indicate an error\r\n\r\nI've suggested (1) in the repro above, as that seems like it'd be the least likely to cause any problems.\n", "before_files": [{"content": "import sys\n\nimport ddtrace\nfrom ddtrace import config\nfrom ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ddtrace.ext import SpanTypes\nfrom ddtrace.ext import http\n\nfrom .. 
import trace_utils\nfrom ...internal.compat import reraise\nfrom ...internal.logger import get_logger\nfrom .utils import guarantee_single_callable\n\n\nlog = get_logger(__name__)\n\nconfig._add(\n \"asgi\",\n dict(service_name=config._get_service(default=\"asgi\"), request_span_name=\"asgi.request\", distributed_tracing=True),\n)\n\nASGI_VERSION = \"asgi.version\"\nASGI_SPEC_VERSION = \"asgi.spec_version\"\n\n\ndef bytes_to_str(str_or_bytes):\n return str_or_bytes.decode() if isinstance(str_or_bytes, bytes) else str_or_bytes\n\n\ndef _extract_versions_from_scope(scope, integration_config):\n tags = {}\n\n http_version = scope.get(\"http_version\")\n if http_version:\n tags[http.VERSION] = http_version\n\n scope_asgi = scope.get(\"asgi\")\n\n if scope_asgi and \"version\" in scope_asgi:\n tags[ASGI_VERSION] = scope_asgi[\"version\"]\n\n if scope_asgi and \"spec_version\" in scope_asgi:\n tags[ASGI_SPEC_VERSION] = scope_asgi[\"spec_version\"]\n\n return tags\n\n\ndef _extract_headers(scope):\n headers = scope.get(\"headers\")\n if headers:\n # headers: (Iterable[[byte string, byte string]])\n return dict((bytes_to_str(k), bytes_to_str(v)) for (k, v) in headers)\n return {}\n\n\ndef _default_handle_exception_span(exc, span):\n \"\"\"Default handler for exception for span\"\"\"\n span.set_tag(http.STATUS_CODE, 500)\n\n\nclass TraceMiddleware:\n \"\"\"\n ASGI application middleware that traces the requests.\n\n Args:\n app: The ASGI application.\n tracer: Custom tracer. Defaults to the global tracer.\n \"\"\"\n\n def __init__(\n self,\n app,\n tracer=None,\n integration_config=config.asgi,\n handle_exception_span=_default_handle_exception_span,\n span_modifier=None,\n ):\n self.app = guarantee_single_callable(app)\n self.tracer = tracer or ddtrace.tracer\n self.integration_config = integration_config\n self.handle_exception_span = handle_exception_span\n self.span_modifier = span_modifier\n\n async def __call__(self, scope, receive, send):\n if scope[\"type\"] != \"http\":\n return await self.app(scope, receive, send)\n\n headers = _extract_headers(scope)\n\n trace_utils.activate_distributed_headers(\n self.tracer, int_config=self.integration_config, request_headers=headers\n )\n\n resource = \"{} {}\".format(scope[\"method\"], scope[\"path\"])\n\n span = self.tracer.trace(\n name=self.integration_config.get(\"request_span_name\", \"asgi.request\"),\n service=trace_utils.int_service(None, self.integration_config),\n resource=resource,\n span_type=SpanTypes.WEB,\n )\n\n if self.span_modifier:\n self.span_modifier(span, scope)\n\n sample_rate = self.integration_config.get_analytics_sample_rate(use_global_config=True)\n if sample_rate is not None:\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)\n\n method = scope.get(\"method\")\n server = scope.get(\"server\")\n if server and len(server) == 2:\n port = server[1]\n server_host = server[0] + (\":\" + str(port) if port is not None and port != 80 else \"\")\n full_path = scope.get(\"root_path\", \"\") + scope.get(\"path\", \"\")\n url = scope.get(\"scheme\", \"http\") + \"://\" + server_host + full_path\n else:\n url = None\n\n if self.integration_config.trace_query_string:\n query_string = scope.get(\"query_string\")\n if len(query_string) > 0:\n query_string = bytes_to_str(query_string)\n else:\n query_string = None\n\n trace_utils.set_http_meta(\n span, self.integration_config, method=method, url=url, query=query_string, request_headers=headers\n )\n\n tags = _extract_versions_from_scope(scope, self.integration_config)\n 
span.set_tags(tags)\n\n async def wrapped_send(message):\n if span and message.get(\"type\") == \"http.response.start\" and \"status\" in message:\n status_code = message[\"status\"]\n else:\n status_code = None\n\n if \"headers\" in message:\n response_headers = message[\"headers\"]\n else:\n response_headers = None\n\n trace_utils.set_http_meta(\n span, self.integration_config, status_code=status_code, response_headers=response_headers\n )\n\n return await send(message)\n\n try:\n return await self.app(scope, receive, wrapped_send)\n except Exception as exc:\n (exc_type, exc_val, exc_tb) = sys.exc_info()\n span.set_exc_info(exc_type, exc_val, exc_tb)\n self.handle_exception_span(exc, span)\n reraise(exc_type, exc_val, exc_tb)\n finally:\n span.finish()\n", "path": "ddtrace/contrib/asgi/middleware.py"}], "after_files": [{"content": "import sys\n\nimport ddtrace\nfrom ddtrace import config\nfrom ddtrace.constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ddtrace.ext import SpanTypes\nfrom ddtrace.ext import http\n\nfrom .. import trace_utils\nfrom ...internal.compat import reraise\nfrom ...internal.logger import get_logger\nfrom .utils import guarantee_single_callable\n\n\nlog = get_logger(__name__)\n\nconfig._add(\n \"asgi\",\n dict(service_name=config._get_service(default=\"asgi\"), request_span_name=\"asgi.request\", distributed_tracing=True),\n)\n\nASGI_VERSION = \"asgi.version\"\nASGI_SPEC_VERSION = \"asgi.spec_version\"\n\n\ndef bytes_to_str(str_or_bytes):\n return str_or_bytes.decode() if isinstance(str_or_bytes, bytes) else str_or_bytes\n\n\ndef _extract_versions_from_scope(scope, integration_config):\n tags = {}\n\n http_version = scope.get(\"http_version\")\n if http_version:\n tags[http.VERSION] = http_version\n\n scope_asgi = scope.get(\"asgi\")\n\n if scope_asgi and \"version\" in scope_asgi:\n tags[ASGI_VERSION] = scope_asgi[\"version\"]\n\n if scope_asgi and \"spec_version\" in scope_asgi:\n tags[ASGI_SPEC_VERSION] = scope_asgi[\"spec_version\"]\n\n return tags\n\n\ndef _extract_headers(scope):\n headers = scope.get(\"headers\")\n if headers:\n # headers: (Iterable[[byte string, byte string]])\n return dict((bytes_to_str(k), bytes_to_str(v)) for (k, v) in headers)\n return {}\n\n\ndef _default_handle_exception_span(exc, span):\n \"\"\"Default handler for exception for span\"\"\"\n span.set_tag(http.STATUS_CODE, 500)\n\n\nclass TraceMiddleware:\n \"\"\"\n ASGI application middleware that traces the requests.\n\n Args:\n app: The ASGI application.\n tracer: Custom tracer. 
Defaults to the global tracer.\n \"\"\"\n\n def __init__(\n self,\n app,\n tracer=None,\n integration_config=config.asgi,\n handle_exception_span=_default_handle_exception_span,\n span_modifier=None,\n ):\n self.app = guarantee_single_callable(app)\n self.tracer = tracer or ddtrace.tracer\n self.integration_config = integration_config\n self.handle_exception_span = handle_exception_span\n self.span_modifier = span_modifier\n\n async def __call__(self, scope, receive, send):\n if scope[\"type\"] != \"http\":\n return await self.app(scope, receive, send)\n\n try:\n headers = _extract_headers(scope)\n except Exception:\n log.warning(\"failed to decode headers for distributed tracing\", exc_info=True)\n headers = {}\n else:\n trace_utils.activate_distributed_headers(\n self.tracer, int_config=self.integration_config, request_headers=headers\n )\n\n resource = \"{} {}\".format(scope[\"method\"], scope[\"path\"])\n\n span = self.tracer.trace(\n name=self.integration_config.get(\"request_span_name\", \"asgi.request\"),\n service=trace_utils.int_service(None, self.integration_config),\n resource=resource,\n span_type=SpanTypes.WEB,\n )\n\n if self.span_modifier:\n self.span_modifier(span, scope)\n\n sample_rate = self.integration_config.get_analytics_sample_rate(use_global_config=True)\n if sample_rate is not None:\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, sample_rate)\n\n method = scope.get(\"method\")\n server = scope.get(\"server\")\n if server and len(server) == 2:\n port = server[1]\n server_host = server[0] + (\":\" + str(port) if port is not None and port != 80 else \"\")\n full_path = scope.get(\"root_path\", \"\") + scope.get(\"path\", \"\")\n url = scope.get(\"scheme\", \"http\") + \"://\" + server_host + full_path\n else:\n url = None\n\n if self.integration_config.trace_query_string:\n query_string = scope.get(\"query_string\")\n if len(query_string) > 0:\n query_string = bytes_to_str(query_string)\n else:\n query_string = None\n\n trace_utils.set_http_meta(\n span, self.integration_config, method=method, url=url, query=query_string, request_headers=headers\n )\n\n tags = _extract_versions_from_scope(scope, self.integration_config)\n span.set_tags(tags)\n\n async def wrapped_send(message):\n if span and message.get(\"type\") == \"http.response.start\" and \"status\" in message:\n status_code = message[\"status\"]\n else:\n status_code = None\n\n if \"headers\" in message:\n response_headers = message[\"headers\"]\n else:\n response_headers = None\n\n trace_utils.set_http_meta(\n span, self.integration_config, status_code=status_code, response_headers=response_headers\n )\n\n return await send(message)\n\n try:\n return await self.app(scope, receive, wrapped_send)\n except Exception as exc:\n (exc_type, exc_val, exc_tb) = sys.exc_info()\n span.set_exc_info(exc_type, exc_val, exc_tb)\n self.handle_exception_span(exc, span)\n reraise(exc_type, exc_val, exc_tb)\n finally:\n span.finish()\n", "path": "ddtrace/contrib/asgi/middleware.py"}]} | 2,179 | 209 |
gh_patches_debug_7432 | rasdani/github-patches | git_diff | pulp__pulpcore-3412 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
0077_move_remote_url_credentials.py fails on Remotes that have @ in path, not netloc
**Version**
3.18.10
**Describe the bug**
Migration 0077 fails when you have a remote that has an @ somewhere in the path
```
Applying core.0077_move_remote_url_credentials...Traceback (most recent call last):
File "/usr/bin/pulpcore-manager", line 33, in <module>
sys.exit(load_entry_point('pulpcore==3.18.10', 'console_scripts', 'pulpcore-manager')())
File "/usr/lib/python3.9/site-packages/pulpcore/app/manage.py", line 11, in manage
execute_from_command_line(sys.argv)
File "/usr/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 89, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 244, in handle
post_migrate_state = executor.migrate(
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 227, in apply_migration
state = migration.apply(state, schema_editor)
File "/usr/lib/python3.9/site-packages/django/db/migrations/migration.py", line 126, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/usr/lib/python3.9/site-packages/django/db/migrations/operations/special.py", line 190, in database_forwards
self.code(from_state.apps, schema_editor)
File "/usr/lib/python3.9/site-packages/pulpcore/app/migrations/0077_move_remote_url_credentials.py", line 19, in move_remote_url_credentials
_, url_split = url.netloc.rsplit("@", maxsplit=1)
ValueError: not enough values to unpack (expected 2, got 1)
```
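For context, a minimal sketch of the failure mode using the remote URL from this report — the `@` sits in the URL's path, not its netloc, so the migration's two-value unpack has nothing to split on:
```python
from urllib.parse import urlparse

url = urlparse("https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/")
print(url.netloc)  # download.copr.fedorainfracloud.org  (no "@")
print(url.path)    # /results/@caddy/caddy/epel-8-x86_64/  (this is where the "@" lives)
# rsplit on a netloc without "@" returns a single element, so
# `_, url_split = url.netloc.rsplit("@", maxsplit=1)` raises the ValueError shown above.
print(url.netloc.rsplit("@", maxsplit=1))  # ['download.copr.fedorainfracloud.org']
```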
**To Reproduce**
Steps to reproduce the behavior:
* Have a remote `https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/`
* Try to migrate 0077
**Expected behavior**
The migration applies.
**Additional context**
https://community.theforeman.org/t/foreman-3-3-katello-4-5-upgrade-failed-pulpcore-manager-migrate-noinput/31088
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/migrations/0077_move_remote_url_credentials.py`
Content:
```
1 # Generated by Django 3.2.6 on 2021-09-29 14:00
2
3 from urllib.parse import urlparse, urlunparse
4
5 from django.db import migrations
6
7
8 def move_remote_url_credentials(apps, schema_editor):
9 Remote = apps.get_model("core", "Remote")
10
11 for remote in Remote.objects.filter(url__contains="@").iterator():
12 url = urlparse(remote.url)
13
14 if not remote.username:
15 remote.username = url.username
16 if not remote.password:
17 remote.password = url.password
18
19 _, url_split = url.netloc.rsplit("@", maxsplit=1)
20 remote.url = urlunparse(url._replace(netloc=url_split))
21 remote.save()
22
23
24 class Migration(migrations.Migration):
25
26 dependencies = [
27 ('core', '0076_remove_reserved_resource'),
28 ]
29
30 operations = [
31 migrations.RunPython(
32 code=move_remote_url_credentials,
33 reverse_code=migrations.RunPython.noop,
34 elidable=True,
35 )
36 ]
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulpcore/app/migrations/0077_move_remote_url_credentials.py b/pulpcore/app/migrations/0077_move_remote_url_credentials.py
--- a/pulpcore/app/migrations/0077_move_remote_url_credentials.py
+++ b/pulpcore/app/migrations/0077_move_remote_url_credentials.py
@@ -11,6 +11,11 @@
for remote in Remote.objects.filter(url__contains="@").iterator():
url = urlparse(remote.url)
+ if '@' not in url.netloc:
+ # URLs can have an @ in other places than the netloc,
+ # but those do not indicate credentials
+ continue
+
if not remote.username:
remote.username = url.username
if not remote.password:
| {"golden_diff": "diff --git a/pulpcore/app/migrations/0077_move_remote_url_credentials.py b/pulpcore/app/migrations/0077_move_remote_url_credentials.py\n--- a/pulpcore/app/migrations/0077_move_remote_url_credentials.py\n+++ b/pulpcore/app/migrations/0077_move_remote_url_credentials.py\n@@ -11,6 +11,11 @@\n for remote in Remote.objects.filter(url__contains=\"@\").iterator():\n url = urlparse(remote.url)\n \n+ if '@' not in url.netloc:\n+ # URLs can have an @ in other places than the netloc,\n+ # but those do not indicate credentials\n+ continue\n+\n if not remote.username:\n remote.username = url.username\n if not remote.password:\n", "issue": "0077_move_remote_url_credentials.py fails on Remotes that have @ in path, not netloc\n**Version**\r\n3.18.10\r\n\r\n**Describe the bug**\r\nMigration 0077 fails when you have a remote that has an @ somewhere in the path\r\n\r\n```\r\n Applying core.0077_move_remote_url_credentials...Traceback (most recent call last):\r\n File \"/usr/bin/pulpcore-manager\", line 33, in <module>\r\n sys.exit(load_entry_point('pulpcore==3.18.10', 'console_scripts', 'pulpcore-manager')())\r\n File \"/usr/lib/python3.9/site-packages/pulpcore/app/manage.py\", line 11, in manage\r\n execute_from_command_line(sys.argv)\r\n File \"/usr/lib/python3.9/site-packages/django/core/management/__init__.py\", line 419, in execute_from_command_line\r\n utility.execute()\r\n File \"/usr/lib/python3.9/site-packages/django/core/management/__init__.py\", line 413, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/usr/lib/python3.9/site-packages/django/core/management/base.py\", line 354, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/usr/lib/python3.9/site-packages/django/core/management/base.py\", line 398, in execute\r\n output = self.handle(*args, **options)\r\n File \"/usr/lib/python3.9/site-packages/django/core/management/base.py\", line 89, in wrapped\r\n res = handle_func(*args, **kwargs)\r\n File \"/usr/lib/python3.9/site-packages/django/core/management/commands/migrate.py\", line 244, in handle\r\n post_migrate_state = executor.migrate(\r\n File \"/usr/lib/python3.9/site-packages/django/db/migrations/executor.py\", line 117, in migrate\r\n state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)\r\n File \"/usr/lib/python3.9/site-packages/django/db/migrations/executor.py\", line 147, in _migrate_all_forwards\r\n state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)\r\n File \"/usr/lib/python3.9/site-packages/django/db/migrations/executor.py\", line 227, in apply_migration\r\n state = migration.apply(state, schema_editor)\r\n File \"/usr/lib/python3.9/site-packages/django/db/migrations/migration.py\", line 126, in apply\r\n operation.database_forwards(self.app_label, schema_editor, old_state, project_state)\r\n File \"/usr/lib/python3.9/site-packages/django/db/migrations/operations/special.py\", line 190, in database_forwards\r\n self.code(from_state.apps, schema_editor)\r\n File \"/usr/lib/python3.9/site-packages/pulpcore/app/migrations/0077_move_remote_url_credentials.py\", line 19, in move_remote_url_credentials\r\n _, url_split = url.netloc.rsplit(\"@\", maxsplit=1)\r\nValueError: not enough values to unpack (expected 2, got 1)\r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n* Have a remote `https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/`\r\n* Try to migrate 0077\r\n\r\n**Expected 
behavior**\r\nmigration aplies\r\n\r\n**Additional context**\r\nhttps://community.theforeman.org/t/foreman-3-3-katello-4-5-upgrade-failed-pulpcore-manager-migrate-noinput/31088\r\n\n", "before_files": [{"content": "# Generated by Django 3.2.6 on 2021-09-29 14:00\n\nfrom urllib.parse import urlparse, urlunparse\n\nfrom django.db import migrations\n\n\ndef move_remote_url_credentials(apps, schema_editor):\n Remote = apps.get_model(\"core\", \"Remote\")\n\n for remote in Remote.objects.filter(url__contains=\"@\").iterator():\n url = urlparse(remote.url)\n\n if not remote.username:\n remote.username = url.username\n if not remote.password:\n remote.password = url.password\n\n _, url_split = url.netloc.rsplit(\"@\", maxsplit=1)\n remote.url = urlunparse(url._replace(netloc=url_split))\n remote.save()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0076_remove_reserved_resource'),\n ]\n\n operations = [\n migrations.RunPython(\n code=move_remote_url_credentials,\n reverse_code=migrations.RunPython.noop,\n elidable=True,\n )\n ]\n", "path": "pulpcore/app/migrations/0077_move_remote_url_credentials.py"}], "after_files": [{"content": "# Generated by Django 3.2.6 on 2021-09-29 14:00\n\nfrom urllib.parse import urlparse, urlunparse\n\nfrom django.db import migrations\n\n\ndef move_remote_url_credentials(apps, schema_editor):\n Remote = apps.get_model(\"core\", \"Remote\")\n\n for remote in Remote.objects.filter(url__contains=\"@\").iterator():\n url = urlparse(remote.url)\n\n if '@' not in url.netloc:\n # URLs can have an @ in other places than the netloc,\n # but those do not indicate credentials\n continue\n\n if not remote.username:\n remote.username = url.username\n if not remote.password:\n remote.password = url.password\n\n _, url_split = url.netloc.rsplit(\"@\", maxsplit=1)\n remote.url = urlunparse(url._replace(netloc=url_split))\n remote.save()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('core', '0076_remove_reserved_resource'),\n ]\n\n operations = [\n migrations.RunPython(\n code=move_remote_url_credentials,\n reverse_code=migrations.RunPython.noop,\n elidable=True,\n )\n ]\n", "path": "pulpcore/app/migrations/0077_move_remote_url_credentials.py"}]} | 1,388 | 172 |
gh_patches_debug_30344 | rasdani/github-patches | git_diff | CTFd__CTFd-760 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dynamic challenges do not work when using user mode
**Environment**:
- CTFd Version/Commit: `2bd310b5d99f4305f8b61076f9af3b7e3c91ba8f`
- Operating System: `ubuntu 16.04`
- Web Browser and Version: `Chrome`
When using user mode, `solve_count` always gets 0
https://github.com/CTFd/CTFd/blob/master/CTFd/plugins/dynamic_challenges/__init__.py#L95
```
solve_count = Solves.query \
.join(Teams, Solves.team_id == Teams.id) \
.filter(Solves.challenge_id == challenge.id, Teams.banned == False) \
.count()
```
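One possible direction — a sketch only, not necessarily the project's actual patch — is to join against the account model for the configured mode instead of hard-coding `Teams`; the `get_model` helper and the `account_id` attribute used below are assumptions about CTFd's internals:
```python
from CTFd.utils.modes import get_model  # assumed: returns Users in user mode, Teams in team mode

Model = get_model()
solve_count = Solves.query \
    .join(Model, Solves.account_id == Model.id) \
    .filter(Solves.challenge_id == challenge.id, Model.banned == False) \
    .count()
```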
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/plugins/dynamic_challenges/__init__.py`
Content:
```
1 from __future__ import division # Use floating point for math calculations
2 from CTFd.plugins.challenges import BaseChallenge, CHALLENGE_CLASSES
3 from CTFd.plugins import register_plugin_assets_directory
4 from CTFd.plugins.flags import get_flag_class
5 from CTFd.models import db, Solves, Fails, Flags, Challenges, ChallengeFiles, Tags, Teams, Hints
6 from CTFd import utils
7 from CTFd.utils.migrations import upgrade
8 from CTFd.utils.user import get_ip
9 from CTFd.utils.uploads import upload_file, delete_file
10 from flask import Blueprint
11 import math
12
13
14 class DynamicValueChallenge(BaseChallenge):
15 id = "dynamic" # Unique identifier used to register challenges
16 name = "dynamic" # Name of a challenge type
17 templates = { # Handlebars templates used for each aspect of challenge editing & viewing
18 'create': '/plugins/dynamic_challenges/assets/create.html',
19 'update': '/plugins/dynamic_challenges/assets/update.html',
20 'view': '/plugins/dynamic_challenges/assets/view.html',
21 }
22 scripts = { # Scripts that are loaded when a template is loaded
23 'create': '/plugins/dynamic_challenges/assets/create.js',
24 'update': '/plugins/dynamic_challenges/assets/update.js',
25 'view': '/plugins/dynamic_challenges/assets/view.js',
26 }
27 # Route at which files are accessible. This must be registered using register_plugin_assets_directory()
28 route = '/plugins/dynamic_challenges/assets/'
29 # Blueprint used to access the static_folder directory.
30 blueprint = Blueprint('dynamic_challenges', __name__, template_folder='templates', static_folder='assets')
31
32 @staticmethod
33 def create(request):
34 """
35 This method is used to process the challenge creation request.
36
37 :param request:
38 :return:
39 """
40 data = request.form or request.get_json()
41 challenge = DynamicChallenge(**data)
42
43 db.session.add(challenge)
44 db.session.commit()
45
46 return challenge
47
48 @staticmethod
49 def read(challenge):
50 """
51 This method is in used to access the data of a challenge in a format processable by the front end.
52
53 :param challenge:
54 :return: Challenge object, data dictionary to be returned to the user
55 """
56 challenge = DynamicChallenge.query.filter_by(id=challenge.id).first()
57 data = {
58 'id': challenge.id,
59 'name': challenge.name,
60 'value': challenge.value,
61 'initial': challenge.initial,
62 'decay': challenge.decay,
63 'minimum': challenge.minimum,
64 'description': challenge.description,
65 'category': challenge.category,
66 'state': challenge.state,
67 'max_attempts': challenge.max_attempts,
68 'type': challenge.type,
69 'type_data': {
70 'id': DynamicValueChallenge.id,
71 'name': DynamicValueChallenge.name,
72 'templates': DynamicValueChallenge.templates,
73 'scripts': DynamicValueChallenge.scripts,
74 }
75 }
76 return data
77
78 @staticmethod
79 def update(challenge, request):
80 """
81 This method is used to update the information associated with a challenge. This should be kept strictly to the
82 Challenges table and any child tables.
83
84 :param challenge:
85 :param request:
86 :return:
87 """
88 data = request.form or request.get_json()
89 data['initial'] = float(data.get('initial', 0))
90 data['minimum'] = float(data.get('minimum', 0))
91 data['decay'] = float(data.get('decay', 0))
92 for attr, value in data.items():
93 setattr(challenge, attr, value)
94
95 solve_count = Solves.query \
96 .join(Teams, Solves.team_id == Teams.id) \
97 .filter(Solves.challenge_id == challenge.id, Teams.banned == False) \
98 .count()
99
100 # It is important that this calculation takes into account floats.
101 # Hence this file uses from __future__ import division
102 value = (((challenge.minimum - challenge.initial) / (challenge.decay ** 2)) * (solve_count ** 2)) + challenge.initial
103
104 value = math.ceil(value)
105
106 if value < challenge.minimum:
107 value = challenge.minimum
108
109 challenge.value = value
110
111 db.session.commit()
112 return challenge
113
114 @staticmethod
115 def delete(challenge):
116 """
117 This method is used to delete the resources used by a challenge.
118
119 :param challenge:
120 :return:
121 """
122 Fails.query.filter_by(challenge_id=challenge.id).delete()
123 Solves.query.filter_by(challenge_id=challenge.id).delete()
124 Flags.query.filter_by(challenge_id=challenge.id).delete()
125 files = ChallengeFiles.query.filter_by(challenge_id=challenge.id).all()
126 for f in files:
127 delete_file(f.id)
128 ChallengeFiles.query.filter_by(challenge_id=challenge.id).delete()
129 Tags.query.filter_by(challenge_id=challenge.id).delete()
130 Hints.query.filter_by(challenge_id=challenge.id).delete()
131 DynamicChallenge.query.filter_by(id=challenge.id).delete()
132 Challenges.query.filter_by(id=challenge.id).delete()
133 db.session.commit()
134
135 @staticmethod
136 def attempt(challenge, request):
137 """
138 This method is used to check whether a given input is right or wrong. It does not make any changes and should
139 return a boolean for correctness and a string to be shown to the user. It is also in charge of parsing the
140 user's input from the request itself.
141
142 :param challenge: The Challenge object from the database
143 :param request: The request the user submitted
144 :return: (boolean, string)
145 """
146 data = request.form or request.get_json()
147 submission = data['submission'].strip()
148 flags = Flags.query.filter_by(challenge_id=challenge.id).all()
149 for flag in flags:
150 if get_flag_class(flag.type).compare(flag, submission):
151 return True, 'Correct'
152 return False, 'Incorrect'
153
154 @staticmethod
155 def solve(user, team, challenge, request):
156 """
157 This method is used to insert Solves into the database in order to mark a challenge as solved.
158
159 :param team: The Team object from the database
160 :param chal: The Challenge object from the database
161 :param request: The request the user submitted
162 :return:
163 """
164 chal = DynamicChallenge.query.filter_by(id=challenge.id).first()
165 data = request.form or request.get_json()
166 submission = data['submission'].strip()
167
168 solve_count = Solves.query\
169 .join(Teams, Solves.team_id == Teams.id)\
170 .filter(Solves.challenge_id == chal.id, Teams.banned == False)\
171 .count()
172
173 # It is important that this calculation takes into account floats.
174 # Hence this file uses from __future__ import division
175 value = (
176 (
177 (chal.minimum - chal.initial) / (chal.decay**2)
178 ) * (solve_count**2)
179 ) + chal.initial
180
181 value = math.ceil(value)
182
183 if value < chal.minimum:
184 value = chal.minimum
185
186 chal.value = value
187
188 solve = Solves(
189 user_id=user.id,
190 team_id=team.id if team else None,
191 challenge_id=challenge.id,
192 ip=get_ip(req=request),
193 provided=submission
194 )
195 db.session.add(solve)
196 db.session.commit()
197 db.session.close()
198
199 @staticmethod
200 def fail(user, team, challenge, request):
201 """
202 This method is used to insert Fails into the database in order to mark an answer incorrect.
203
204 :param team: The Team object from the database
205 :param challenge: The Challenge object from the database
206 :param request: The request the user submitted
207 :return:
208 """
209 data = request.form or request.get_json()
210 submission = data['submission'].strip()
211 wrong = Fails(
212 user_id=user.id,
213 team_id=team.id if team else None,
214 challenge_id=challenge.id,
215 ip=get_ip(request),
216 provided=submission
217 )
218 db.session.add(wrong)
219 db.session.commit()
220 db.session.close()
221
222
223 class DynamicChallenge(Challenges):
224 __mapper_args__ = {'polymorphic_identity': 'dynamic'}
225 id = db.Column(None, db.ForeignKey('challenges.id'), primary_key=True)
226 initial = db.Column(db.Integer)
227 minimum = db.Column(db.Integer)
228 decay = db.Column(db.Integer)
229
230 def __init__(self, *args, **kwargs):
231 super(DynamicChallenge, self).__init__(**kwargs)
232 self.initial = kwargs['value']
233
234
235 def load(app):
236 # upgrade()
237 app.db.create_all()
238 CHALLENGE_CLASSES['dynamic'] = DynamicValueChallenge
239 register_plugin_assets_directory(app, base_path='/plugins/dynamic_challenges/assets/')
240
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/CTFd/plugins/dynamic_challenges/__init__.py b/CTFd/plugins/dynamic_challenges/__init__.py
--- a/CTFd/plugins/dynamic_challenges/__init__.py
+++ b/CTFd/plugins/dynamic_challenges/__init__.py
@@ -7,6 +7,7 @@
from CTFd.utils.migrations import upgrade
from CTFd.utils.user import get_ip
from CTFd.utils.uploads import upload_file, delete_file
+from CTFd.utils.modes import get_model
from flask import Blueprint
import math
@@ -92,9 +93,11 @@
for attr, value in data.items():
setattr(challenge, attr, value)
+ Model = get_model()
+
solve_count = Solves.query \
- .join(Teams, Solves.team_id == Teams.id) \
- .filter(Solves.challenge_id == challenge.id, Teams.banned == False) \
+ .join(Model, Solves.account_id == Model.id) \
+ .filter(Solves.challenge_id == challenge.id, Model.hidden == False, Model.banned == False) \
.count()
# It is important that this calculation takes into account floats.
@@ -165,9 +168,11 @@
data = request.form or request.get_json()
submission = data['submission'].strip()
- solve_count = Solves.query\
- .join(Teams, Solves.team_id == Teams.id)\
- .filter(Solves.challenge_id == chal.id, Teams.banned == False)\
+ Model = get_model()
+
+ solve_count = Solves.query \
+ .join(Model, Solves.account_id == Model.id) \
+ .filter(Solves.challenge_id == challenge.id, Model.hidden == False, Model.banned == False) \
.count()
# It is important that this calculation takes into account floats.
| {"golden_diff": "diff --git a/CTFd/plugins/dynamic_challenges/__init__.py b/CTFd/plugins/dynamic_challenges/__init__.py\n--- a/CTFd/plugins/dynamic_challenges/__init__.py\n+++ b/CTFd/plugins/dynamic_challenges/__init__.py\n@@ -7,6 +7,7 @@\n from CTFd.utils.migrations import upgrade\n from CTFd.utils.user import get_ip\n from CTFd.utils.uploads import upload_file, delete_file\n+from CTFd.utils.modes import get_model\n from flask import Blueprint\n import math\n \n@@ -92,9 +93,11 @@\n for attr, value in data.items():\n setattr(challenge, attr, value)\n \n+ Model = get_model()\n+\n solve_count = Solves.query \\\n- .join(Teams, Solves.team_id == Teams.id) \\\n- .filter(Solves.challenge_id == challenge.id, Teams.banned == False) \\\n+ .join(Model, Solves.account_id == Model.id) \\\n+ .filter(Solves.challenge_id == challenge.id, Model.hidden == False, Model.banned == False) \\\n .count()\n \n # It is important that this calculation takes into account floats.\n@@ -165,9 +168,11 @@\n data = request.form or request.get_json()\n submission = data['submission'].strip()\n \n- solve_count = Solves.query\\\n- .join(Teams, Solves.team_id == Teams.id)\\\n- .filter(Solves.challenge_id == chal.id, Teams.banned == False)\\\n+ Model = get_model()\n+\n+ solve_count = Solves.query \\\n+ .join(Model, Solves.account_id == Model.id) \\\n+ .filter(Solves.challenge_id == challenge.id, Model.hidden == False, Model.banned == False) \\\n .count()\n \n # It is important that this calculation takes into account floats.\n", "issue": "dynamic challenges not work when using user mode\n\r\n**Environment**:\r\n\r\n - CTFd Version/Commit:`2bd310b5d99f4305f8b61076f9af3b7e3c91ba8f`\r\n - Operating System: `ubuntu 16.04`\r\n - Web Browser and Version: `Chrome`\r\n\r\nwhen using user mode , `solve_count` always get 0 \r\n\r\nhttps://github.com/CTFd/CTFd/blob/master/CTFd/plugins/dynamic_challenges/__init__.py#L95\r\n```\r\n solve_count = Solves.query \\\r\n .join(Teams, Solves.team_id == Teams.id) \\\r\n .filter(Solves.challenge_id == challenge.id, Teams.banned == False) \\\r\n .count()\r\n```\r\n\n", "before_files": [{"content": "from __future__ import division # Use floating point for math calculations\nfrom CTFd.plugins.challenges import BaseChallenge, CHALLENGE_CLASSES\nfrom CTFd.plugins import register_plugin_assets_directory\nfrom CTFd.plugins.flags import get_flag_class\nfrom CTFd.models import db, Solves, Fails, Flags, Challenges, ChallengeFiles, Tags, Teams, Hints\nfrom CTFd import utils\nfrom CTFd.utils.migrations import upgrade\nfrom CTFd.utils.user import get_ip\nfrom CTFd.utils.uploads import upload_file, delete_file\nfrom flask import Blueprint\nimport math\n\n\nclass DynamicValueChallenge(BaseChallenge):\n id = \"dynamic\" # Unique identifier used to register challenges\n name = \"dynamic\" # Name of a challenge type\n templates = { # Handlebars templates used for each aspect of challenge editing & viewing\n 'create': '/plugins/dynamic_challenges/assets/create.html',\n 'update': '/plugins/dynamic_challenges/assets/update.html',\n 'view': '/plugins/dynamic_challenges/assets/view.html',\n }\n scripts = { # Scripts that are loaded when a template is loaded\n 'create': '/plugins/dynamic_challenges/assets/create.js',\n 'update': '/plugins/dynamic_challenges/assets/update.js',\n 'view': '/plugins/dynamic_challenges/assets/view.js',\n }\n # Route at which files are accessible. 
This must be registered using register_plugin_assets_directory()\n route = '/plugins/dynamic_challenges/assets/'\n # Blueprint used to access the static_folder directory.\n blueprint = Blueprint('dynamic_challenges', __name__, template_folder='templates', static_folder='assets')\n\n @staticmethod\n def create(request):\n \"\"\"\n This method is used to process the challenge creation request.\n\n :param request:\n :return:\n \"\"\"\n data = request.form or request.get_json()\n challenge = DynamicChallenge(**data)\n\n db.session.add(challenge)\n db.session.commit()\n\n return challenge\n\n @staticmethod\n def read(challenge):\n \"\"\"\n This method is in used to access the data of a challenge in a format processable by the front end.\n\n :param challenge:\n :return: Challenge object, data dictionary to be returned to the user\n \"\"\"\n challenge = DynamicChallenge.query.filter_by(id=challenge.id).first()\n data = {\n 'id': challenge.id,\n 'name': challenge.name,\n 'value': challenge.value,\n 'initial': challenge.initial,\n 'decay': challenge.decay,\n 'minimum': challenge.minimum,\n 'description': challenge.description,\n 'category': challenge.category,\n 'state': challenge.state,\n 'max_attempts': challenge.max_attempts,\n 'type': challenge.type,\n 'type_data': {\n 'id': DynamicValueChallenge.id,\n 'name': DynamicValueChallenge.name,\n 'templates': DynamicValueChallenge.templates,\n 'scripts': DynamicValueChallenge.scripts,\n }\n }\n return data\n\n @staticmethod\n def update(challenge, request):\n \"\"\"\n This method is used to update the information associated with a challenge. This should be kept strictly to the\n Challenges table and any child tables.\n\n :param challenge:\n :param request:\n :return:\n \"\"\"\n data = request.form or request.get_json()\n data['initial'] = float(data.get('initial', 0))\n data['minimum'] = float(data.get('minimum', 0))\n data['decay'] = float(data.get('decay', 0))\n for attr, value in data.items():\n setattr(challenge, attr, value)\n\n solve_count = Solves.query \\\n .join(Teams, Solves.team_id == Teams.id) \\\n .filter(Solves.challenge_id == challenge.id, Teams.banned == False) \\\n .count()\n\n # It is important that this calculation takes into account floats.\n # Hence this file uses from __future__ import division\n value = (((challenge.minimum - challenge.initial) / (challenge.decay ** 2)) * (solve_count ** 2)) + challenge.initial\n\n value = math.ceil(value)\n\n if value < challenge.minimum:\n value = challenge.minimum\n\n challenge.value = value\n\n db.session.commit()\n return challenge\n\n @staticmethod\n def delete(challenge):\n \"\"\"\n This method is used to delete the resources used by a challenge.\n\n :param challenge:\n :return:\n \"\"\"\n Fails.query.filter_by(challenge_id=challenge.id).delete()\n Solves.query.filter_by(challenge_id=challenge.id).delete()\n Flags.query.filter_by(challenge_id=challenge.id).delete()\n files = ChallengeFiles.query.filter_by(challenge_id=challenge.id).all()\n for f in files:\n delete_file(f.id)\n ChallengeFiles.query.filter_by(challenge_id=challenge.id).delete()\n Tags.query.filter_by(challenge_id=challenge.id).delete()\n Hints.query.filter_by(challenge_id=challenge.id).delete()\n DynamicChallenge.query.filter_by(id=challenge.id).delete()\n Challenges.query.filter_by(id=challenge.id).delete()\n db.session.commit()\n\n @staticmethod\n def attempt(challenge, request):\n \"\"\"\n This method is used to check whether a given input is right or wrong. 
It does not make any changes and should\n return a boolean for correctness and a string to be shown to the user. It is also in charge of parsing the\n user's input from the request itself.\n\n :param challenge: The Challenge object from the database\n :param request: The request the user submitted\n :return: (boolean, string)\n \"\"\"\n data = request.form or request.get_json()\n submission = data['submission'].strip()\n flags = Flags.query.filter_by(challenge_id=challenge.id).all()\n for flag in flags:\n if get_flag_class(flag.type).compare(flag, submission):\n return True, 'Correct'\n return False, 'Incorrect'\n\n @staticmethod\n def solve(user, team, challenge, request):\n \"\"\"\n This method is used to insert Solves into the database in order to mark a challenge as solved.\n\n :param team: The Team object from the database\n :param chal: The Challenge object from the database\n :param request: The request the user submitted\n :return:\n \"\"\"\n chal = DynamicChallenge.query.filter_by(id=challenge.id).first()\n data = request.form or request.get_json()\n submission = data['submission'].strip()\n\n solve_count = Solves.query\\\n .join(Teams, Solves.team_id == Teams.id)\\\n .filter(Solves.challenge_id == chal.id, Teams.banned == False)\\\n .count()\n\n # It is important that this calculation takes into account floats.\n # Hence this file uses from __future__ import division\n value = (\n (\n (chal.minimum - chal.initial) / (chal.decay**2)\n ) * (solve_count**2)\n ) + chal.initial\n\n value = math.ceil(value)\n\n if value < chal.minimum:\n value = chal.minimum\n\n chal.value = value\n\n solve = Solves(\n user_id=user.id,\n team_id=team.id if team else None,\n challenge_id=challenge.id,\n ip=get_ip(req=request),\n provided=submission\n )\n db.session.add(solve)\n db.session.commit()\n db.session.close()\n\n @staticmethod\n def fail(user, team, challenge, request):\n \"\"\"\n This method is used to insert Fails into the database in order to mark an answer incorrect.\n\n :param team: The Team object from the database\n :param challenge: The Challenge object from the database\n :param request: The request the user submitted\n :return:\n \"\"\"\n data = request.form or request.get_json()\n submission = data['submission'].strip()\n wrong = Fails(\n user_id=user.id,\n team_id=team.id if team else None,\n challenge_id=challenge.id,\n ip=get_ip(request),\n provided=submission\n )\n db.session.add(wrong)\n db.session.commit()\n db.session.close()\n\n\nclass DynamicChallenge(Challenges):\n __mapper_args__ = {'polymorphic_identity': 'dynamic'}\n id = db.Column(None, db.ForeignKey('challenges.id'), primary_key=True)\n initial = db.Column(db.Integer)\n minimum = db.Column(db.Integer)\n decay = db.Column(db.Integer)\n\n def __init__(self, *args, **kwargs):\n super(DynamicChallenge, self).__init__(**kwargs)\n self.initial = kwargs['value']\n\n\ndef load(app):\n # upgrade()\n app.db.create_all()\n CHALLENGE_CLASSES['dynamic'] = DynamicValueChallenge\n register_plugin_assets_directory(app, base_path='/plugins/dynamic_challenges/assets/')\n", "path": "CTFd/plugins/dynamic_challenges/__init__.py"}], "after_files": [{"content": "from __future__ import division # Use floating point for math calculations\nfrom CTFd.plugins.challenges import BaseChallenge, CHALLENGE_CLASSES\nfrom CTFd.plugins import register_plugin_assets_directory\nfrom CTFd.plugins.flags import get_flag_class\nfrom CTFd.models import db, Solves, Fails, Flags, Challenges, ChallengeFiles, Tags, Teams, Hints\nfrom CTFd import utils\nfrom 
CTFd.utils.migrations import upgrade\nfrom CTFd.utils.user import get_ip\nfrom CTFd.utils.uploads import upload_file, delete_file\nfrom CTFd.utils.modes import get_model\nfrom flask import Blueprint\nimport math\n\n\nclass DynamicValueChallenge(BaseChallenge):\n id = \"dynamic\" # Unique identifier used to register challenges\n name = \"dynamic\" # Name of a challenge type\n templates = { # Handlebars templates used for each aspect of challenge editing & viewing\n 'create': '/plugins/dynamic_challenges/assets/create.html',\n 'update': '/plugins/dynamic_challenges/assets/update.html',\n 'view': '/plugins/dynamic_challenges/assets/view.html',\n }\n scripts = { # Scripts that are loaded when a template is loaded\n 'create': '/plugins/dynamic_challenges/assets/create.js',\n 'update': '/plugins/dynamic_challenges/assets/update.js',\n 'view': '/plugins/dynamic_challenges/assets/view.js',\n }\n # Route at which files are accessible. This must be registered using register_plugin_assets_directory()\n route = '/plugins/dynamic_challenges/assets/'\n # Blueprint used to access the static_folder directory.\n blueprint = Blueprint('dynamic_challenges', __name__, template_folder='templates', static_folder='assets')\n\n @staticmethod\n def create(request):\n \"\"\"\n This method is used to process the challenge creation request.\n\n :param request:\n :return:\n \"\"\"\n data = request.form or request.get_json()\n challenge = DynamicChallenge(**data)\n\n db.session.add(challenge)\n db.session.commit()\n\n return challenge\n\n @staticmethod\n def read(challenge):\n \"\"\"\n This method is in used to access the data of a challenge in a format processable by the front end.\n\n :param challenge:\n :return: Challenge object, data dictionary to be returned to the user\n \"\"\"\n challenge = DynamicChallenge.query.filter_by(id=challenge.id).first()\n data = {\n 'id': challenge.id,\n 'name': challenge.name,\n 'value': challenge.value,\n 'initial': challenge.initial,\n 'decay': challenge.decay,\n 'minimum': challenge.minimum,\n 'description': challenge.description,\n 'category': challenge.category,\n 'state': challenge.state,\n 'max_attempts': challenge.max_attempts,\n 'type': challenge.type,\n 'type_data': {\n 'id': DynamicValueChallenge.id,\n 'name': DynamicValueChallenge.name,\n 'templates': DynamicValueChallenge.templates,\n 'scripts': DynamicValueChallenge.scripts,\n }\n }\n return data\n\n @staticmethod\n def update(challenge, request):\n \"\"\"\n This method is used to update the information associated with a challenge. 
This should be kept strictly to the\n Challenges table and any child tables.\n\n :param challenge:\n :param request:\n :return:\n \"\"\"\n data = request.form or request.get_json()\n data['initial'] = float(data.get('initial', 0))\n data['minimum'] = float(data.get('minimum', 0))\n data['decay'] = float(data.get('decay', 0))\n for attr, value in data.items():\n setattr(challenge, attr, value)\n\n Model = get_model()\n\n solve_count = Solves.query \\\n .join(Model, Solves.account_id == Model.id) \\\n .filter(Solves.challenge_id == challenge.id, Model.hidden == False, Model.banned == False) \\\n .count()\n\n # It is important that this calculation takes into account floats.\n # Hence this file uses from __future__ import division\n value = (((challenge.minimum - challenge.initial) / (challenge.decay ** 2)) * (solve_count ** 2)) + challenge.initial\n\n value = math.ceil(value)\n\n if value < challenge.minimum:\n value = challenge.minimum\n\n challenge.value = value\n\n db.session.commit()\n return challenge\n\n @staticmethod\n def delete(challenge):\n \"\"\"\n This method is used to delete the resources used by a challenge.\n\n :param challenge:\n :return:\n \"\"\"\n Fails.query.filter_by(challenge_id=challenge.id).delete()\n Solves.query.filter_by(challenge_id=challenge.id).delete()\n Flags.query.filter_by(challenge_id=challenge.id).delete()\n files = ChallengeFiles.query.filter_by(challenge_id=challenge.id).all()\n for f in files:\n delete_file(f.id)\n ChallengeFiles.query.filter_by(challenge_id=challenge.id).delete()\n Tags.query.filter_by(challenge_id=challenge.id).delete()\n Hints.query.filter_by(challenge_id=challenge.id).delete()\n DynamicChallenge.query.filter_by(id=challenge.id).delete()\n Challenges.query.filter_by(id=challenge.id).delete()\n db.session.commit()\n\n @staticmethod\n def attempt(challenge, request):\n \"\"\"\n This method is used to check whether a given input is right or wrong. It does not make any changes and should\n return a boolean for correctness and a string to be shown to the user. 
It is also in charge of parsing the\n user's input from the request itself.\n\n :param challenge: The Challenge object from the database\n :param request: The request the user submitted\n :return: (boolean, string)\n \"\"\"\n data = request.form or request.get_json()\n submission = data['submission'].strip()\n flags = Flags.query.filter_by(challenge_id=challenge.id).all()\n for flag in flags:\n if get_flag_class(flag.type).compare(flag, submission):\n return True, 'Correct'\n return False, 'Incorrect'\n\n @staticmethod\n def solve(user, team, challenge, request):\n \"\"\"\n This method is used to insert Solves into the database in order to mark a challenge as solved.\n\n :param team: The Team object from the database\n :param chal: The Challenge object from the database\n :param request: The request the user submitted\n :return:\n \"\"\"\n chal = DynamicChallenge.query.filter_by(id=challenge.id).first()\n data = request.form or request.get_json()\n submission = data['submission'].strip()\n\n Model = get_model()\n\n solve_count = Solves.query \\\n .join(Model, Solves.account_id == Model.id) \\\n .filter(Solves.challenge_id == challenge.id, Model.hidden == False, Model.banned == False) \\\n .count()\n\n # It is important that this calculation takes into account floats.\n # Hence this file uses from __future__ import division\n value = (\n (\n (chal.minimum - chal.initial) / (chal.decay**2)\n ) * (solve_count**2)\n ) + chal.initial\n\n value = math.ceil(value)\n\n if value < chal.minimum:\n value = chal.minimum\n\n chal.value = value\n\n solve = Solves(\n user_id=user.id,\n team_id=team.id if team else None,\n challenge_id=challenge.id,\n ip=get_ip(req=request),\n provided=submission\n )\n db.session.add(solve)\n db.session.commit()\n db.session.close()\n\n @staticmethod\n def fail(user, team, challenge, request):\n \"\"\"\n This method is used to insert Fails into the database in order to mark an answer incorrect.\n\n :param team: The Team object from the database\n :param challenge: The Challenge object from the database\n :param request: The request the user submitted\n :return:\n \"\"\"\n data = request.form or request.get_json()\n submission = data['submission'].strip()\n wrong = Fails(\n user_id=user.id,\n team_id=team.id if team else None,\n challenge_id=challenge.id,\n ip=get_ip(request),\n provided=submission\n )\n db.session.add(wrong)\n db.session.commit()\n db.session.close()\n\n\nclass DynamicChallenge(Challenges):\n __mapper_args__ = {'polymorphic_identity': 'dynamic'}\n id = db.Column(None, db.ForeignKey('challenges.id'), primary_key=True)\n initial = db.Column(db.Integer)\n minimum = db.Column(db.Integer)\n decay = db.Column(db.Integer)\n\n def __init__(self, *args, **kwargs):\n super(DynamicChallenge, self).__init__(**kwargs)\n self.initial = kwargs['value']\n\n\ndef load(app):\n # upgrade()\n app.db.create_all()\n CHALLENGE_CLASSES['dynamic'] = DynamicValueChallenge\n register_plugin_assets_directory(app, base_path='/plugins/dynamic_challenges/assets/')\n", "path": "CTFd/plugins/dynamic_challenges/__init__.py"}]} | 2,967 | 422 |
gh_patches_debug_112 | rasdani/github-patches | git_diff | InstaPy__InstaPy-4046 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Instapy-chromedriver not supporting latest Chrome browser version
The Instapy-chrome driver only supports Chrome upto versions 71 and since the update, the whole program quits with the error of ensure chromedriver is installed at .../insta-py/chromedriver_linux64..
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instapy/__init__.py`
Content:
```
1 # flake8: noqa
2
3 from .instapy import InstaPy
4 from .util import smart_run
5 from .settings import Settings
6 from .file_manager import set_workspace
7 from .file_manager import get_workspace
8
9
10 # __variables__ with double-quoted values will be available in setup.py
11 __version__ = "0.2.1"
12
13
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/instapy/__init__.py b/instapy/__init__.py
--- a/instapy/__init__.py
+++ b/instapy/__init__.py
@@ -8,5 +8,5 @@
# __variables__ with double-quoted values will be available in setup.py
-__version__ = "0.2.1"
+__version__ = "0.2.2"
| {"golden_diff": "diff --git a/instapy/__init__.py b/instapy/__init__.py\n--- a/instapy/__init__.py\n+++ b/instapy/__init__.py\n@@ -8,5 +8,5 @@\n \n \n # __variables__ with double-quoted values will be available in setup.py\n-__version__ = \"0.2.1\"\n+__version__ = \"0.2.2\"\n", "issue": "Instapy-chromedriver not supporting latest Chrome browser version\nThe Instapy-chrome driver only supports Chrome upto versions 71 and since the update, the whole program quits with the error of ensure chromedriver is installed at .../insta-py/chromedriver_linux64..\n", "before_files": [{"content": "# flake8: noqa\n\nfrom .instapy import InstaPy\nfrom .util import smart_run\nfrom .settings import Settings\nfrom .file_manager import set_workspace\nfrom .file_manager import get_workspace\n\n\n# __variables__ with double-quoted values will be available in setup.py\n__version__ = \"0.2.1\"\n\n", "path": "instapy/__init__.py"}], "after_files": [{"content": "# flake8: noqa\n\nfrom .instapy import InstaPy\nfrom .util import smart_run\nfrom .settings import Settings\nfrom .file_manager import set_workspace\nfrom .file_manager import get_workspace\n\n\n# __variables__ with double-quoted values will be available in setup.py\n__version__ = \"0.2.2\"\n\n", "path": "instapy/__init__.py"}]} | 412 | 91 |
gh_patches_debug_20853 | rasdani/github-patches | git_diff | adap__flower-2768 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
If gRPC server is running behing a load balancer, clients may not be able to connect
### Describe the bug
Flower is using the `peer()` method from `grpc.ServicerContext` in order to identify unique flower clients. However, in some situations (like when running the gRPC server behind a load balancer or proxy) different clients can have the same peer identifier (i.e. the same `IP:PORT`), as HTTP/2 connections are multiplexed.
### Steps/Code to Reproduce
This can be tested with a simple [Traefik](https://doc.traefik.io/traefik/) setup as follows (https://doc.traefik.io/traefik/user-guides/grpc/):
`traefik.yaml`:
```
entryPoints:
web:
address: :80
providers:
file:
filename: grpc.yaml
api: {}
```
`grpc.yaml`:
```
http:
routers:
to-grpc:
entryPoints:
- "web"
service: "srv-grpc"
rule: "HostRegexp(`.*`)"
services:
srv-grpc:
loadBalancer:
servers:
- url: h2c://127.0.0.1:5000
```
The Traefik server can be executed as follows:
```
./traefik --configFile=traefik.yaml
```
Then we can launch then a Flower sever with the following code:
```
import flwr as fl
def acc_wavg(metrics):
n = sum([i for i, _ in metrics])
acc = sum([i * metric["accuracy"] / n for i, metric in metrics])
return {"accuracy": acc}
strategy = fl.server.strategy.FedAvg(
min_available_clients=2,
evaluate_metrics_aggregation_fn=acc_wavg
)
fl.server.start_server(
server_address="0.0.0.0:5000",
config=fl.server.ServerConfig(num_rounds=3),
strategy=strategy,
)
```
### Expected Results
When we connect two clients to the loadbalancer IP, on port 80, the first one connects and keeps waiting:
```
INFO flwr 2023-06-28 10:03:14,848 | grpc.py:50 | Opened insecure gRPC connection (no certificates were passed)
DEBUG flwr 2023-06-28 10:03:14,849 | connection.py:39 | ChannelConnectivity.IDLE
DEBUG flwr 2023-06-28 10:03:14,849 | connection.py:39 | ChannelConnectivity.CONNECTING
DEBUG flwr 2023-06-28 10:03:14,875 | connection.py:39 | ChannelConnectivity.READY
```
However, the second one is not connected:
```
DEBUG flwr 2023-06-28 09:37:32,554 | connection.py:39 | ChannelConnectivity.IDLE
DEBUG flwr 2023-06-28 09:37:32,554 | connection.py:39 | ChannelConnectivity.CONNECTING
DEBUG flwr 2023-06-28 09:37:32,578 | connection.py:39 | ChannelConnectivity.READY
DEBUG flwr 2023-06-28 09:37:32,784 | connection.py:113 | gRPC channel closed
Traceback (most recent call last):
File "/home/alvaro/w/prj/fl/fl.py", line 68, in <module>
fl.client.start_numpy_client(
File "/home/alvaro/w/prj/fl/VENV/lib/python3.11/site-packages/flwr/client/app.py", line 252, in start_numpy_client
start_client(
File "/home/alvaro/w/prj/fl/VENV/lib/python3.11/site-packages/flwr/client/app.py", line 174, in start_client
server_message = receive()
^^^^^^^^^
File "/home/alvaro/w/prj/fl/VENV/lib/python3.11/site-packages/flwr/client/grpc_client/connection.py", line 105, in <lambda>
receive: Callable[[], ServerMessage] = lambda: next(server_message_iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/alvaro/w/prj/fl/VENV/lib/python3.11/site-packages/grpc/_channel.py", line 475, in __next__
return self._next()
^^^^^^^^^^^^
File "/home/alvaro/w/prj/fl/VENV/lib/python3.11/site-packages/grpc/_channel.py", line 879, in _next
raise StopIteration()
StopIteration
```
This is because the second client connection is discarded [here]https://github.com/adap/flower/blob/main/src/py/flwr/server/client_manager.py#L149) as it is coming from the same gRPC peer (i.e. the load balancer) due to this code [here](https://github.com/adap/flower/blob/main/src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py#L91), as due to the above I assume that the tuple `ip:port` is not unique for gRPC clients, therefore it cannot be used to identify a unique flower client (as a matter of fact, the `ip:port` are those of the load balancer, not those of the actual client).
As a test and quick hack we have moved to use uuids to identify the clients [here](https://github.com/AI4EOSC/flower/commit/b215d9f3cce1ad8806e296db4fe105a8b7f5c6c9). To my understanding, it is safe to do that way, as the peer identifier is not used elsewhere, but I do not know if it is planned to do something else with them.
### Actual Results
Two and more clients should be able to connect when running behind a load balancer.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py`
Content:
```
1 # Copyright 2020 Flower Labs GmbH. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Servicer for FlowerService.
16
17 Relevant knowledge for reading this modules code:
18 - https://github.com/grpc/grpc/blob/master/doc/statuscodes.md
19 """
20
21
22 from typing import Callable, Iterator
23
24 import grpc
25 from iterators import TimeoutIterator
26
27 from flwr.proto import transport_pb2_grpc # pylint: disable=E0611
28 from flwr.proto.transport_pb2 import ( # pylint: disable=E0611
29 ClientMessage,
30 ServerMessage,
31 )
32 from flwr.server.client_manager import ClientManager
33 from flwr.server.fleet.grpc_bidi.grpc_bridge import GrpcBridge, InsWrapper, ResWrapper
34 from flwr.server.fleet.grpc_bidi.grpc_client_proxy import GrpcClientProxy
35
36
37 def default_bridge_factory() -> GrpcBridge:
38 """Return GrpcBridge instance."""
39 return GrpcBridge()
40
41
42 def default_grpc_client_proxy_factory(cid: str, bridge: GrpcBridge) -> GrpcClientProxy:
43 """Return GrpcClientProxy instance."""
44 return GrpcClientProxy(cid=cid, bridge=bridge)
45
46
47 def register_client_proxy(
48 client_manager: ClientManager,
49 client_proxy: GrpcClientProxy,
50 context: grpc.ServicerContext,
51 ) -> bool:
52 """Try registering GrpcClientProxy with ClientManager."""
53 is_success = client_manager.register(client_proxy)
54 if is_success:
55
56 def rpc_termination_callback() -> None:
57 client_proxy.bridge.close()
58 client_manager.unregister(client_proxy)
59
60 context.add_callback(rpc_termination_callback)
61 return is_success
62
63
64 class FlowerServiceServicer(transport_pb2_grpc.FlowerServiceServicer):
65 """FlowerServiceServicer for bi-directional gRPC message stream."""
66
67 def __init__(
68 self,
69 client_manager: ClientManager,
70 grpc_bridge_factory: Callable[[], GrpcBridge] = default_bridge_factory,
71 grpc_client_proxy_factory: Callable[
72 [str, GrpcBridge], GrpcClientProxy
73 ] = default_grpc_client_proxy_factory,
74 ) -> None:
75 self.client_manager: ClientManager = client_manager
76 self.grpc_bridge_factory = grpc_bridge_factory
77 self.client_proxy_factory = grpc_client_proxy_factory
78
79 def Join( # pylint: disable=invalid-name
80 self,
81 request_iterator: Iterator[ClientMessage],
82 context: grpc.ServicerContext,
83 ) -> Iterator[ServerMessage]:
84 """Facilitate bi-directional streaming of messages between server and client.
85
86 Invoked by each gRPC client which participates in the network.
87
88 Protocol:
89 - The first message is sent from the server to the client
90 - Both `ServerMessage` and `ClientMessage` are message "wrappers"
91 wrapping the actual message
92 - The `Join` method is (pretty much) unaware of the protocol
93 """
94 peer: str = context.peer()
95 bridge = self.grpc_bridge_factory()
96 client_proxy = self.client_proxy_factory(peer, bridge)
97 is_success = register_client_proxy(self.client_manager, client_proxy, context)
98
99 if is_success:
100 # Get iterators
101 client_message_iterator = TimeoutIterator(
102 iterator=request_iterator, reset_on_next=True
103 )
104 ins_wrapper_iterator = bridge.ins_wrapper_iterator()
105
106 # All messages will be pushed to client bridge directly
107 while True:
108 try:
109 # Get ins_wrapper from bridge and yield server_message
110 ins_wrapper: InsWrapper = next(ins_wrapper_iterator)
111 yield ins_wrapper.server_message
112
113 # Set current timeout, might be None
114 if ins_wrapper.timeout is not None:
115 client_message_iterator.set_timeout(ins_wrapper.timeout)
116
117 # Wait for client message
118 client_message = next(client_message_iterator)
119
120 if client_message is client_message_iterator.get_sentinel():
121 # Important: calling `context.abort` in gRPC always
122 # raises an exception so that all code after the call to
123 # `context.abort` will not run. If subsequent code should
124 # be executed, the `rpc_termination_callback` can be used
125 # (as shown in the `register_client` function).
126 details = f"Timeout of {ins_wrapper.timeout}sec was exceeded."
127 context.abort(
128 code=grpc.StatusCode.DEADLINE_EXCEEDED,
129 details=details,
130 )
131 # This return statement is only for the linter so it understands
132 # that client_message in subsequent lines is not None
133 # It does not understand that `context.abort` will terminate
134 # this execution context by raising an exception.
135 return
136
137 bridge.set_res_wrapper(
138 res_wrapper=ResWrapper(client_message=client_message)
139 )
140 except StopIteration:
141 break
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py b/src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py
--- a/src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py
+++ b/src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py
@@ -18,7 +18,7 @@
- https://github.com/grpc/grpc/blob/master/doc/statuscodes.md
"""
-
+import uuid
from typing import Callable, Iterator
import grpc
@@ -88,7 +88,11 @@
wrapping the actual message
- The `Join` method is (pretty much) unaware of the protocol
"""
- peer: str = context.peer()
+ # NOTE(aloga): this is a hack. when running flower behind a proxy, the peer can
+ # be the same for different clients (i.e. ip:port) so we use a uuid that is
+ # unique.
+ # peer: str = context.peer()
+ peer = uuid.uuid4().hex
bridge = self.grpc_bridge_factory()
client_proxy = self.client_proxy_factory(peer, bridge)
is_success = register_client_proxy(self.client_manager, client_proxy, context)
| {"golden_diff": "diff --git a/src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py b/src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py\n--- a/src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py\n+++ b/src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py\n@@ -18,7 +18,7 @@\n - https://github.com/grpc/grpc/blob/master/doc/statuscodes.md\n \"\"\"\n \n-\n+import uuid\n from typing import Callable, Iterator\n \n import grpc\n@@ -88,7 +88,11 @@\n wrapping the actual message\n - The `Join` method is (pretty much) unaware of the protocol\n \"\"\"\n- peer: str = context.peer()\n+ # NOTE(aloga): this is a hack. when running flower behind a proxy, the peer can\n+ # be the same for different clients (i.e. ip:port) so we use a uuid that is\n+ # unique.\n+ # peer: str = context.peer()\n+ peer = uuid.uuid4().hex\n bridge = self.grpc_bridge_factory()\n client_proxy = self.client_proxy_factory(peer, bridge)\n is_success = register_client_proxy(self.client_manager, client_proxy, context)\n", "issue": "If gRPC server is running behing a load balancer, clients may not be able to connect\n### Describe the bug\r\n\r\nFlower is using the `peer()` method from `grpc.ServicerContext` in order to identify unique flower clients. However, in some situations (like when running the gRPC server behind a load balancer or proxy) different clients can have the same peer identifier (i.e. the same `IP:PORT`), as HTTP/2 connections are multiplexed.\r\n\r\n### Steps/Code to Reproduce\r\n\r\nThis can be tested with a simple [Traefik](https://doc.traefik.io/traefik/) setup as follows (https://doc.traefik.io/traefik/user-guides/grpc/):\r\n\r\n`traefik.yaml`:\r\n```\r\nentryPoints:\r\n web:\r\n address: :80\r\n\r\nproviders:\r\n file:\r\n filename: grpc.yaml\r\n\r\napi: {}\r\n```\r\n\r\n`grpc.yaml`:\r\n```\r\nhttp:\r\n routers:\r\n to-grpc:\r\n entryPoints: \r\n - \"web\"\r\n service: \"srv-grpc\"\r\n rule: \"HostRegexp(`.*`)\"\r\n\r\n services:\r\n srv-grpc:\r\n loadBalancer:\r\n servers:\r\n - url: h2c://127.0.0.1:5000\r\n```\r\n\r\nThe Traefik server can be executed as follows:\r\n```\r\n ./traefik --configFile=traefik.yaml\r\n```\r\n\r\nThen we can launch then a Flower sever with the following code:\r\n```\r\nimport flwr as fl\r\n\r\ndef acc_wavg(metrics):\r\n n = sum([i for i, _ in metrics])\r\n acc = sum([i * metric[\"accuracy\"] / n for i, metric in metrics])\r\n return {\"accuracy\": acc}\r\n \r\nstrategy = fl.server.strategy.FedAvg(\r\n min_available_clients=2,\r\n evaluate_metrics_aggregation_fn=acc_wavg\r\n)\r\n \r\nfl.server.start_server(\r\n server_address=\"0.0.0.0:5000\",\r\n config=fl.server.ServerConfig(num_rounds=3),\r\n strategy=strategy,\r\n)\r\n```\r\n\r\n### Expected Results\r\n\r\nWhen we connect two clients to the loadbalancer IP, on port 80, the first one connects and keeps waiting:\r\n\r\n```\r\nINFO flwr 2023-06-28 10:03:14,848 | grpc.py:50 | Opened insecure gRPC connection (no certificates were passed)\r\nDEBUG flwr 2023-06-28 10:03:14,849 | connection.py:39 | ChannelConnectivity.IDLE\r\nDEBUG flwr 2023-06-28 10:03:14,849 | connection.py:39 | ChannelConnectivity.CONNECTING\r\nDEBUG flwr 2023-06-28 10:03:14,875 | connection.py:39 | ChannelConnectivity.READY\r\n```\r\n\r\nHowever, the second one is not connected:\r\n```\r\nDEBUG flwr 2023-06-28 09:37:32,554 | connection.py:39 | ChannelConnectivity.IDLE\r\nDEBUG flwr 2023-06-28 09:37:32,554 | connection.py:39 | ChannelConnectivity.CONNECTING\r\nDEBUG flwr 2023-06-28 09:37:32,578 | connection.py:39 | 
ChannelConnectivity.READY\r\nDEBUG flwr 2023-06-28 09:37:32,784 | connection.py:113 | gRPC channel closed\r\nTraceback (most recent call last):\r\n File \"/home/alvaro/w/prj/fl/fl.py\", line 68, in <module>\r\n fl.client.start_numpy_client(\r\n File \"/home/alvaro/w/prj/fl/VENV/lib/python3.11/site-packages/flwr/client/app.py\", line 252, in start_numpy_client\r\n start_client(\r\n File \"/home/alvaro/w/prj/fl/VENV/lib/python3.11/site-packages/flwr/client/app.py\", line 174, in start_client\r\n server_message = receive()\r\n ^^^^^^^^^\r\n File \"/home/alvaro/w/prj/fl/VENV/lib/python3.11/site-packages/flwr/client/grpc_client/connection.py\", line 105, in <lambda>\r\n receive: Callable[[], ServerMessage] = lambda: next(server_message_iterator)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alvaro/w/prj/fl/VENV/lib/python3.11/site-packages/grpc/_channel.py\", line 475, in __next__\r\n return self._next()\r\n ^^^^^^^^^^^^\r\n File \"/home/alvaro/w/prj/fl/VENV/lib/python3.11/site-packages/grpc/_channel.py\", line 879, in _next\r\n raise StopIteration()\r\nStopIteration\r\n```\r\n\r\nThis is because the second client connection is discarded [here]https://github.com/adap/flower/blob/main/src/py/flwr/server/client_manager.py#L149) as it is coming from the same gRPC peer (i.e. the load balancer) due to this code [here](https://github.com/adap/flower/blob/main/src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py#L91), as due to the above I assume that the tuple `ip:port` is not unique for gRPC clients, therefore it cannot be used to identify a unique flower client (as a matter of fact, the `ip:port` are those of the load balancer, not those of the actual client).\r\n\r\nAs a test and quick hack we have moved to use uuids to identify the clients [here](https://github.com/AI4EOSC/flower/commit/b215d9f3cce1ad8806e296db4fe105a8b7f5c6c9). To my understanding, it is safe to do that way, as the peer identifier is not used elsewhere, but I do not know if it is planned to do something else with them.\r\n\r\n### Actual Results\r\n\r\nTwo and more clients should be able to connect when running behind a load balancer.\n", "before_files": [{"content": "# Copyright 2020 Flower Labs GmbH. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Servicer for FlowerService.\n\nRelevant knowledge for reading this modules code:\n- https://github.com/grpc/grpc/blob/master/doc/statuscodes.md\n\"\"\"\n\n\nfrom typing import Callable, Iterator\n\nimport grpc\nfrom iterators import TimeoutIterator\n\nfrom flwr.proto import transport_pb2_grpc # pylint: disable=E0611\nfrom flwr.proto.transport_pb2 import ( # pylint: disable=E0611\n ClientMessage,\n ServerMessage,\n)\nfrom flwr.server.client_manager import ClientManager\nfrom flwr.server.fleet.grpc_bidi.grpc_bridge import GrpcBridge, InsWrapper, ResWrapper\nfrom flwr.server.fleet.grpc_bidi.grpc_client_proxy import GrpcClientProxy\n\n\ndef default_bridge_factory() -> GrpcBridge:\n \"\"\"Return GrpcBridge instance.\"\"\"\n return GrpcBridge()\n\n\ndef default_grpc_client_proxy_factory(cid: str, bridge: GrpcBridge) -> GrpcClientProxy:\n \"\"\"Return GrpcClientProxy instance.\"\"\"\n return GrpcClientProxy(cid=cid, bridge=bridge)\n\n\ndef register_client_proxy(\n client_manager: ClientManager,\n client_proxy: GrpcClientProxy,\n context: grpc.ServicerContext,\n) -> bool:\n \"\"\"Try registering GrpcClientProxy with ClientManager.\"\"\"\n is_success = client_manager.register(client_proxy)\n if is_success:\n\n def rpc_termination_callback() -> None:\n client_proxy.bridge.close()\n client_manager.unregister(client_proxy)\n\n context.add_callback(rpc_termination_callback)\n return is_success\n\n\nclass FlowerServiceServicer(transport_pb2_grpc.FlowerServiceServicer):\n \"\"\"FlowerServiceServicer for bi-directional gRPC message stream.\"\"\"\n\n def __init__(\n self,\n client_manager: ClientManager,\n grpc_bridge_factory: Callable[[], GrpcBridge] = default_bridge_factory,\n grpc_client_proxy_factory: Callable[\n [str, GrpcBridge], GrpcClientProxy\n ] = default_grpc_client_proxy_factory,\n ) -> None:\n self.client_manager: ClientManager = client_manager\n self.grpc_bridge_factory = grpc_bridge_factory\n self.client_proxy_factory = grpc_client_proxy_factory\n\n def Join( # pylint: disable=invalid-name\n self,\n request_iterator: Iterator[ClientMessage],\n context: grpc.ServicerContext,\n ) -> Iterator[ServerMessage]:\n \"\"\"Facilitate bi-directional streaming of messages between server and client.\n\n Invoked by each gRPC client which participates in the network.\n\n Protocol:\n - The first message is sent from the server to the client\n - Both `ServerMessage` and `ClientMessage` are message \"wrappers\"\n wrapping the actual message\n - The `Join` method is (pretty much) unaware of the protocol\n \"\"\"\n peer: str = context.peer()\n bridge = self.grpc_bridge_factory()\n client_proxy = self.client_proxy_factory(peer, bridge)\n is_success = register_client_proxy(self.client_manager, client_proxy, context)\n\n if is_success:\n # Get iterators\n client_message_iterator = TimeoutIterator(\n iterator=request_iterator, reset_on_next=True\n )\n 
ins_wrapper_iterator = bridge.ins_wrapper_iterator()\n\n # All messages will be pushed to client bridge directly\n while True:\n try:\n # Get ins_wrapper from bridge and yield server_message\n ins_wrapper: InsWrapper = next(ins_wrapper_iterator)\n yield ins_wrapper.server_message\n\n # Set current timeout, might be None\n if ins_wrapper.timeout is not None:\n client_message_iterator.set_timeout(ins_wrapper.timeout)\n\n # Wait for client message\n client_message = next(client_message_iterator)\n\n if client_message is client_message_iterator.get_sentinel():\n # Important: calling `context.abort` in gRPC always\n # raises an exception so that all code after the call to\n # `context.abort` will not run. If subsequent code should\n # be executed, the `rpc_termination_callback` can be used\n # (as shown in the `register_client` function).\n details = f\"Timeout of {ins_wrapper.timeout}sec was exceeded.\"\n context.abort(\n code=grpc.StatusCode.DEADLINE_EXCEEDED,\n details=details,\n )\n # This return statement is only for the linter so it understands\n # that client_message in subsequent lines is not None\n # It does not understand that `context.abort` will terminate\n # this execution context by raising an exception.\n return\n\n bridge.set_res_wrapper(\n res_wrapper=ResWrapper(client_message=client_message)\n )\n except StopIteration:\n break\n", "path": "src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py"}], "after_files": [{"content": "# Copyright 2020 Flower Labs GmbH. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Servicer for FlowerService.\n\nRelevant knowledge for reading this modules code:\n- https://github.com/grpc/grpc/blob/master/doc/statuscodes.md\n\"\"\"\n\nimport uuid\nfrom typing import Callable, Iterator\n\nimport grpc\nfrom iterators import TimeoutIterator\n\nfrom flwr.proto import transport_pb2_grpc\nfrom flwr.proto.transport_pb2 import ClientMessage, ServerMessage\nfrom flwr.server.client_manager import ClientManager\nfrom flwr.server.fleet.grpc_bidi.grpc_bridge import GrpcBridge, InsWrapper, ResWrapper\nfrom flwr.server.fleet.grpc_bidi.grpc_client_proxy import GrpcClientProxy\n\n\ndef default_bridge_factory() -> GrpcBridge:\n \"\"\"Return GrpcBridge instance.\"\"\"\n return GrpcBridge()\n\n\ndef default_grpc_client_proxy_factory(cid: str, bridge: GrpcBridge) -> GrpcClientProxy:\n \"\"\"Return GrpcClientProxy instance.\"\"\"\n return GrpcClientProxy(cid=cid, bridge=bridge)\n\n\ndef register_client_proxy(\n client_manager: ClientManager,\n client_proxy: GrpcClientProxy,\n context: grpc.ServicerContext,\n) -> bool:\n \"\"\"Try registering GrpcClientProxy with ClientManager.\"\"\"\n is_success = client_manager.register(client_proxy)\n if is_success:\n\n def rpc_termination_callback() -> None:\n client_proxy.bridge.close()\n client_manager.unregister(client_proxy)\n\n context.add_callback(rpc_termination_callback)\n return is_success\n\n\nclass 
FlowerServiceServicer(transport_pb2_grpc.FlowerServiceServicer):\n \"\"\"FlowerServiceServicer for bi-directional gRPC message stream.\"\"\"\n\n def __init__(\n self,\n client_manager: ClientManager,\n grpc_bridge_factory: Callable[[], GrpcBridge] = default_bridge_factory,\n grpc_client_proxy_factory: Callable[\n [str, GrpcBridge], GrpcClientProxy\n ] = default_grpc_client_proxy_factory,\n ) -> None:\n self.client_manager: ClientManager = client_manager\n self.grpc_bridge_factory = grpc_bridge_factory\n self.client_proxy_factory = grpc_client_proxy_factory\n\n def Join( # pylint: disable=invalid-name\n self,\n request_iterator: Iterator[ClientMessage],\n context: grpc.ServicerContext,\n ) -> Iterator[ServerMessage]:\n \"\"\"Facilitate bi-directional streaming of messages between server and client.\n\n Invoked by each gRPC client which participates in the network.\n\n Protocol:\n - The first message is sent from the server to the client\n - Both `ServerMessage` and `ClientMessage` are message \"wrappers\"\n wrapping the actual message\n - The `Join` method is (pretty much) unaware of the protocol\n \"\"\"\n # NOTE(aloga): this is a hack. when running flower behind a proxy, the peer can\n # be the same for different clients (i.e. ip:port) so we use a uuid that is\n # unique.\n # peer: str = context.peer()\n peer = uuid.uuid4().hex\n bridge = self.grpc_bridge_factory()\n client_proxy = self.client_proxy_factory(peer, bridge)\n is_success = register_client_proxy(self.client_manager, client_proxy, context)\n\n if is_success:\n # Get iterators\n client_message_iterator = TimeoutIterator(\n iterator=request_iterator, reset_on_next=True\n )\n ins_wrapper_iterator = bridge.ins_wrapper_iterator()\n\n # All messages will be pushed to client bridge directly\n while True:\n try:\n # Get ins_wrapper from bridge and yield server_message\n ins_wrapper: InsWrapper = next(ins_wrapper_iterator)\n yield ins_wrapper.server_message\n\n # Set current timeout, might be None\n if ins_wrapper.timeout is not None:\n client_message_iterator.set_timeout(ins_wrapper.timeout)\n\n # Wait for client message\n client_message = next(client_message_iterator)\n\n if client_message is client_message_iterator.get_sentinel():\n # Important: calling `context.abort` in gRPC always\n # raises an exception so that all code after the call to\n # `context.abort` will not run. If subsequent code should\n # be executed, the `rpc_termination_callback` can be used\n # (as shown in the `register_client` function).\n details = f\"Timeout of {ins_wrapper.timeout}sec was exceeded.\"\n context.abort(\n code=grpc.StatusCode.DEADLINE_EXCEEDED,\n details=details,\n )\n # This return statement is only for the linter so it understands\n # that client_message in subsequent lines is not None\n # It does not understand that `context.abort` will terminate\n # this execution context by raising an exception.\n return\n\n bridge.set_res_wrapper(\n res_wrapper=ResWrapper(client_message=client_message)\n )\n except StopIteration:\n break\n", "path": "src/py/flwr/server/fleet/grpc_bidi/flower_service_servicer.py"}]} | 3,107 | 275 |
gh_patches_debug_19887 | rasdani/github-patches | git_diff | aws__aws-cli-1526 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
aws s3 sync --delete behaves differently if source is <LocalPath> rather than <S3Path>
This issue was raised on: https://stackoverflow.com/questions/32304326/amazon-s3-sync-deleting-excluded-files
Basically, doing `aws s3 sync . s3://bucket2 --delete` behaves differently to `aws s3 sync s3://bucket1 s3://bucket2 --delete` even if the two source directories are identical.
It would appear that the logic for `--delete` behaves differently between Local and S3. An example is well-documented in the stackoverflow issue.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/s3/filters.py`
Content:
```
1 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 import logging
14 import fnmatch
15 import os
16
17 from awscli.customizations.s3.utils import split_s3_bucket_key
18
19
20 LOG = logging.getLogger(__name__)
21
22
23 def create_filter(parameters):
24 """Given the CLI parameters dict, create a Filter object."""
25 # We need to evaluate all the filters based on the source
26 # directory.
27 if parameters['filters']:
28 cli_filters = parameters['filters']
29 real_filters = []
30 for filter_type, filter_pattern in cli_filters:
31 real_filters.append((filter_type.lstrip('-'),
32 filter_pattern))
33 source_location = parameters['src']
34 if source_location.startswith('s3://'):
35 # This gives us (bucket, keyname) and we want
36 # the bucket to be the root dir.
37 src_rootdir = _get_s3_root(source_location,
38 parameters['dir_op'])
39 dst_rootdir = _get_local_root(parameters['dest'],
40 parameters['dir_op'])
41 else:
42 src_rootdir = _get_local_root(parameters['src'], parameters['dir_op'])
43 dst_rootdir = _get_s3_root(parameters['dest'],
44 parameters['dir_op'])
45
46 return Filter(real_filters, src_rootdir, dst_rootdir)
47 else:
48 return Filter({}, None, None)
49
50
51 def _get_s3_root(source_location, dir_op):
52 # Obtain the bucket and the key.
53 bucket, key = split_s3_bucket_key(source_location)
54 if not dir_op and not key.endswith('/'):
55 # If we are not performing an operation on a directory and the key
56 # is of the form: ``prefix/key``. We only want ``prefix`` included in
57 # the the s3 root and not ``key``.
58 key = '/'.join(key.split('/')[:-1])
59 # Rejoin the bucket and key back together.
60 s3_path = '/'.join([bucket, key])
61 return s3_path
62
63
64 def _get_local_root(source_location, dir_op):
65 if dir_op:
66 rootdir = os.path.abspath(source_location)
67 else:
68 rootdir = os.path.abspath(os.path.dirname(source_location))
69 return rootdir
70
71
72 class Filter(object):
73 """
74 This is a universal exclude/include filter.
75 """
76 def __init__(self, patterns, rootdir, dst_rootdir):
77 """
78 :var patterns: A list of patterns. A pattern consits of a list
79 whose first member is a string 'exclude' or 'include'.
80 The second member is the actual rule.
81 :var rootdir: The root directory where the patterns are evaluated.
82 This will generally be the directory of the source location.
83 :var dst_rootdir: The destination root directory where the patterns are
84 evaluated. This is only useful when the --delete option is
85 also specified.
86
87 """
88 self._original_patterns = patterns
89 self.patterns = self._full_path_patterns(patterns, rootdir)
90 self.dst_patterns = self._full_path_patterns(patterns, dst_rootdir)
91
92 def _full_path_patterns(self, original_patterns, rootdir):
93 # We need to transform the patterns into patterns that have
94 # the root dir prefixed, so things like ``--exclude "*"``
95 # will actually be ['exclude', '/path/to/root/*']
96 full_patterns = []
97 for pattern in original_patterns:
98 full_patterns.append(
99 (pattern[0], os.path.join(rootdir, pattern[1])))
100 return full_patterns
101
102 def call(self, file_infos):
103 """
104 This function iterates over through the yielded file_info objects. It
105 determines the type of the file and applies pattern matching to
106 determine if the rule applies. While iterating though the patterns the
107 file is assigned a boolean flag to determine if a file should be
108 yielded on past the filer. Anything identified by the exclude filter
109 has its flag set to false. Anything identified by the include filter
110 has its flag set to True. All files begin with the flag set to true.
111 Rules listed at the end will overwrite flags thrown by rules listed
112 before it.
113 """
114 for file_info in file_infos:
115 file_path = file_info.src
116 file_status = (file_info, True)
117 for pattern, dst_pattern in zip(self.patterns, self.dst_patterns):
118 current_file_status = self._match_pattern(pattern, file_info)
119 if current_file_status is not None:
120 file_status = current_file_status
121 dst_current_file_status = self._match_pattern(dst_pattern, file_info)
122 if dst_current_file_status is not None:
123 file_status = dst_current_file_status
124 LOG.debug("=%s final filtered status, should_include: %s",
125 file_path, file_status[1])
126 if file_status[1]:
127 yield file_info
128
129 def _match_pattern(self, pattern, file_info):
130 file_status = None
131 file_path = file_info.src
132 pattern_type = pattern[0]
133 if file_info.src_type == 'local':
134 path_pattern = pattern[1].replace('/', os.sep)
135 else:
136 path_pattern = pattern[1].replace(os.sep, '/')
137 is_match = fnmatch.fnmatch(file_path, path_pattern)
138 if is_match and pattern_type == 'include':
139 file_status = (file_info, True)
140 LOG.debug("%s matched include filter: %s",
141 file_path, path_pattern)
142 elif is_match and pattern_type == 'exclude':
143 file_status = (file_info, False)
144 LOG.debug("%s matched exclude filter: %s",
145 file_path, path_pattern)
146 else:
147 LOG.debug("%s did not match %s filter: %s",
148 file_path, pattern_type[2:], path_pattern)
149 return file_status
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/awscli/customizations/s3/filters.py b/awscli/customizations/s3/filters.py
--- a/awscli/customizations/s3/filters.py
+++ b/awscli/customizations/s3/filters.py
@@ -36,12 +36,16 @@
# the bucket to be the root dir.
src_rootdir = _get_s3_root(source_location,
parameters['dir_op'])
- dst_rootdir = _get_local_root(parameters['dest'],
- parameters['dir_op'])
else:
src_rootdir = _get_local_root(parameters['src'], parameters['dir_op'])
+
+ destination_location = parameters['dest']
+ if destination_location.startswith('s3://'):
dst_rootdir = _get_s3_root(parameters['dest'],
parameters['dir_op'])
+ else:
+ dst_rootdir = _get_local_root(parameters['dest'],
+ parameters['dir_op'])
return Filter(real_filters, src_rootdir, dst_rootdir)
else:
| {"golden_diff": "diff --git a/awscli/customizations/s3/filters.py b/awscli/customizations/s3/filters.py\n--- a/awscli/customizations/s3/filters.py\n+++ b/awscli/customizations/s3/filters.py\n@@ -36,12 +36,16 @@\n # the bucket to be the root dir.\n src_rootdir = _get_s3_root(source_location,\n parameters['dir_op'])\n- dst_rootdir = _get_local_root(parameters['dest'],\n- parameters['dir_op'])\n else:\n src_rootdir = _get_local_root(parameters['src'], parameters['dir_op'])\n+\n+ destination_location = parameters['dest']\n+ if destination_location.startswith('s3://'):\n dst_rootdir = _get_s3_root(parameters['dest'],\n parameters['dir_op'])\n+ else:\n+ dst_rootdir = _get_local_root(parameters['dest'],\n+ parameters['dir_op'])\n \n return Filter(real_filters, src_rootdir, dst_rootdir)\n else:\n", "issue": "aws s3 sync --delete behaves differently if source is <LocalPath> rather than <S3Path>\nThis issue was raised on: https://stackoverflow.com/questions/32304326/amazon-s3-sync-deleting-excluded-files\n\nBasically, doing `aws s3 sync . s3://bucket2 --delete` behaves differently to `aws s3 sync s3://bucket1 s3://bucket2 --delete` even if the two source directories are identical.\n\nIt would appear that the logic for `--delete` behaves differently between Local and S3. An example is well-documented in the stackoverflow issue.\n\n", "before_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport logging\nimport fnmatch\nimport os\n\nfrom awscli.customizations.s3.utils import split_s3_bucket_key\n\n\nLOG = logging.getLogger(__name__)\n\n\ndef create_filter(parameters):\n \"\"\"Given the CLI parameters dict, create a Filter object.\"\"\"\n # We need to evaluate all the filters based on the source\n # directory.\n if parameters['filters']:\n cli_filters = parameters['filters']\n real_filters = []\n for filter_type, filter_pattern in cli_filters:\n real_filters.append((filter_type.lstrip('-'),\n filter_pattern))\n source_location = parameters['src']\n if source_location.startswith('s3://'):\n # This gives us (bucket, keyname) and we want\n # the bucket to be the root dir.\n src_rootdir = _get_s3_root(source_location,\n parameters['dir_op'])\n dst_rootdir = _get_local_root(parameters['dest'],\n parameters['dir_op'])\n else:\n src_rootdir = _get_local_root(parameters['src'], parameters['dir_op'])\n dst_rootdir = _get_s3_root(parameters['dest'],\n parameters['dir_op'])\n\n return Filter(real_filters, src_rootdir, dst_rootdir)\n else:\n return Filter({}, None, None)\n\n\ndef _get_s3_root(source_location, dir_op):\n # Obtain the bucket and the key.\n bucket, key = split_s3_bucket_key(source_location)\n if not dir_op and not key.endswith('/'):\n # If we are not performing an operation on a directory and the key\n # is of the form: ``prefix/key``. 
We only want ``prefix`` included in\n # the the s3 root and not ``key``.\n key = '/'.join(key.split('/')[:-1])\n # Rejoin the bucket and key back together.\n s3_path = '/'.join([bucket, key])\n return s3_path\n\n\ndef _get_local_root(source_location, dir_op):\n if dir_op:\n rootdir = os.path.abspath(source_location)\n else:\n rootdir = os.path.abspath(os.path.dirname(source_location))\n return rootdir\n\n\nclass Filter(object):\n \"\"\"\n This is a universal exclude/include filter.\n \"\"\"\n def __init__(self, patterns, rootdir, dst_rootdir):\n \"\"\"\n :var patterns: A list of patterns. A pattern consits of a list\n whose first member is a string 'exclude' or 'include'.\n The second member is the actual rule.\n :var rootdir: The root directory where the patterns are evaluated.\n This will generally be the directory of the source location.\n :var dst_rootdir: The destination root directory where the patterns are\n evaluated. This is only useful when the --delete option is\n also specified.\n\n \"\"\"\n self._original_patterns = patterns\n self.patterns = self._full_path_patterns(patterns, rootdir)\n self.dst_patterns = self._full_path_patterns(patterns, dst_rootdir)\n\n def _full_path_patterns(self, original_patterns, rootdir):\n # We need to transform the patterns into patterns that have\n # the root dir prefixed, so things like ``--exclude \"*\"``\n # will actually be ['exclude', '/path/to/root/*']\n full_patterns = []\n for pattern in original_patterns:\n full_patterns.append(\n (pattern[0], os.path.join(rootdir, pattern[1])))\n return full_patterns\n\n def call(self, file_infos):\n \"\"\"\n This function iterates over through the yielded file_info objects. It\n determines the type of the file and applies pattern matching to\n determine if the rule applies. While iterating though the patterns the\n file is assigned a boolean flag to determine if a file should be\n yielded on past the filer. Anything identified by the exclude filter\n has its flag set to false. Anything identified by the include filter\n has its flag set to True. 
All files begin with the flag set to true.\n Rules listed at the end will overwrite flags thrown by rules listed\n before it.\n \"\"\"\n for file_info in file_infos:\n file_path = file_info.src\n file_status = (file_info, True)\n for pattern, dst_pattern in zip(self.patterns, self.dst_patterns):\n current_file_status = self._match_pattern(pattern, file_info)\n if current_file_status is not None:\n file_status = current_file_status\n dst_current_file_status = self._match_pattern(dst_pattern, file_info)\n if dst_current_file_status is not None:\n file_status = dst_current_file_status\n LOG.debug(\"=%s final filtered status, should_include: %s\",\n file_path, file_status[1])\n if file_status[1]:\n yield file_info\n\n def _match_pattern(self, pattern, file_info):\n file_status = None\n file_path = file_info.src\n pattern_type = pattern[0]\n if file_info.src_type == 'local':\n path_pattern = pattern[1].replace('/', os.sep)\n else:\n path_pattern = pattern[1].replace(os.sep, '/')\n is_match = fnmatch.fnmatch(file_path, path_pattern)\n if is_match and pattern_type == 'include':\n file_status = (file_info, True)\n LOG.debug(\"%s matched include filter: %s\",\n file_path, path_pattern)\n elif is_match and pattern_type == 'exclude':\n file_status = (file_info, False)\n LOG.debug(\"%s matched exclude filter: %s\",\n file_path, path_pattern)\n else:\n LOG.debug(\"%s did not match %s filter: %s\",\n file_path, pattern_type[2:], path_pattern)\n return file_status\n", "path": "awscli/customizations/s3/filters.py"}], "after_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific\n# language governing permissions and limitations under the License.\nimport logging\nimport fnmatch\nimport os\n\nfrom awscli.customizations.s3.utils import split_s3_bucket_key\n\n\nLOG = logging.getLogger(__name__)\n\n\ndef create_filter(parameters):\n \"\"\"Given the CLI parameters dict, create a Filter object.\"\"\"\n # We need to evaluate all the filters based on the source\n # directory.\n if parameters['filters']:\n cli_filters = parameters['filters']\n real_filters = []\n for filter_type, filter_pattern in cli_filters:\n real_filters.append((filter_type.lstrip('-'),\n filter_pattern))\n source_location = parameters['src']\n if source_location.startswith('s3://'):\n # This gives us (bucket, keyname) and we want\n # the bucket to be the root dir.\n src_rootdir = _get_s3_root(source_location,\n parameters['dir_op'])\n else:\n src_rootdir = _get_local_root(parameters['src'], parameters['dir_op'])\n\n destination_location = parameters['dest']\n if destination_location.startswith('s3://'):\n dst_rootdir = _get_s3_root(parameters['dest'],\n parameters['dir_op'])\n else:\n dst_rootdir = _get_local_root(parameters['dest'],\n parameters['dir_op'])\n\n return Filter(real_filters, src_rootdir, dst_rootdir)\n else:\n return Filter({}, None, None)\n\n\ndef _get_s3_root(source_location, dir_op):\n # Obtain the bucket and the key.\n bucket, key = split_s3_bucket_key(source_location)\n if not dir_op and not key.endswith('/'):\n # If we are not performing an operation on a directory and the key\n # is of the form: ``prefix/key``. We only want ``prefix`` included in\n # the the s3 root and not ``key``.\n key = '/'.join(key.split('/')[:-1])\n # Rejoin the bucket and key back together.\n s3_path = '/'.join([bucket, key])\n return s3_path\n\n\ndef _get_local_root(source_location, dir_op):\n if dir_op:\n rootdir = os.path.abspath(source_location)\n else:\n rootdir = os.path.abspath(os.path.dirname(source_location))\n return rootdir\n\n\nclass Filter(object):\n \"\"\"\n This is a universal exclude/include filter.\n \"\"\"\n def __init__(self, patterns, rootdir, dst_rootdir):\n \"\"\"\n :var patterns: A list of patterns. A pattern consits of a list\n whose first member is a string 'exclude' or 'include'.\n The second member is the actual rule.\n :var rootdir: The root directory where the patterns are evaluated.\n This will generally be the directory of the source location.\n :var dst_rootdir: The destination root directory where the patterns are\n evaluated. This is only useful when the --delete option is\n also specified.\n\n \"\"\"\n self._original_patterns = patterns\n self.patterns = self._full_path_patterns(patterns, rootdir)\n self.dst_patterns = self._full_path_patterns(patterns, dst_rootdir)\n\n def _full_path_patterns(self, original_patterns, rootdir):\n # We need to transform the patterns into patterns that have\n # the root dir prefixed, so things like ``--exclude \"*\"``\n # will actually be ['exclude', '/path/to/root/*']\n full_patterns = []\n for pattern in original_patterns:\n full_patterns.append(\n (pattern[0], os.path.join(rootdir, pattern[1])))\n return full_patterns\n\n def call(self, file_infos):\n \"\"\"\n This function iterates over through the yielded file_info objects. It\n determines the type of the file and applies pattern matching to\n determine if the rule applies. While iterating though the patterns the\n file is assigned a boolean flag to determine if a file should be\n yielded on past the filer. 
Anything identified by the exclude filter\n has its flag set to false. Anything identified by the include filter\n has its flag set to True. All files begin with the flag set to true.\n Rules listed at the end will overwrite flags thrown by rules listed\n before it.\n \"\"\"\n for file_info in file_infos:\n file_path = file_info.src\n file_status = (file_info, True)\n for pattern, dst_pattern in zip(self.patterns, self.dst_patterns):\n current_file_status = self._match_pattern(pattern, file_info)\n if current_file_status is not None:\n file_status = current_file_status\n dst_current_file_status = self._match_pattern(dst_pattern, file_info)\n if dst_current_file_status is not None:\n file_status = dst_current_file_status\n LOG.debug(\"=%s final filtered status, should_include: %s\",\n file_path, file_status[1])\n if file_status[1]:\n yield file_info\n\n def _match_pattern(self, pattern, file_info):\n file_status = None\n file_path = file_info.src\n pattern_type = pattern[0]\n if file_info.src_type == 'local':\n path_pattern = pattern[1].replace('/', os.sep)\n else:\n path_pattern = pattern[1].replace(os.sep, '/')\n is_match = fnmatch.fnmatch(file_path, path_pattern)\n if is_match and pattern_type == 'include':\n file_status = (file_info, True)\n LOG.debug(\"%s matched include filter: %s\",\n file_path, path_pattern)\n elif is_match and pattern_type == 'exclude':\n file_status = (file_info, False)\n LOG.debug(\"%s matched exclude filter: %s\",\n file_path, path_pattern)\n else:\n LOG.debug(\"%s did not match %s filter: %s\",\n file_path, pattern_type[2:], path_pattern)\n return file_status\n", "path": "awscli/customizations/s3/filters.py"}]} | 2,095 | 217 |
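Note on the record above: the patch makes `create_filter` derive each root from its own path instead of assuming that an S3 source implies a local destination — before the fix, an S3-to-S3 sync computed `dst_rootdir` with `_get_local_root`, so destination-side exclude patterns were built against a local-style root and `--delete` could remove objects the user had excluded. The snippet below is a rough standalone illustration of the corrected per-path decision; `root_for` is a hypothetical helper and the returned root strings are simplified, not the actual awscli implementation.

```python
# Illustration only: choose S3-style vs. local-style roots from each path itself.
import os

def root_for(path: str, dir_op: bool) -> str:
    if path.startswith("s3://"):
        bucket_key = path[len("s3://"):]
        if not dir_op and not bucket_key.endswith("/"):
            bucket_key = bucket_key.rsplit("/", 1)[0]  # keep only the key prefix
        return bucket_key
    return os.path.abspath(path) if dir_op else os.path.abspath(os.path.dirname(path))

# Both sides of an s3 -> s3 sync now get S3-style roots:
print(root_for("s3://bucket1/data/", dir_op=True))  # bucket1/data/
print(root_for("s3://bucket2/data/", dir_op=True))  # bucket2/data/
print(root_for("./data", dir_op=True))              # an absolute local path
```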
gh_patches_debug_24791 | rasdani/github-patches | git_diff | svthalia__concrexit-1302 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError: '<' not supported between instances of 'datetime.date' and 'NoneType'
Sentry Issue: [CONCREXIT-3T](https://sentry.io/organizations/thalia/issues/1937285207/?referrer=github_integration)
```
TypeError: '<' not supported between instances of 'datetime.date' and 'NoneType'
(5 additional frame(s) were not displayed)
...
File "django/views/generic/base.py", line 98, in dispatch
return handler(request, *args, **kwargs)
File "django/views/generic/detail.py", line 107, in get
context = self.get_context_data(object=self.object)
File "members/views.py", line 181, in get_context_data
societies = member_societies(member)
File "members/services.py", line 91, in member_societies
societies = _member_group_memberships(
File "members/services.py", line 45, in _member_group_memberships
if data[name]["latest"] < membership.until:
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/members/services.py`
Content:
```
1 """Services defined in the members package"""
2 from datetime import date
3 from typing import Callable, List, Dict, Any
4
5 from django.conf import settings
6 from django.db.models import Q, Count
7 from django.utils import timezone
8 from django.utils.translation import gettext
9
10 from members import emails
11 from members.models import Membership, Member
12 from utils.snippets import datetime_to_lectureyear
13
14
15 def _member_group_memberships(
16 member: Member, condition: Callable[[Membership], bool]
17 ) -> Dict[str, Any]:
18 """
19 Determines the group membership of a user based on a condition
20 :return: Object with group memberships
21 """
22 memberships = member.membergroupmembership_set.all()
23 data = {}
24
25 for membership in memberships:
26 if not condition(membership):
27 continue
28 period = {
29 "since": membership.since,
30 "until": membership.until,
31 "chair": membership.chair,
32 }
33
34 if hasattr(membership.group, "board"):
35 period["role"] = membership.role
36
37 if membership.until is None and hasattr(membership.group, "board"):
38 period["until"] = membership.group.board.until
39
40 name = membership.group.name
41 if data.get(name):
42 data[name]["periods"].append(period)
43 if data[name]["earliest"] > membership.since:
44 data[name]["earliest"] = membership.since
45 if data[name]["latest"] < membership.until:
46 data[name]["latest"] = membership.until
47 data[name]["periods"].sort(key=lambda x: x["since"])
48 else:
49 data[name] = {
50 "pk": membership.group.pk,
51 "active": membership.group.active,
52 "name": name,
53 "periods": [period],
54 "url": settings.BASE_URL + membership.group.get_absolute_url(),
55 "earliest": membership.since,
56 "latest": membership.until,
57 }
58 return data
59
60
61 def member_achievements(member) -> List:
62 """
63 Derives a list of achievements of a member
64 Committee and board memberships + mentorships
65 """
66 achievements = _member_group_memberships(
67 member,
68 lambda membership: (
69 hasattr(membership.group, "board") or hasattr(membership.group, "committee")
70 ),
71 )
72
73 mentor_years = member.mentorship_set.all()
74 for mentor_year in mentor_years:
75 name = "Mentor in {}".format(mentor_year.year)
76 # Ensure mentorships appear last but are sorted
77 earliest = date.today()
78 earliest = earliest.replace(year=earliest.year + mentor_year.year)
79 if not achievements.get(name):
80 achievements[name] = {
81 "name": name,
82 "earliest": earliest,
83 }
84 return sorted(achievements.values(), key=lambda x: x["earliest"])
85
86
87 def member_societies(member) -> List:
88 """
89 Derives a list of societies a member was part of
90 """
91 societies = _member_group_memberships(
92 member, lambda membership: (hasattr(membership.group, "society"))
93 )
94 return sorted(societies.values(), key=lambda x: x["earliest"])
95
96
97 def gen_stats_member_type() -> Dict[str, int]:
98 """
99 Generate a dictionary where every key is a member type with
100 the value being the number of current members of that type
101 """
102
103 data = {}
104 for key, display in Membership.MEMBERSHIP_TYPES:
105 data[str(display)] = (
106 Membership.objects.filter(since__lte=date.today())
107 .filter(Q(until__isnull=True) | Q(until__gt=date.today()))
108 .filter(type=key)
109 .count()
110 )
111 return data
112
113
114 def gen_stats_year() -> Dict[str, Dict[str, int]]:
115 """
116 Generate list with 6 entries, where each entry represents the total amount
117 of Thalia members in a year. The sixth element contains all the multi-year
118 students.
119 """
120 stats_year = {}
121 current_year = datetime_to_lectureyear(date.today())
122
123 for i in range(5):
124 new = {}
125 for key, _ in Membership.MEMBERSHIP_TYPES:
126 new[key] = (
127 Membership.objects.filter(user__profile__starting_year=current_year - i)
128 .filter(since__lte=date.today())
129 .filter(Q(until__isnull=True) | Q(until__gt=date.today()))
130 .filter(type=key)
131 .count()
132 )
133 stats_year[str(current_year - i)] = new
134
135 # Add multi year members
136 new = {}
137 for key, _ in Membership.MEMBERSHIP_TYPES:
138 new[key] = (
139 Membership.objects.filter(user__profile__starting_year__lt=current_year - 4)
140 .filter(since__lte=date.today())
141 .filter(Q(until__isnull=True) | Q(until__gt=date.today()))
142 .filter(type=key)
143 .count()
144 )
145 stats_year[str(gettext("Older"))] = new
146
147 return stats_year
148
149
150 def verify_email_change(change_request) -> None:
151 """
152 Mark the email change request as verified
153
154 :param change_request: the email change request
155 """
156 change_request.verified = True
157 change_request.save()
158
159 process_email_change(change_request)
160
161
162 def confirm_email_change(change_request) -> None:
163 """
164 Mark the email change request as verified
165
166 :param change_request: the email change request
167 """
168 change_request.confirmed = True
169 change_request.save()
170
171 process_email_change(change_request)
172
173
174 def process_email_change(change_request) -> None:
175 """
176 Change the user's email address if the request was completed and
177 send the completion email
178
179 :param change_request: the email change request
180 """
181 if not change_request.completed:
182 return
183
184 member = change_request.member
185 member.email = change_request.email
186 member.save()
187
188 emails.send_email_change_completion_message(change_request)
189
190
191 def execute_data_minimisation(dry_run=False, members=None) -> List[Member]:
192 """
193 Clean the profiles of members/users of whom the last membership ended
194 at least 31 days ago
195
196 :param dry_run: does not really remove data if True
197 :param members: queryset of members to process, optional
198 :return: list of processed members
199 """
200 if not members:
201 members = Member.objects
202 members = (
203 members.annotate(membership_count=Count("membership"))
204 .exclude(
205 (
206 Q(membership__until__isnull=True)
207 | Q(membership__until__gt=timezone.now().date())
208 )
209 & Q(membership_count__gt=0)
210 )
211 .distinct()
212 .prefetch_related("membership_set", "profile")
213 )
214 deletion_period = timezone.now().date() - timezone.timedelta(days=31)
215 processed_members = []
216 for member in members:
217 if (
218 member.latest_membership is None
219 or member.latest_membership.until <= deletion_period
220 ):
221 processed_members.append(member)
222 profile = member.profile
223 profile.student_number = None
224 profile.phone_number = None
225 profile.address_street = None
226 profile.address_street2 = None
227 profile.address_postal_code = None
228 profile.address_city = None
229 profile.address_country = None
230 profile.birthday = None
231 profile.emergency_contact_phone_number = None
232 profile.emergency_contact = None
233 member.bank_accounts.all().delete()
234 if not dry_run:
235 profile.save()
236
237 return processed_members
238
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/members/services.py b/website/members/services.py
--- a/website/members/services.py
+++ b/website/members/services.py
@@ -40,10 +40,13 @@
name = membership.group.name
if data.get(name):
data[name]["periods"].append(period)
- if data[name]["earliest"] > membership.since:
- data[name]["earliest"] = membership.since
- if data[name]["latest"] < membership.until:
- data[name]["latest"] = membership.until
+ if data[name]["earliest"] > period["since"]:
+ data[name]["earliest"] = period["since"]
+ if period["until"] is None or (
+ data[name]["latest"] is not None
+ and data[name]["latest"] < period["until"]
+ ):
+ data[name]["latest"] = period["until"]
data[name]["periods"].sort(key=lambda x: x["since"])
else:
data[name] = {
@@ -52,8 +55,8 @@
"name": name,
"periods": [period],
"url": settings.BASE_URL + membership.group.get_absolute_url(),
- "earliest": membership.since,
- "latest": membership.until,
+ "earliest": period["since"],
+ "latest": period["until"],
}
return data
| {"golden_diff": "diff --git a/website/members/services.py b/website/members/services.py\n--- a/website/members/services.py\n+++ b/website/members/services.py\n@@ -40,10 +40,13 @@\n name = membership.group.name\n if data.get(name):\n data[name][\"periods\"].append(period)\n- if data[name][\"earliest\"] > membership.since:\n- data[name][\"earliest\"] = membership.since\n- if data[name][\"latest\"] < membership.until:\n- data[name][\"latest\"] = membership.until\n+ if data[name][\"earliest\"] > period[\"since\"]:\n+ data[name][\"earliest\"] = period[\"since\"]\n+ if period[\"until\"] is None or (\n+ data[name][\"latest\"] is not None\n+ and data[name][\"latest\"] < period[\"until\"]\n+ ):\n+ data[name][\"latest\"] = period[\"until\"]\n data[name][\"periods\"].sort(key=lambda x: x[\"since\"])\n else:\n data[name] = {\n@@ -52,8 +55,8 @@\n \"name\": name,\n \"periods\": [period],\n \"url\": settings.BASE_URL + membership.group.get_absolute_url(),\n- \"earliest\": membership.since,\n- \"latest\": membership.until,\n+ \"earliest\": period[\"since\"],\n+ \"latest\": period[\"until\"],\n }\n return data\n", "issue": "TypeError: '<' not supported between instances of 'datetime.date' and 'NoneType'\nSentry Issue: [CONCREXIT-3T](https://sentry.io/organizations/thalia/issues/1937285207/?referrer=github_integration)\n\n```\nTypeError: '<' not supported between instances of 'datetime.date' and 'NoneType'\n(5 additional frame(s) were not displayed)\n...\n File \"django/views/generic/base.py\", line 98, in dispatch\n return handler(request, *args, **kwargs)\n File \"django/views/generic/detail.py\", line 107, in get\n context = self.get_context_data(object=self.object)\n File \"members/views.py\", line 181, in get_context_data\n societies = member_societies(member)\n File \"members/services.py\", line 91, in member_societies\n societies = _member_group_memberships(\n File \"members/services.py\", line 45, in _member_group_memberships\n if data[name][\"latest\"] < membership.until:\n```\n", "before_files": [{"content": "\"\"\"Services defined in the members package\"\"\"\nfrom datetime import date\nfrom typing import Callable, List, Dict, Any\n\nfrom django.conf import settings\nfrom django.db.models import Q, Count\nfrom django.utils import timezone\nfrom django.utils.translation import gettext\n\nfrom members import emails\nfrom members.models import Membership, Member\nfrom utils.snippets import datetime_to_lectureyear\n\n\ndef _member_group_memberships(\n member: Member, condition: Callable[[Membership], bool]\n) -> Dict[str, Any]:\n \"\"\"\n Determines the group membership of a user based on a condition\n :return: Object with group memberships\n \"\"\"\n memberships = member.membergroupmembership_set.all()\n data = {}\n\n for membership in memberships:\n if not condition(membership):\n continue\n period = {\n \"since\": membership.since,\n \"until\": membership.until,\n \"chair\": membership.chair,\n }\n\n if hasattr(membership.group, \"board\"):\n period[\"role\"] = membership.role\n\n if membership.until is None and hasattr(membership.group, \"board\"):\n period[\"until\"] = membership.group.board.until\n\n name = membership.group.name\n if data.get(name):\n data[name][\"periods\"].append(period)\n if data[name][\"earliest\"] > membership.since:\n data[name][\"earliest\"] = membership.since\n if data[name][\"latest\"] < membership.until:\n data[name][\"latest\"] = membership.until\n data[name][\"periods\"].sort(key=lambda x: x[\"since\"])\n else:\n data[name] = {\n \"pk\": membership.group.pk,\n 
\"active\": membership.group.active,\n \"name\": name,\n \"periods\": [period],\n \"url\": settings.BASE_URL + membership.group.get_absolute_url(),\n \"earliest\": membership.since,\n \"latest\": membership.until,\n }\n return data\n\n\ndef member_achievements(member) -> List:\n \"\"\"\n Derives a list of achievements of a member\n Committee and board memberships + mentorships\n \"\"\"\n achievements = _member_group_memberships(\n member,\n lambda membership: (\n hasattr(membership.group, \"board\") or hasattr(membership.group, \"committee\")\n ),\n )\n\n mentor_years = member.mentorship_set.all()\n for mentor_year in mentor_years:\n name = \"Mentor in {}\".format(mentor_year.year)\n # Ensure mentorships appear last but are sorted\n earliest = date.today()\n earliest = earliest.replace(year=earliest.year + mentor_year.year)\n if not achievements.get(name):\n achievements[name] = {\n \"name\": name,\n \"earliest\": earliest,\n }\n return sorted(achievements.values(), key=lambda x: x[\"earliest\"])\n\n\ndef member_societies(member) -> List:\n \"\"\"\n Derives a list of societies a member was part of\n \"\"\"\n societies = _member_group_memberships(\n member, lambda membership: (hasattr(membership.group, \"society\"))\n )\n return sorted(societies.values(), key=lambda x: x[\"earliest\"])\n\n\ndef gen_stats_member_type() -> Dict[str, int]:\n \"\"\"\n Generate a dictionary where every key is a member type with\n the value being the number of current members of that type\n \"\"\"\n\n data = {}\n for key, display in Membership.MEMBERSHIP_TYPES:\n data[str(display)] = (\n Membership.objects.filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n return data\n\n\ndef gen_stats_year() -> Dict[str, Dict[str, int]]:\n \"\"\"\n Generate list with 6 entries, where each entry represents the total amount\n of Thalia members in a year. 
The sixth element contains all the multi-year\n students.\n \"\"\"\n stats_year = {}\n current_year = datetime_to_lectureyear(date.today())\n\n for i in range(5):\n new = {}\n for key, _ in Membership.MEMBERSHIP_TYPES:\n new[key] = (\n Membership.objects.filter(user__profile__starting_year=current_year - i)\n .filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n stats_year[str(current_year - i)] = new\n\n # Add multi year members\n new = {}\n for key, _ in Membership.MEMBERSHIP_TYPES:\n new[key] = (\n Membership.objects.filter(user__profile__starting_year__lt=current_year - 4)\n .filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n stats_year[str(gettext(\"Older\"))] = new\n\n return stats_year\n\n\ndef verify_email_change(change_request) -> None:\n \"\"\"\n Mark the email change request as verified\n\n :param change_request: the email change request\n \"\"\"\n change_request.verified = True\n change_request.save()\n\n process_email_change(change_request)\n\n\ndef confirm_email_change(change_request) -> None:\n \"\"\"\n Mark the email change request as verified\n\n :param change_request: the email change request\n \"\"\"\n change_request.confirmed = True\n change_request.save()\n\n process_email_change(change_request)\n\n\ndef process_email_change(change_request) -> None:\n \"\"\"\n Change the user's email address if the request was completed and\n send the completion email\n\n :param change_request: the email change request\n \"\"\"\n if not change_request.completed:\n return\n\n member = change_request.member\n member.email = change_request.email\n member.save()\n\n emails.send_email_change_completion_message(change_request)\n\n\ndef execute_data_minimisation(dry_run=False, members=None) -> List[Member]:\n \"\"\"\n Clean the profiles of members/users of whom the last membership ended\n at least 31 days ago\n\n :param dry_run: does not really remove data if True\n :param members: queryset of members to process, optional\n :return: list of processed members\n \"\"\"\n if not members:\n members = Member.objects\n members = (\n members.annotate(membership_count=Count(\"membership\"))\n .exclude(\n (\n Q(membership__until__isnull=True)\n | Q(membership__until__gt=timezone.now().date())\n )\n & Q(membership_count__gt=0)\n )\n .distinct()\n .prefetch_related(\"membership_set\", \"profile\")\n )\n deletion_period = timezone.now().date() - timezone.timedelta(days=31)\n processed_members = []\n for member in members:\n if (\n member.latest_membership is None\n or member.latest_membership.until <= deletion_period\n ):\n processed_members.append(member)\n profile = member.profile\n profile.student_number = None\n profile.phone_number = None\n profile.address_street = None\n profile.address_street2 = None\n profile.address_postal_code = None\n profile.address_city = None\n profile.address_country = None\n profile.birthday = None\n profile.emergency_contact_phone_number = None\n profile.emergency_contact = None\n member.bank_accounts.all().delete()\n if not dry_run:\n profile.save()\n\n return processed_members\n", "path": "website/members/services.py"}], "after_files": [{"content": "\"\"\"Services defined in the members package\"\"\"\nfrom datetime import date\nfrom typing import Callable, List, Dict, Any\n\nfrom django.conf import settings\nfrom django.db.models import Q, Count\nfrom django.utils import timezone\nfrom django.utils.translation 
import gettext\n\nfrom members import emails\nfrom members.models import Membership, Member\nfrom utils.snippets import datetime_to_lectureyear\n\n\ndef _member_group_memberships(\n member: Member, condition: Callable[[Membership], bool]\n) -> Dict[str, Any]:\n \"\"\"\n Determines the group membership of a user based on a condition\n :return: Object with group memberships\n \"\"\"\n memberships = member.membergroupmembership_set.all()\n data = {}\n\n for membership in memberships:\n if not condition(membership):\n continue\n period = {\n \"since\": membership.since,\n \"until\": membership.until,\n \"chair\": membership.chair,\n }\n\n if hasattr(membership.group, \"board\"):\n period[\"role\"] = membership.role\n\n if membership.until is None and hasattr(membership.group, \"board\"):\n period[\"until\"] = membership.group.board.until\n\n name = membership.group.name\n if data.get(name):\n data[name][\"periods\"].append(period)\n if data[name][\"earliest\"] > period[\"since\"]:\n data[name][\"earliest\"] = period[\"since\"]\n if period[\"until\"] is None or (\n data[name][\"latest\"] is not None\n and data[name][\"latest\"] < period[\"until\"]\n ):\n data[name][\"latest\"] = period[\"until\"]\n data[name][\"periods\"].sort(key=lambda x: x[\"since\"])\n else:\n data[name] = {\n \"pk\": membership.group.pk,\n \"active\": membership.group.active,\n \"name\": name,\n \"periods\": [period],\n \"url\": settings.BASE_URL + membership.group.get_absolute_url(),\n \"earliest\": period[\"since\"],\n \"latest\": period[\"until\"],\n }\n return data\n\n\ndef member_achievements(member) -> List:\n \"\"\"\n Derives a list of achievements of a member\n Committee and board memberships + mentorships\n \"\"\"\n achievements = _member_group_memberships(\n member,\n lambda membership: (\n hasattr(membership.group, \"board\") or hasattr(membership.group, \"committee\")\n ),\n )\n\n mentor_years = member.mentorship_set.all()\n for mentor_year in mentor_years:\n name = \"Mentor in {}\".format(mentor_year.year)\n # Ensure mentorships appear last but are sorted\n earliest = date.today()\n earliest = earliest.replace(year=earliest.year + mentor_year.year)\n if not achievements.get(name):\n achievements[name] = {\n \"name\": name,\n \"earliest\": earliest,\n }\n return sorted(achievements.values(), key=lambda x: x[\"earliest\"])\n\n\ndef member_societies(member) -> List:\n \"\"\"\n Derives a list of societies a member was part of\n \"\"\"\n societies = _member_group_memberships(\n member, lambda membership: (hasattr(membership.group, \"society\"))\n )\n return sorted(societies.values(), key=lambda x: x[\"earliest\"])\n\n\ndef gen_stats_member_type() -> Dict[str, int]:\n \"\"\"\n Generate a dictionary where every key is a member type with\n the value being the number of current members of that type\n \"\"\"\n\n data = {}\n for key, display in Membership.MEMBERSHIP_TYPES:\n data[str(display)] = (\n Membership.objects.filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n return data\n\n\ndef gen_stats_year() -> Dict[str, Dict[str, int]]:\n \"\"\"\n Generate list with 6 entries, where each entry represents the total amount\n of Thalia members in a year. 
The sixth element contains all the multi-year\n students.\n \"\"\"\n stats_year = {}\n current_year = datetime_to_lectureyear(date.today())\n\n for i in range(5):\n new = {}\n for key, _ in Membership.MEMBERSHIP_TYPES:\n new[key] = (\n Membership.objects.filter(user__profile__starting_year=current_year - i)\n .filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n stats_year[str(current_year - i)] = new\n\n # Add multi year members\n new = {}\n for key, _ in Membership.MEMBERSHIP_TYPES:\n new[key] = (\n Membership.objects.filter(user__profile__starting_year__lt=current_year - 4)\n .filter(since__lte=date.today())\n .filter(Q(until__isnull=True) | Q(until__gt=date.today()))\n .filter(type=key)\n .count()\n )\n stats_year[str(gettext(\"Older\"))] = new\n\n return stats_year\n\n\ndef verify_email_change(change_request) -> None:\n \"\"\"\n Mark the email change request as verified\n\n :param change_request: the email change request\n \"\"\"\n change_request.verified = True\n change_request.save()\n\n process_email_change(change_request)\n\n\ndef confirm_email_change(change_request) -> None:\n \"\"\"\n Mark the email change request as verified\n\n :param change_request: the email change request\n \"\"\"\n change_request.confirmed = True\n change_request.save()\n\n process_email_change(change_request)\n\n\ndef process_email_change(change_request) -> None:\n \"\"\"\n Change the user's email address if the request was completed and\n send the completion email\n\n :param change_request: the email change request\n \"\"\"\n if not change_request.completed:\n return\n\n member = change_request.member\n member.email = change_request.email\n member.save()\n\n emails.send_email_change_completion_message(change_request)\n\n\ndef execute_data_minimisation(dry_run=False, members=None) -> List[Member]:\n \"\"\"\n Clean the profiles of members/users of whom the last membership ended\n at least 31 days ago\n\n :param dry_run: does not really remove data if True\n :param members: queryset of members to process, optional\n :return: list of processed members\n \"\"\"\n if not members:\n members = Member.objects\n members = (\n members.annotate(membership_count=Count(\"membership\"))\n .exclude(\n (\n Q(membership__until__isnull=True)\n | Q(membership__until__gt=timezone.now().date())\n )\n & Q(membership_count__gt=0)\n )\n .distinct()\n .prefetch_related(\"membership_set\", \"profile\")\n )\n deletion_period = timezone.now().date() - timezone.timedelta(days=31)\n processed_members = []\n for member in members:\n if (\n member.latest_membership is None\n or member.latest_membership.until <= deletion_period\n ):\n processed_members.append(member)\n profile = member.profile\n profile.student_number = None\n profile.phone_number = None\n profile.address_street = None\n profile.address_street2 = None\n profile.address_postal_code = None\n profile.address_city = None\n profile.address_country = None\n profile.birthday = None\n profile.emergency_contact_phone_number = None\n profile.emergency_contact = None\n member.bank_accounts.all().delete()\n if not dry_run:\n profile.save()\n\n return processed_members\n", "path": "website/members/services.py"}]} | 2,717 | 308 |
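Note on the record above: the traceback comes from comparing a `datetime.date` against `None` — `membership.until` (and the board-adjusted `period["until"]`) is `None` for an ongoing membership, and `date < None` raises `TypeError`. The patch compares the adjusted `period` values and treats `None` as open-ended, so it always wins as the latest value. Below is a standalone sketch mirroring the patched condition; `update_latest` is an illustrative function name, not part of the project.

```python
# None-aware "latest until" update: None means still active, so it is treated
# as later than any concrete end date.
import datetime

def update_latest(current_latest, until):
    if until is None or (current_latest is not None and current_latest < until):
        return until
    return current_latest

print(update_latest(datetime.date(2020, 9, 1), datetime.date(2021, 9, 1)))  # 2021-09-01
print(update_latest(datetime.date(2020, 9, 1), None))  # None: open-ended wins
print(update_latest(None, datetime.date(2021, 9, 1)))  # None: stays open-ended
```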
gh_patches_debug_10004 | rasdani/github-patches | git_diff | apluslms__a-plus-616 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add max_submissions in the course points API endpoint
This API endpoint `/api/v2/courses/COURSEID/points/USERID/` for course points shows the user's points in exercises. It also includes some exercise info that is also provided by the exercise endpoint (`/api/v2/exercises/ID/`). The points API already contains `max_points` and `points_to_pass`, so why not also include `max_submissions` (maximum number of allowed submissions)?
Relevant code:
https://github.com/apluslms/a-plus/blob/d34f94b7c9a8e4d22cd28d08968b36787bb344c4/exercise/api/views.py#L420
https://github.com/apluslms/a-plus/blob/d34f94b7c9a8e4d22cd28d08968b36787bb344c4/exercise/api/custom_serializers.py#L76
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `exercise/api/serializers.py`
Content:
```
1 from rest_framework import serializers
2 from rest_framework.reverse import reverse
3
4 from lib.api.fields import NestedHyperlinkedIdentityField
5 from lib.api.serializers import AplusModelSerializer, HtmlViewField
6 from userprofile.api.serializers import UserBriefSerializer
7 from ..models import Submission, SubmittedFile, BaseExercise
8
9
10 __all__ = [
11 'ExerciseBriefSerializer',
12 'SubmissionBriefSerializer',
13 'SubmittedFileBriefSerializer',
14 'SubmitterStatsBriefSerializer',
15 ]
16
17
18 class ExerciseBriefSerializer(AplusModelSerializer):
19 url = NestedHyperlinkedIdentityField(
20 view_name='api:exercise-detail',
21 lookup_map='exercise.api.views.ExerciseViewSet',
22 )
23 display_name = serializers.CharField(source='__str__')
24
25 class Meta(AplusModelSerializer.Meta):
26 model = BaseExercise
27 fields = (
28 'url',
29 'html_url',
30 'display_name',
31 )
32
33
34 class SubmissionBriefSerializer(AplusModelSerializer):
35 #display_name = serializers.CharField(source='__str__')
36
37 class Meta(AplusModelSerializer.Meta):
38 model = Submission
39 fields = (
40 'submission_time',
41 )
42 extra_kwargs = {
43 'url': {
44 'view_name': 'api:submission-detail',
45 'lookup_map': 'exercise.api.views.SubmissionViewSet',
46 }
47 }
48
49
50 class SubmittedFileBriefSerializer(AplusModelSerializer):
51 #url = HtmlViewField()
52 url = NestedHyperlinkedIdentityField(
53 view_name='api:submission-files-detail',
54 lookup_map='exercise.api.views.SubmissionFileViewSet',
55 )
56
57 class Meta(AplusModelSerializer.Meta):
58 model = SubmittedFile
59 fields = (
60 'url',
61 'filename',
62 'param_name',
63 )
64
65
66 class SubmitterStatsBriefSerializer(UserBriefSerializer):
67 stats = serializers.SerializerMethodField()
68
69 def get_stats(self, profile):
70 return reverse(
71 'api:exercise-submitter_stats-detail',
72 kwargs={
73 'exercise_id': self.context['view'].exercise.id,
74 'user_id': profile.user.id,
75 },
76 request=self.context['request']
77 )
78
79 class Meta(UserBriefSerializer.Meta):
80 fields = UserBriefSerializer.Meta.fields + (
81 'stats',
82 )
83
```
Path: `exercise/api/full_serializers.py`
Content:
```
1 from rest_framework import serializers
2
3 from lib.api.fields import NestedHyperlinkedIdentityField
4 from lib.api.serializers import (
5 AlwaysListSerializer,
6 CompositeListSerializer,
7 AplusSerializerMeta,
8 AplusModelSerializerBase,
9 )
10 from course.api.serializers import CourseBriefSerializer
11 from userprofile.api.serializers import UserBriefSerializer, UserListField
12
13 from ..models import Submission
14 from .serializers import (
15 ExerciseBriefSerializer,
16 SubmissionBriefSerializer,
17 SubmittedFileBriefSerializer,
18 )
19
20
21 __all__ = [
22 'ExerciseSerializer',
23 'ExerciseGraderSerializer',
24 'SubmissionSerializer',
25 'SubmissionGraderSerializer',
26 ]
27
28
29 class ExerciseSerializer(ExerciseBriefSerializer):
30 course = CourseBriefSerializer(source='course_instance')
31 post_url = serializers.SerializerMethodField()
32 exercise_info = serializers.JSONField()
33 submissions = NestedHyperlinkedIdentityField(
34 view_name='api:exercise-submissions-list',
35 lookup_map='exercise.api.views.ExerciseViewSet',
36 )
37 my_submissions = NestedHyperlinkedIdentityField(
38 view_name='api:exercise-submissions-detail',
39 lookup_map={
40 'exercise_id': 'id',
41 'user_id': lambda o=None: 'me',
42 },
43 )
44 my_stats = NestedHyperlinkedIdentityField(
45 view_name='api:exercise-submitter_stats-detail',
46 lookup_map={
47 'exercise_id': 'id',
48 'user_id': lambda o=None: 'me',
49 },
50 )
51
52 def get_post_url(self, obj):
53 # FIXME: obj should implement .get_post_url() and that should be used here
54 if obj.is_submittable:
55 request = self.context['request']
56 url = obj.get_url("exercise")
57 return request.build_absolute_uri(url)
58 return None
59
60 class Meta(ExerciseBriefSerializer.Meta):
61 fields = (
62 'name',
63 'course',
64 'is_submittable',
65 'post_url',
66 'max_points',
67 'max_submissions',
68 'exercise_info',
69 'templates',
70 'submissions',
71 'my_submissions',
72 'my_stats',
73 )
74
75
76 class ExerciseGraderSerializer(AplusModelSerializerBase):
77 url = NestedHyperlinkedIdentityField(
78 view_name='api:exercise-grader',
79 lookup_map='exercise.api.views.ExerciseViewSet',
80 )
81 exercise = ExerciseBriefSerializer(source='*')
82
83 class Meta(AplusSerializerMeta):
84 model = Submission
85 fields = (
86 'url',
87 'exercise',
88 )
89
90
91 class SubmitterLinks(AlwaysListSerializer, UserBriefSerializer):
92 pass
93
94
95 class SubmittedFileLinks(AlwaysListSerializer, SubmittedFileBriefSerializer):
96 pass
97
98
99 class SubmissionSerializer(SubmissionBriefSerializer):
100 exercise = ExerciseBriefSerializer()
101 submitters = SubmitterLinks()
102 submission_data = serializers.JSONField()
103 files = SubmittedFileLinks()
104 grader = UserBriefSerializer()
105 grading_data = serializers.JSONField()
106
107 class Meta(SubmissionBriefSerializer.Meta):
108 fields = (
109 'html_url',
110 'exercise',
111 'submitters',
112 'submission_data',
113 'files',
114 'status',
115 'grade',
116 'late_penalty_applied',
117 'grading_time',
118 'grader',
119 'feedback',
120 'assistant_feedback',
121 'grading_data',
122 )
123
124
125 class SubmissionInGraderSerializer(SubmissionBriefSerializer):
126 class Meta(SubmissionBriefSerializer.Meta):
127 fields = (
128 'html_url',
129 )
130
131
132 class SubmissionGraderSerializer(AplusModelSerializerBase):
133 url = NestedHyperlinkedIdentityField(
134 view_name='api:submission-grader',
135 lookup_map='exercise.api.views.SubmissionViewSet',
136 )
137 submission = SubmissionInGraderSerializer(source='*')
138 exercise = ExerciseBriefSerializer()
139
140 class Meta(AplusSerializerMeta):
141 model = Submission
142 fields = (
143 'url',
144 'submission',
145 'exercise',
146 'grading_data',
147 'is_graded',
148 )
149
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/exercise/api/full_serializers.py b/exercise/api/full_serializers.py
--- a/exercise/api/full_serializers.py
+++ b/exercise/api/full_serializers.py
@@ -63,8 +63,6 @@
'course',
'is_submittable',
'post_url',
- 'max_points',
- 'max_submissions',
'exercise_info',
'templates',
'submissions',
diff --git a/exercise/api/serializers.py b/exercise/api/serializers.py
--- a/exercise/api/serializers.py
+++ b/exercise/api/serializers.py
@@ -28,6 +28,8 @@
'url',
'html_url',
'display_name',
+ 'max_points',
+ 'max_submissions',
)
| {"golden_diff": "diff --git a/exercise/api/full_serializers.py b/exercise/api/full_serializers.py\n--- a/exercise/api/full_serializers.py\n+++ b/exercise/api/full_serializers.py\n@@ -63,8 +63,6 @@\n 'course',\n 'is_submittable',\n 'post_url',\n- 'max_points',\n- 'max_submissions',\n 'exercise_info',\n 'templates',\n 'submissions',\ndiff --git a/exercise/api/serializers.py b/exercise/api/serializers.py\n--- a/exercise/api/serializers.py\n+++ b/exercise/api/serializers.py\n@@ -28,6 +28,8 @@\n 'url',\n 'html_url',\n 'display_name',\n+ 'max_points',\n+ 'max_submissions',\n )\n", "issue": "Add max_submissions in the course points API endpoint\nThis API endpoint `/api/v2/courses/COURSEID/points/USERID/` for course points shows the user's points in exercises. It also includes some exercise info that is also provided by the exercise endpoint (`/api/v2/exercises/ID/`). The points API already contains `max_points` and `points_to_pass`, so why not also include `max_submissions` (maximum number of allowed submissions)?\r\n\r\nRelevant code:\r\nhttps://github.com/apluslms/a-plus/blob/d34f94b7c9a8e4d22cd28d08968b36787bb344c4/exercise/api/views.py#L420\r\nhttps://github.com/apluslms/a-plus/blob/d34f94b7c9a8e4d22cd28d08968b36787bb344c4/exercise/api/custom_serializers.py#L76\n", "before_files": [{"content": "from rest_framework import serializers\nfrom rest_framework.reverse import reverse\n\nfrom lib.api.fields import NestedHyperlinkedIdentityField\nfrom lib.api.serializers import AplusModelSerializer, HtmlViewField\nfrom userprofile.api.serializers import UserBriefSerializer\nfrom ..models import Submission, SubmittedFile, BaseExercise\n\n\n__all__ = [\n 'ExerciseBriefSerializer',\n 'SubmissionBriefSerializer',\n 'SubmittedFileBriefSerializer',\n 'SubmitterStatsBriefSerializer',\n]\n\n\nclass ExerciseBriefSerializer(AplusModelSerializer):\n url = NestedHyperlinkedIdentityField(\n view_name='api:exercise-detail',\n lookup_map='exercise.api.views.ExerciseViewSet',\n )\n display_name = serializers.CharField(source='__str__')\n\n class Meta(AplusModelSerializer.Meta):\n model = BaseExercise\n fields = (\n 'url',\n 'html_url',\n 'display_name',\n )\n\n\nclass SubmissionBriefSerializer(AplusModelSerializer):\n #display_name = serializers.CharField(source='__str__')\n\n class Meta(AplusModelSerializer.Meta):\n model = Submission\n fields = (\n 'submission_time',\n )\n extra_kwargs = {\n 'url': {\n 'view_name': 'api:submission-detail',\n 'lookup_map': 'exercise.api.views.SubmissionViewSet',\n }\n }\n\n\nclass SubmittedFileBriefSerializer(AplusModelSerializer):\n #url = HtmlViewField()\n url = NestedHyperlinkedIdentityField(\n view_name='api:submission-files-detail',\n lookup_map='exercise.api.views.SubmissionFileViewSet',\n )\n\n class Meta(AplusModelSerializer.Meta):\n model = SubmittedFile\n fields = (\n 'url',\n 'filename',\n 'param_name',\n )\n\n\nclass SubmitterStatsBriefSerializer(UserBriefSerializer):\n stats = serializers.SerializerMethodField()\n\n def get_stats(self, profile):\n return reverse(\n 'api:exercise-submitter_stats-detail',\n kwargs={\n 'exercise_id': self.context['view'].exercise.id,\n 'user_id': profile.user.id,\n },\n request=self.context['request']\n )\n\n class Meta(UserBriefSerializer.Meta):\n fields = UserBriefSerializer.Meta.fields + (\n 'stats',\n )\n", "path": "exercise/api/serializers.py"}, {"content": "from rest_framework import serializers\n\nfrom lib.api.fields import NestedHyperlinkedIdentityField\nfrom lib.api.serializers import (\n AlwaysListSerializer,\n 
CompositeListSerializer,\n AplusSerializerMeta,\n AplusModelSerializerBase,\n)\nfrom course.api.serializers import CourseBriefSerializer\nfrom userprofile.api.serializers import UserBriefSerializer, UserListField\n\nfrom ..models import Submission\nfrom .serializers import (\n ExerciseBriefSerializer,\n SubmissionBriefSerializer,\n SubmittedFileBriefSerializer,\n)\n\n\n__all__ = [\n 'ExerciseSerializer',\n 'ExerciseGraderSerializer',\n 'SubmissionSerializer',\n 'SubmissionGraderSerializer',\n]\n\n\nclass ExerciseSerializer(ExerciseBriefSerializer):\n course = CourseBriefSerializer(source='course_instance')\n post_url = serializers.SerializerMethodField()\n exercise_info = serializers.JSONField()\n submissions = NestedHyperlinkedIdentityField(\n view_name='api:exercise-submissions-list',\n lookup_map='exercise.api.views.ExerciseViewSet',\n )\n my_submissions = NestedHyperlinkedIdentityField(\n view_name='api:exercise-submissions-detail',\n lookup_map={\n 'exercise_id': 'id',\n 'user_id': lambda o=None: 'me',\n },\n )\n my_stats = NestedHyperlinkedIdentityField(\n view_name='api:exercise-submitter_stats-detail',\n lookup_map={\n 'exercise_id': 'id',\n 'user_id': lambda o=None: 'me',\n },\n )\n\n def get_post_url(self, obj):\n # FIXME: obj should implement .get_post_url() and that should be used here\n if obj.is_submittable:\n request = self.context['request']\n url = obj.get_url(\"exercise\")\n return request.build_absolute_uri(url)\n return None\n\n class Meta(ExerciseBriefSerializer.Meta):\n fields = (\n 'name',\n 'course',\n 'is_submittable',\n 'post_url',\n 'max_points',\n 'max_submissions',\n 'exercise_info',\n 'templates',\n 'submissions',\n 'my_submissions',\n 'my_stats',\n )\n\n\nclass ExerciseGraderSerializer(AplusModelSerializerBase):\n url = NestedHyperlinkedIdentityField(\n view_name='api:exercise-grader',\n lookup_map='exercise.api.views.ExerciseViewSet',\n )\n exercise = ExerciseBriefSerializer(source='*')\n\n class Meta(AplusSerializerMeta):\n model = Submission\n fields = (\n 'url',\n 'exercise',\n )\n\n\nclass SubmitterLinks(AlwaysListSerializer, UserBriefSerializer):\n pass\n\n\nclass SubmittedFileLinks(AlwaysListSerializer, SubmittedFileBriefSerializer):\n pass\n\n\nclass SubmissionSerializer(SubmissionBriefSerializer):\n exercise = ExerciseBriefSerializer()\n submitters = SubmitterLinks()\n submission_data = serializers.JSONField()\n files = SubmittedFileLinks()\n grader = UserBriefSerializer()\n grading_data = serializers.JSONField()\n\n class Meta(SubmissionBriefSerializer.Meta):\n fields = (\n 'html_url',\n 'exercise',\n 'submitters',\n 'submission_data',\n 'files',\n 'status',\n 'grade',\n 'late_penalty_applied',\n 'grading_time',\n 'grader',\n 'feedback',\n 'assistant_feedback',\n 'grading_data',\n )\n\n\nclass SubmissionInGraderSerializer(SubmissionBriefSerializer):\n class Meta(SubmissionBriefSerializer.Meta):\n fields = (\n 'html_url',\n )\n\n\nclass SubmissionGraderSerializer(AplusModelSerializerBase):\n url = NestedHyperlinkedIdentityField(\n view_name='api:submission-grader',\n lookup_map='exercise.api.views.SubmissionViewSet',\n )\n submission = SubmissionInGraderSerializer(source='*')\n exercise = ExerciseBriefSerializer()\n\n class Meta(AplusSerializerMeta):\n model = Submission\n fields = (\n 'url',\n 'submission',\n 'exercise',\n 'grading_data',\n 'is_graded',\n )\n", "path": "exercise/api/full_serializers.py"}], "after_files": [{"content": "from rest_framework import serializers\nfrom rest_framework.reverse import reverse\n\nfrom lib.api.fields import 
NestedHyperlinkedIdentityField\nfrom lib.api.serializers import AplusModelSerializer, HtmlViewField\nfrom userprofile.api.serializers import UserBriefSerializer\nfrom ..models import Submission, SubmittedFile, BaseExercise\n\n\n__all__ = [\n 'ExerciseBriefSerializer',\n 'SubmissionBriefSerializer',\n 'SubmittedFileBriefSerializer',\n 'SubmitterStatsBriefSerializer',\n]\n\n\nclass ExerciseBriefSerializer(AplusModelSerializer):\n url = NestedHyperlinkedIdentityField(\n view_name='api:exercise-detail',\n lookup_map='exercise.api.views.ExerciseViewSet',\n )\n display_name = serializers.CharField(source='__str__')\n\n class Meta(AplusModelSerializer.Meta):\n model = BaseExercise\n fields = (\n 'url',\n 'html_url',\n 'display_name',\n 'max_points',\n 'max_submissions',\n )\n\n\nclass SubmissionBriefSerializer(AplusModelSerializer):\n #display_name = serializers.CharField(source='__str__')\n\n class Meta(AplusModelSerializer.Meta):\n model = Submission\n fields = (\n 'submission_time',\n )\n extra_kwargs = {\n 'url': {\n 'view_name': 'api:submission-detail',\n 'lookup_map': 'exercise.api.views.SubmissionViewSet',\n }\n }\n\n\nclass SubmittedFileBriefSerializer(AplusModelSerializer):\n #url = HtmlViewField()\n url = NestedHyperlinkedIdentityField(\n view_name='api:submission-files-detail',\n lookup_map='exercise.api.views.SubmissionFileViewSet',\n )\n\n class Meta(AplusModelSerializer.Meta):\n model = SubmittedFile\n fields = (\n 'url',\n 'filename',\n 'param_name',\n )\n\n\nclass SubmitterStatsBriefSerializer(UserBriefSerializer):\n stats = serializers.SerializerMethodField()\n\n def get_stats(self, profile):\n return reverse(\n 'api:exercise-submitter_stats-detail',\n kwargs={\n 'exercise_id': self.context['view'].exercise.id,\n 'user_id': profile.user.id,\n },\n request=self.context['request']\n )\n\n class Meta(UserBriefSerializer.Meta):\n fields = UserBriefSerializer.Meta.fields + (\n 'stats',\n )\n", "path": "exercise/api/serializers.py"}, {"content": "from rest_framework import serializers\n\nfrom lib.api.fields import NestedHyperlinkedIdentityField\nfrom lib.api.serializers import (\n AlwaysListSerializer,\n CompositeListSerializer,\n AplusSerializerMeta,\n AplusModelSerializerBase,\n)\nfrom course.api.serializers import CourseBriefSerializer\nfrom userprofile.api.serializers import UserBriefSerializer, UserListField\n\nfrom ..models import Submission\nfrom .serializers import (\n ExerciseBriefSerializer,\n SubmissionBriefSerializer,\n SubmittedFileBriefSerializer,\n)\n\n\n__all__ = [\n 'ExerciseSerializer',\n 'ExerciseGraderSerializer',\n 'SubmissionSerializer',\n 'SubmissionGraderSerializer',\n]\n\n\nclass ExerciseSerializer(ExerciseBriefSerializer):\n course = CourseBriefSerializer(source='course_instance')\n post_url = serializers.SerializerMethodField()\n exercise_info = serializers.JSONField()\n submissions = NestedHyperlinkedIdentityField(\n view_name='api:exercise-submissions-list',\n lookup_map='exercise.api.views.ExerciseViewSet',\n )\n my_submissions = NestedHyperlinkedIdentityField(\n view_name='api:exercise-submissions-detail',\n lookup_map={\n 'exercise_id': 'id',\n 'user_id': lambda o=None: 'me',\n },\n )\n my_stats = NestedHyperlinkedIdentityField(\n view_name='api:exercise-submitter_stats-detail',\n lookup_map={\n 'exercise_id': 'id',\n 'user_id': lambda o=None: 'me',\n },\n )\n\n def get_post_url(self, obj):\n # FIXME: obj should implement .get_post_url() and that should be used here\n if obj.is_submittable:\n request = self.context['request']\n url = 
obj.get_url(\"exercise\")\n return request.build_absolute_uri(url)\n return None\n\n class Meta(ExerciseBriefSerializer.Meta):\n fields = (\n 'name',\n 'course',\n 'is_submittable',\n 'post_url',\n 'exercise_info',\n 'templates',\n 'submissions',\n 'my_submissions',\n 'my_stats',\n )\n\n\nclass ExerciseGraderSerializer(AplusModelSerializerBase):\n url = NestedHyperlinkedIdentityField(\n view_name='api:exercise-grader',\n lookup_map='exercise.api.views.ExerciseViewSet',\n )\n exercise = ExerciseBriefSerializer(source='*')\n\n class Meta(AplusSerializerMeta):\n model = Submission\n fields = (\n 'url',\n 'exercise',\n )\n\n\nclass SubmitterLinks(AlwaysListSerializer, UserBriefSerializer):\n pass\n\n\nclass SubmittedFileLinks(AlwaysListSerializer, SubmittedFileBriefSerializer):\n pass\n\n\nclass SubmissionSerializer(SubmissionBriefSerializer):\n exercise = ExerciseBriefSerializer()\n submitters = SubmitterLinks()\n submission_data = serializers.JSONField()\n files = SubmittedFileLinks()\n grader = UserBriefSerializer()\n grading_data = serializers.JSONField()\n\n class Meta(SubmissionBriefSerializer.Meta):\n fields = (\n 'html_url',\n 'exercise',\n 'submitters',\n 'submission_data',\n 'files',\n 'status',\n 'grade',\n 'late_penalty_applied',\n 'grading_time',\n 'grader',\n 'feedback',\n 'assistant_feedback',\n 'grading_data',\n )\n\n\nclass SubmissionInGraderSerializer(SubmissionBriefSerializer):\n class Meta(SubmissionBriefSerializer.Meta):\n fields = (\n 'html_url',\n )\n\n\nclass SubmissionGraderSerializer(AplusModelSerializerBase):\n url = NestedHyperlinkedIdentityField(\n view_name='api:submission-grader',\n lookup_map='exercise.api.views.SubmissionViewSet',\n )\n submission = SubmissionInGraderSerializer(source='*')\n exercise = ExerciseBriefSerializer()\n\n class Meta(AplusSerializerMeta):\n model = Submission\n fields = (\n 'url',\n 'submission',\n 'exercise',\n 'grading_data',\n 'is_graded',\n )\n", "path": "exercise/api/full_serializers.py"}]} | 2,304 | 176 |
gh_patches_debug_25769 | rasdani/github-patches | git_diff | encode__starlette-1401 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
templateing: jinja2: pass kwargs for environment
I think it would be good to pass something like `env_kwargs` via https://github.com/blueyed/starlette/blob/24c135de71ac56a73f7f797258115941579155bf/starlette/templating.py#L51-L53.
While you can change the env afterwards, it would allow Jinja2 to validate e.g. `enable_async`, and call `load_extensions` etc.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/templating.py`
Content:
```
1 import typing
2 from os import PathLike
3
4 from starlette.background import BackgroundTask
5 from starlette.responses import Response
6 from starlette.types import Receive, Scope, Send
7
8 try:
9 import jinja2
10
11 # @contextfunction renamed to @pass_context in Jinja 3.0, to be removed in 3.1
12 if hasattr(jinja2, "pass_context"):
13 pass_context = jinja2.pass_context
14 else: # pragma: nocover
15 pass_context = jinja2.contextfunction
16 except ImportError: # pragma: nocover
17 jinja2 = None # type: ignore
18
19
20 class _TemplateResponse(Response):
21 media_type = "text/html"
22
23 def __init__(
24 self,
25 template: typing.Any,
26 context: dict,
27 status_code: int = 200,
28 headers: dict = None,
29 media_type: str = None,
30 background: BackgroundTask = None,
31 ):
32 self.template = template
33 self.context = context
34 content = template.render(context)
35 super().__init__(content, status_code, headers, media_type, background)
36
37 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
38 request = self.context.get("request", {})
39 extensions = request.get("extensions", {})
40 if "http.response.template" in extensions:
41 await send(
42 {
43 "type": "http.response.template",
44 "template": self.template,
45 "context": self.context,
46 }
47 )
48 await super().__call__(scope, receive, send)
49
50
51 class Jinja2Templates:
52 """
53 templates = Jinja2Templates("templates")
54
55 return templates.TemplateResponse("index.html", {"request": request})
56 """
57
58 def __init__(self, directory: typing.Union[str, PathLike]) -> None:
59 assert jinja2 is not None, "jinja2 must be installed to use Jinja2Templates"
60 self.env = self._create_env(directory)
61
62 def _create_env(
63 self, directory: typing.Union[str, PathLike]
64 ) -> "jinja2.Environment":
65 @pass_context
66 def url_for(context: dict, name: str, **path_params: typing.Any) -> str:
67 request = context["request"]
68 return request.url_for(name, **path_params)
69
70 loader = jinja2.FileSystemLoader(directory)
71 env = jinja2.Environment(loader=loader, autoescape=True)
72 env.globals["url_for"] = url_for
73 return env
74
75 def get_template(self, name: str) -> "jinja2.Template":
76 return self.env.get_template(name)
77
78 def TemplateResponse(
79 self,
80 name: str,
81 context: dict,
82 status_code: int = 200,
83 headers: dict = None,
84 media_type: str = None,
85 background: BackgroundTask = None,
86 ) -> _TemplateResponse:
87 if "request" not in context:
88 raise ValueError('context must include a "request" key')
89 template = self.get_template(name)
90 return _TemplateResponse(
91 template,
92 context,
93 status_code=status_code,
94 headers=headers,
95 media_type=media_type,
96 background=background,
97 )
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlette/templating.py b/starlette/templating.py
--- a/starlette/templating.py
+++ b/starlette/templating.py
@@ -55,12 +55,14 @@
return templates.TemplateResponse("index.html", {"request": request})
"""
- def __init__(self, directory: typing.Union[str, PathLike]) -> None:
+ def __init__(
+ self, directory: typing.Union[str, PathLike], **env_options: typing.Any
+ ) -> None:
assert jinja2 is not None, "jinja2 must be installed to use Jinja2Templates"
- self.env = self._create_env(directory)
+ self.env = self._create_env(directory, **env_options)
def _create_env(
- self, directory: typing.Union[str, PathLike]
+ self, directory: typing.Union[str, PathLike], **env_options: typing.Any
) -> "jinja2.Environment":
@pass_context
def url_for(context: dict, name: str, **path_params: typing.Any) -> str:
@@ -68,7 +70,10 @@
return request.url_for(name, **path_params)
loader = jinja2.FileSystemLoader(directory)
- env = jinja2.Environment(loader=loader, autoescape=True)
+ env_options.setdefault("loader", loader)
+ env_options.setdefault("autoescape", True)
+
+ env = jinja2.Environment(**env_options)
env.globals["url_for"] = url_for
return env
| {"golden_diff": "diff --git a/starlette/templating.py b/starlette/templating.py\n--- a/starlette/templating.py\n+++ b/starlette/templating.py\n@@ -55,12 +55,14 @@\n return templates.TemplateResponse(\"index.html\", {\"request\": request})\n \"\"\"\n \n- def __init__(self, directory: typing.Union[str, PathLike]) -> None:\n+ def __init__(\n+ self, directory: typing.Union[str, PathLike], **env_options: typing.Any\n+ ) -> None:\n assert jinja2 is not None, \"jinja2 must be installed to use Jinja2Templates\"\n- self.env = self._create_env(directory)\n+ self.env = self._create_env(directory, **env_options)\n \n def _create_env(\n- self, directory: typing.Union[str, PathLike]\n+ self, directory: typing.Union[str, PathLike], **env_options: typing.Any\n ) -> \"jinja2.Environment\":\n @pass_context\n def url_for(context: dict, name: str, **path_params: typing.Any) -> str:\n@@ -68,7 +70,10 @@\n return request.url_for(name, **path_params)\n \n loader = jinja2.FileSystemLoader(directory)\n- env = jinja2.Environment(loader=loader, autoescape=True)\n+ env_options.setdefault(\"loader\", loader)\n+ env_options.setdefault(\"autoescape\", True)\n+\n+ env = jinja2.Environment(**env_options)\n env.globals[\"url_for\"] = url_for\n return env\n", "issue": "templateing: jinja2: pass kwargs for environment\nI think it would be good to pass something like `env_kwargs` via https://github.com/blueyed/starlette/blob/24c135de71ac56a73f7f797258115941579155bf/starlette/templating.py#L51-L53.\r\n\r\nWhile you can change the env afterwards, it would allow Jinja2 to validate e.g. `enable_async`, and call `load_extensions` etc.\n", "before_files": [{"content": "import typing\nfrom os import PathLike\n\nfrom starlette.background import BackgroundTask\nfrom starlette.responses import Response\nfrom starlette.types import Receive, Scope, Send\n\ntry:\n import jinja2\n\n # @contextfunction renamed to @pass_context in Jinja 3.0, to be removed in 3.1\n if hasattr(jinja2, \"pass_context\"):\n pass_context = jinja2.pass_context\n else: # pragma: nocover\n pass_context = jinja2.contextfunction\nexcept ImportError: # pragma: nocover\n jinja2 = None # type: ignore\n\n\nclass _TemplateResponse(Response):\n media_type = \"text/html\"\n\n def __init__(\n self,\n template: typing.Any,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ):\n self.template = template\n self.context = context\n content = template.render(context)\n super().__init__(content, status_code, headers, media_type, background)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n request = self.context.get(\"request\", {})\n extensions = request.get(\"extensions\", {})\n if \"http.response.template\" in extensions:\n await send(\n {\n \"type\": \"http.response.template\",\n \"template\": self.template,\n \"context\": self.context,\n }\n )\n await super().__call__(scope, receive, send)\n\n\nclass Jinja2Templates:\n \"\"\"\n templates = Jinja2Templates(\"templates\")\n\n return templates.TemplateResponse(\"index.html\", {\"request\": request})\n \"\"\"\n\n def __init__(self, directory: typing.Union[str, PathLike]) -> None:\n assert jinja2 is not None, \"jinja2 must be installed to use Jinja2Templates\"\n self.env = self._create_env(directory)\n\n def _create_env(\n self, directory: typing.Union[str, PathLike]\n ) -> \"jinja2.Environment\":\n @pass_context\n def url_for(context: dict, name: str, **path_params: typing.Any) -> str:\n request = 
context[\"request\"]\n return request.url_for(name, **path_params)\n\n loader = jinja2.FileSystemLoader(directory)\n env = jinja2.Environment(loader=loader, autoescape=True)\n env.globals[\"url_for\"] = url_for\n return env\n\n def get_template(self, name: str) -> \"jinja2.Template\":\n return self.env.get_template(name)\n\n def TemplateResponse(\n self,\n name: str,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ) -> _TemplateResponse:\n if \"request\" not in context:\n raise ValueError('context must include a \"request\" key')\n template = self.get_template(name)\n return _TemplateResponse(\n template,\n context,\n status_code=status_code,\n headers=headers,\n media_type=media_type,\n background=background,\n )\n", "path": "starlette/templating.py"}], "after_files": [{"content": "import typing\nfrom os import PathLike\n\nfrom starlette.background import BackgroundTask\nfrom starlette.responses import Response\nfrom starlette.types import Receive, Scope, Send\n\ntry:\n import jinja2\n\n # @contextfunction renamed to @pass_context in Jinja 3.0, to be removed in 3.1\n if hasattr(jinja2, \"pass_context\"):\n pass_context = jinja2.pass_context\n else: # pragma: nocover\n pass_context = jinja2.contextfunction\nexcept ImportError: # pragma: nocover\n jinja2 = None # type: ignore\n\n\nclass _TemplateResponse(Response):\n media_type = \"text/html\"\n\n def __init__(\n self,\n template: typing.Any,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ):\n self.template = template\n self.context = context\n content = template.render(context)\n super().__init__(content, status_code, headers, media_type, background)\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n request = self.context.get(\"request\", {})\n extensions = request.get(\"extensions\", {})\n if \"http.response.template\" in extensions:\n await send(\n {\n \"type\": \"http.response.template\",\n \"template\": self.template,\n \"context\": self.context,\n }\n )\n await super().__call__(scope, receive, send)\n\n\nclass Jinja2Templates:\n \"\"\"\n templates = Jinja2Templates(\"templates\")\n\n return templates.TemplateResponse(\"index.html\", {\"request\": request})\n \"\"\"\n\n def __init__(\n self, directory: typing.Union[str, PathLike], **env_options: typing.Any\n ) -> None:\n assert jinja2 is not None, \"jinja2 must be installed to use Jinja2Templates\"\n self.env = self._create_env(directory, **env_options)\n\n def _create_env(\n self, directory: typing.Union[str, PathLike], **env_options: typing.Any\n ) -> \"jinja2.Environment\":\n @pass_context\n def url_for(context: dict, name: str, **path_params: typing.Any) -> str:\n request = context[\"request\"]\n return request.url_for(name, **path_params)\n\n loader = jinja2.FileSystemLoader(directory)\n env_options.setdefault(\"loader\", loader)\n env_options.setdefault(\"autoescape\", True)\n\n env = jinja2.Environment(**env_options)\n env.globals[\"url_for\"] = url_for\n return env\n\n def get_template(self, name: str) -> \"jinja2.Template\":\n return self.env.get_template(name)\n\n def TemplateResponse(\n self,\n name: str,\n context: dict,\n status_code: int = 200,\n headers: dict = None,\n media_type: str = None,\n background: BackgroundTask = None,\n ) -> _TemplateResponse:\n if \"request\" not in context:\n raise ValueError('context must include a \"request\" key')\n template = 
self.get_template(name)\n return _TemplateResponse(\n template,\n context,\n status_code=status_code,\n headers=headers,\n media_type=media_type,\n background=background,\n )\n", "path": "starlette/templating.py"}]} | 1,268 | 349 |
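
As an illustration of the patch in the record above: once `Jinja2Templates` forwards keyword arguments to `jinja2.Environment`, standard environment options can be passed at construction time. The snippet below is a hypothetical usage sketch, not part of Starlette; the particular options are only examples of valid `jinja2.Environment` keywords.

```python
from starlette.templating import Jinja2Templates

# Assumes the patched Jinja2Templates shown above.
templates = Jinja2Templates(
    "templates",
    trim_blocks=True,
    lstrip_blocks=True,
    extensions=["jinja2.ext.i18n"],
)
```

Because the patch fills in `loader` and `autoescape` via `setdefault`, the `FileSystemLoader` and `autoescape=True` defaults still apply unless a caller overrides them explicitly.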
gh_patches_debug_11341 | rasdani/github-patches | git_diff | LMFDB__lmfdb-5629 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dirichlet character server error
The page https://www.lmfdb.org/Character/Dirichlet/68800/66667 is throwing a server error
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lmfdb/characters/TinyConrey.py`
Content:
```
1 from sage.all import (gcd, Mod, Integer, Integers, Rational, pari,
2 DirichletGroup, CyclotomicField, euler_phi, lcm)
3 from sage.misc.cachefunc import cached_method
4 from sage.modular.dirichlet import DirichletCharacter
5 from lmfdb.logger import make_logger
6 logger = make_logger("TinyConrey")
7
8 def symbol_numerator(cond, parity):
9 # Reference: Sect. 9.3, Montgomery, Hugh L; Vaughan, Robert C. (2007).
10 # Multiplicative number theory. I. Classical theory. Cambridge Studies in
11 # Advanced Mathematics 97
12 #
13 # Let F = Q(\sqrt(d)) with d a non zero squarefree integer then a real
14 # Dirichlet character \chi(n) can be represented as a Kronecker symbol
15 # (m / n) where { m = d if # d = 1 mod 4 else m = 4d if d = 2,3 (mod) 4 }
16 # and m is the discriminant of F. The conductor of \chi is |m|.
17 #
18 # symbol_numerator returns the appropriate Kronecker symbol depending on
19 # the conductor of \chi.
20 m = cond
21 if cond % 2 == 1:
22 if cond % 4 == 3:
23 m = -cond
24 elif cond % 8 == 4:
25 # Fixed cond % 16 == 4 and cond % 16 == 12 were switched in the
26 # previous version of the code.
27 #
28 # Let d be a non zero squarefree integer. If d = 2,3 (mod) 4 and if
29 # cond = 4d = 4 ( 4n + 2) or 4 (4n + 3) = 16 n + 8 or 16n + 12 then we
30 # set m = cond. On the other hand if d = 1 (mod) 4 and cond = 4d = 4
31 # (4n +1) = 16n + 4 then we set m = -cond.
32 if cond % 16 == 4:
33 m = -cond
34 elif cond % 16 == 8:
35 if parity == 1:
36 m = -cond
37 else:
38 return None
39 return m
40
41
42 def kronecker_symbol(m):
43 if m:
44 return r'\(\displaystyle\left(\frac{%s}{\bullet}\right)\)' % (m)
45 else:
46 return None
47
48 ###############################################################################
49 # Conrey character with no call to Jonathan's code
50 # in order to handle big moduli
51
52
53 def get_sage_genvalues(modulus, order, genvalues, zeta_order):
54 """
55 Helper method for computing correct genvalues when constructing
56 the sage character
57 """
58 phi_mod = euler_phi(modulus)
59 exponent_factor = phi_mod / order
60 genvalues_exponent = (x * exponent_factor for x in genvalues)
61 return [x * zeta_order / phi_mod for x in genvalues_exponent]
62
63
64 class PariConreyGroup():
65
66 def __init__(self, modulus):
67 self.modulus = int(modulus)
68 self.G = pari(f"znstar({modulus},1)")
69
70 def gens(self):
71 return Integers(self.modulus).unit_gens()
72
73 def invariants(self):
74 return pari(f"{self.G}.cyc")
75
76 @cached_method
77 def first_chars(self, limit=31):
78 if self.modulus == 1:
79 return [1]
80 r = []
81 for i,c in enumerate(Integers(self.modulus).list_of_elements_of_multiplicative_group()):
82 r.append(c)
83 if i > limit:
84 self.rowtruncate = True
85 break
86 return r
87
88 @cached_method
89 def first_chars_with_orbit(self, limit=31):
90 """ would be nice to compute those directly
91 instead of querying each to db
92 """
93 pass
94
95
96 class ConreyCharacter():
97 """
98 minimal implementation of character from its Conrey index
99 use Pari/GP functions when available
100 """
101
102 def __init__(self, modulus, number):
103 assert gcd(modulus, number)==1
104 self.modulus = Integer(modulus)
105 self.number = Integer(number)
106 self.conrey = Mod(number,modulus)
107 self.G = pari("znstar({},1)".format(modulus))
108 self.G_gens = Integers(self.modulus).unit_gens() # use sage generators
109 self.chi_pari = self.G.znconreylog(self.number)
110 self.chi_0 = None
111 self.indlabel = None
112
113 @property
114 def texname(self):
115 from lmfdb.characters.web_character import WebDirichlet
116 return WebDirichlet.char2tex(self.modulus, self.number)
117
118 @cached_method
119 def modfactor(self):
120 return self.modulus.factor()
121
122 @cached_method
123 def conductor(self):
124 B = pari(f"znconreyconductor({self.G},{self.chi_pari},&chi0)")
125 if B.type() == 't_INT':
126 # means chi is primitive
127 self.chi_0 = self.chi_pari
128 self.indlabel = self.number
129 return int(B)
130 else:
131 self.chi_0 = pari("chi0")
132 G_0 = pari(f"znstar({B},1)")
133 self.indlabel = int(G_0.znconreyexp(self.chi_0))
134 return int(B[0])
135
136 @cached_method
137 def is_primitive(self):
138 return self.conductor() == self.modulus
139
140 @cached_method
141 def parity(self):
142 return self.G.zncharisodd(self.chi_pari)
143
144 def is_odd(self):
145 return self.parity() == 1
146
147 def is_even(self):
148 return self.parity() == 0
149
150 @property
151 def order(self):
152 return self.conrey.multiplicative_order()
153
154 @property
155 def genvalues(self):
156 # This assumes that the generators are ordered in the way
157 # that Sage returns
158 return [self.conreyangle(k) * self.order for k in self.G_gens]
159
160 @property
161 def values_gens(self):
162 # This may be considered the full version of genvalues;
163 # that is, it returns both the generators as well as the values
164 # at those generators
165 return [[k, self.conreyangle(k) * self.order] for k in self.G_gens]
166
167 @cached_method
168 def kronecker_symbol(self):
169 c = self.conductor()
170 p = self.parity()
171 return kronecker_symbol(symbol_numerator(c, p))
172
173 def conreyangle(self,x):
174 return Rational(self.G.chareval(self.chi_pari,x))
175
176 def gauss_sum_numerical(self, a):
177 # There seems to be a bug in pari when a is a multiple of the modulus,
178 # so we deal with that separately
179 if self.modulus.divides(a):
180 if self.conductor() == 1:
181 return euler_phi(self.modulus)
182 else:
183 return Integer(0)
184 else:
185 return self.G.znchargauss(self.chi_pari,a)
186
187 def sage_zeta_order(self, order):
188 return 1 if self.modulus <= 2 else lcm(2,order)
189
190 def sage_character(self, order=None, genvalues=None):
191
192 if order is None:
193 order = self.order
194
195 if genvalues is None:
196 genvalues = self.genvalues
197
198 H = DirichletGroup(self.modulus, base_ring=CyclotomicField(self.sage_zeta_order(order)))
199 M = H._module
200 order_corrected_genvalues = get_sage_genvalues(self.modulus, order, genvalues, self.sage_zeta_order(order))
201 return DirichletCharacter(H,M(order_corrected_genvalues))
202
203 @cached_method
204 def galois_orbit(self, limit=31):
205 """
206 orbit under Galois of the value field,
207 can be used to find first conjugate or list of first conjugates
208 """
209 logger.debug(f"## galois_orbit({limit})")
210 order = self.order
211 if order == 1:
212 return [1]
213 elif order < limit or order * order < limit * self.modulus:
214 logger.debug(f"compute all conjugate characters and return first {limit}")
215 return self.galois_orbit_all(limit)
216 elif limit == 1 or self.modulus < 30 * order:
217 logger.debug(f"compute {limit} first conjugate characters")
218 return self.galois_orbit_search(limit)
219 else:
220 logger.debug(f"galois orbit of size {order} too expansive, give up")
221 return []
222
223 def galois_orbit_all(self, limit=31):
224 # construct all Galois orbit, assume not too large
225 order = self.order
226 chik = self.conrey
227 output = []
228 for k in range(1,order):
229 if gcd(k,order) == 1:
230 output.append(Integer(chik))
231 chik *= self.conrey
232 output.sort()
233 return output[:limit]
234
235 def galois_orbit_search(self, limit=31):
236 # fishing strategy, assume orbit relatively dense
237 order = self.order
238 num = self.number
239 mod = self.modulus
240 kmin = 1
241 width = kmax = min(mod,limit * 50)
242 while True:
243 cmd = f"a=Mod({num},{mod});my(valid(k)=my(l=znlog(k,a,{order}));l&&gcd(l,{order})==1);[ k | k <- [{kmin}..{kmax}], gcd(k,{mod})==1 && valid(k) ]"
244 ans = [Integer(m) for m in pari(cmd)[:limit]]
245 if ans:
246 return ans
247 kmin += width
248 kmax += width
249
250 @property
251 def min_conrey_conj(self):
252 return self.galois_orbit(1)[0]
253
254 @cached_method
255 def kernel_field_poly(self):
256 pol = self.G.galoissubcyclo(self.G.charker(self.chi_pari))
257 if self.order <= 12:
258 pol = pol.polredabs()
259 return pol
260
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lmfdb/characters/TinyConrey.py b/lmfdb/characters/TinyConrey.py
--- a/lmfdb/characters/TinyConrey.py
+++ b/lmfdb/characters/TinyConrey.py
@@ -213,7 +213,7 @@
elif order < limit or order * order < limit * self.modulus:
logger.debug(f"compute all conjugate characters and return first {limit}")
return self.galois_orbit_all(limit)
- elif limit == 1 or self.modulus < 30 * order:
+ elif limit == 1 or self.modulus <= 1000000:
logger.debug(f"compute {limit} first conjugate characters")
return self.galois_orbit_search(limit)
else:
| {"golden_diff": "diff --git a/lmfdb/characters/TinyConrey.py b/lmfdb/characters/TinyConrey.py\n--- a/lmfdb/characters/TinyConrey.py\n+++ b/lmfdb/characters/TinyConrey.py\n@@ -213,7 +213,7 @@\n elif order < limit or order * order < limit * self.modulus:\n logger.debug(f\"compute all conjugate characters and return first {limit}\")\n return self.galois_orbit_all(limit)\n- elif limit == 1 or self.modulus < 30 * order:\n+ elif limit == 1 or self.modulus <= 1000000:\n logger.debug(f\"compute {limit} first conjugate characters\")\n return self.galois_orbit_search(limit)\n else:\n", "issue": "Dirichlet character server error\nThe page https://www.lmfdb.org/Character/Dirichlet/68800/66667 is throwing a server error\n", "before_files": [{"content": "from sage.all import (gcd, Mod, Integer, Integers, Rational, pari,\n DirichletGroup, CyclotomicField, euler_phi, lcm)\nfrom sage.misc.cachefunc import cached_method\nfrom sage.modular.dirichlet import DirichletCharacter\nfrom lmfdb.logger import make_logger\nlogger = make_logger(\"TinyConrey\")\n\ndef symbol_numerator(cond, parity):\n # Reference: Sect. 9.3, Montgomery, Hugh L; Vaughan, Robert C. (2007).\n # Multiplicative number theory. I. Classical theory. Cambridge Studies in\n # Advanced Mathematics 97\n #\n # Let F = Q(\\sqrt(d)) with d a non zero squarefree integer then a real\n # Dirichlet character \\chi(n) can be represented as a Kronecker symbol\n # (m / n) where { m = d if # d = 1 mod 4 else m = 4d if d = 2,3 (mod) 4 }\n # and m is the discriminant of F. The conductor of \\chi is |m|.\n #\n # symbol_numerator returns the appropriate Kronecker symbol depending on\n # the conductor of \\chi.\n m = cond\n if cond % 2 == 1:\n if cond % 4 == 3:\n m = -cond\n elif cond % 8 == 4:\n # Fixed cond % 16 == 4 and cond % 16 == 12 were switched in the\n # previous version of the code.\n #\n # Let d be a non zero squarefree integer. If d = 2,3 (mod) 4 and if\n # cond = 4d = 4 ( 4n + 2) or 4 (4n + 3) = 16 n + 8 or 16n + 12 then we\n # set m = cond. 
On the other hand if d = 1 (mod) 4 and cond = 4d = 4\n # (4n +1) = 16n + 4 then we set m = -cond.\n if cond % 16 == 4:\n m = -cond\n elif cond % 16 == 8:\n if parity == 1:\n m = -cond\n else:\n return None\n return m\n\n\ndef kronecker_symbol(m):\n if m:\n return r'\\(\\displaystyle\\left(\\frac{%s}{\\bullet}\\right)\\)' % (m)\n else:\n return None\n\n###############################################################################\n# Conrey character with no call to Jonathan's code\n# in order to handle big moduli\n\n\ndef get_sage_genvalues(modulus, order, genvalues, zeta_order):\n \"\"\"\n Helper method for computing correct genvalues when constructing\n the sage character\n \"\"\"\n phi_mod = euler_phi(modulus)\n exponent_factor = phi_mod / order\n genvalues_exponent = (x * exponent_factor for x in genvalues)\n return [x * zeta_order / phi_mod for x in genvalues_exponent]\n\n\nclass PariConreyGroup():\n\n def __init__(self, modulus):\n self.modulus = int(modulus)\n self.G = pari(f\"znstar({modulus},1)\")\n\n def gens(self):\n return Integers(self.modulus).unit_gens()\n\n def invariants(self):\n return pari(f\"{self.G}.cyc\")\n\n @cached_method\n def first_chars(self, limit=31):\n if self.modulus == 1:\n return [1]\n r = []\n for i,c in enumerate(Integers(self.modulus).list_of_elements_of_multiplicative_group()):\n r.append(c)\n if i > limit:\n self.rowtruncate = True\n break\n return r\n\n @cached_method\n def first_chars_with_orbit(self, limit=31):\n \"\"\" would be nice to compute those directly\n instead of querying each to db\n \"\"\"\n pass\n\n\nclass ConreyCharacter():\n \"\"\"\n minimal implementation of character from its Conrey index\n use Pari/GP functions when available\n \"\"\"\n\n def __init__(self, modulus, number):\n assert gcd(modulus, number)==1\n self.modulus = Integer(modulus)\n self.number = Integer(number)\n self.conrey = Mod(number,modulus)\n self.G = pari(\"znstar({},1)\".format(modulus))\n self.G_gens = Integers(self.modulus).unit_gens() # use sage generators\n self.chi_pari = self.G.znconreylog(self.number)\n self.chi_0 = None\n self.indlabel = None\n\n @property\n def texname(self):\n from lmfdb.characters.web_character import WebDirichlet\n return WebDirichlet.char2tex(self.modulus, self.number)\n\n @cached_method\n def modfactor(self):\n return self.modulus.factor()\n\n @cached_method\n def conductor(self):\n B = pari(f\"znconreyconductor({self.G},{self.chi_pari},&chi0)\")\n if B.type() == 't_INT':\n # means chi is primitive\n self.chi_0 = self.chi_pari\n self.indlabel = self.number\n return int(B)\n else:\n self.chi_0 = pari(\"chi0\")\n G_0 = pari(f\"znstar({B},1)\")\n self.indlabel = int(G_0.znconreyexp(self.chi_0))\n return int(B[0])\n\n @cached_method\n def is_primitive(self):\n return self.conductor() == self.modulus\n\n @cached_method\n def parity(self):\n return self.G.zncharisodd(self.chi_pari)\n\n def is_odd(self):\n return self.parity() == 1\n\n def is_even(self):\n return self.parity() == 0\n\n @property\n def order(self):\n return self.conrey.multiplicative_order()\n\n @property\n def genvalues(self):\n # This assumes that the generators are ordered in the way\n # that Sage returns\n return [self.conreyangle(k) * self.order for k in self.G_gens]\n\n @property\n def values_gens(self):\n # This may be considered the full version of genvalues;\n # that is, it returns both the generators as well as the values\n # at those generators\n return [[k, self.conreyangle(k) * self.order] for k in self.G_gens]\n\n @cached_method\n def kronecker_symbol(self):\n 
c = self.conductor()\n p = self.parity()\n return kronecker_symbol(symbol_numerator(c, p))\n\n def conreyangle(self,x):\n return Rational(self.G.chareval(self.chi_pari,x))\n\n def gauss_sum_numerical(self, a):\n # There seems to be a bug in pari when a is a multiple of the modulus,\n # so we deal with that separately\n if self.modulus.divides(a):\n if self.conductor() == 1:\n return euler_phi(self.modulus)\n else:\n return Integer(0)\n else:\n return self.G.znchargauss(self.chi_pari,a)\n\n def sage_zeta_order(self, order):\n return 1 if self.modulus <= 2 else lcm(2,order)\n\n def sage_character(self, order=None, genvalues=None):\n\n if order is None:\n order = self.order\n\n if genvalues is None:\n genvalues = self.genvalues\n\n H = DirichletGroup(self.modulus, base_ring=CyclotomicField(self.sage_zeta_order(order)))\n M = H._module\n order_corrected_genvalues = get_sage_genvalues(self.modulus, order, genvalues, self.sage_zeta_order(order))\n return DirichletCharacter(H,M(order_corrected_genvalues))\n\n @cached_method\n def galois_orbit(self, limit=31):\n \"\"\"\n orbit under Galois of the value field,\n can be used to find first conjugate or list of first conjugates\n \"\"\"\n logger.debug(f\"## galois_orbit({limit})\")\n order = self.order\n if order == 1:\n return [1]\n elif order < limit or order * order < limit * self.modulus:\n logger.debug(f\"compute all conjugate characters and return first {limit}\")\n return self.galois_orbit_all(limit)\n elif limit == 1 or self.modulus < 30 * order:\n logger.debug(f\"compute {limit} first conjugate characters\")\n return self.galois_orbit_search(limit)\n else:\n logger.debug(f\"galois orbit of size {order} too expansive, give up\")\n return []\n\n def galois_orbit_all(self, limit=31):\n # construct all Galois orbit, assume not too large\n order = self.order\n chik = self.conrey\n output = []\n for k in range(1,order):\n if gcd(k,order) == 1:\n output.append(Integer(chik))\n chik *= self.conrey\n output.sort()\n return output[:limit]\n\n def galois_orbit_search(self, limit=31):\n # fishing strategy, assume orbit relatively dense\n order = self.order\n num = self.number\n mod = self.modulus\n kmin = 1\n width = kmax = min(mod,limit * 50)\n while True:\n cmd = f\"a=Mod({num},{mod});my(valid(k)=my(l=znlog(k,a,{order}));l&&gcd(l,{order})==1);[ k | k <- [{kmin}..{kmax}], gcd(k,{mod})==1 && valid(k) ]\"\n ans = [Integer(m) for m in pari(cmd)[:limit]]\n if ans:\n return ans\n kmin += width\n kmax += width\n\n @property\n def min_conrey_conj(self):\n return self.galois_orbit(1)[0]\n\n @cached_method\n def kernel_field_poly(self):\n pol = self.G.galoissubcyclo(self.G.charker(self.chi_pari))\n if self.order <= 12:\n pol = pol.polredabs()\n return pol\n", "path": "lmfdb/characters/TinyConrey.py"}], "after_files": [{"content": "from sage.all import (gcd, Mod, Integer, Integers, Rational, pari,\n DirichletGroup, CyclotomicField, euler_phi, lcm)\nfrom sage.misc.cachefunc import cached_method\nfrom sage.modular.dirichlet import DirichletCharacter\nfrom lmfdb.logger import make_logger\nlogger = make_logger(\"TinyConrey\")\n\ndef symbol_numerator(cond, parity):\n # Reference: Sect. 9.3, Montgomery, Hugh L; Vaughan, Robert C. (2007).\n # Multiplicative number theory. I. Classical theory. 
Cambridge Studies in\n # Advanced Mathematics 97\n #\n # Let F = Q(\\sqrt(d)) with d a non zero squarefree integer then a real\n # Dirichlet character \\chi(n) can be represented as a Kronecker symbol\n # (m / n) where { m = d if # d = 1 mod 4 else m = 4d if d = 2,3 (mod) 4 }\n # and m is the discriminant of F. The conductor of \\chi is |m|.\n #\n # symbol_numerator returns the appropriate Kronecker symbol depending on\n # the conductor of \\chi.\n m = cond\n if cond % 2 == 1:\n if cond % 4 == 3:\n m = -cond\n elif cond % 8 == 4:\n # Fixed cond % 16 == 4 and cond % 16 == 12 were switched in the\n # previous version of the code.\n #\n # Let d be a non zero squarefree integer. If d = 2,3 (mod) 4 and if\n # cond = 4d = 4 ( 4n + 2) or 4 (4n + 3) = 16 n + 8 or 16n + 12 then we\n # set m = cond. On the other hand if d = 1 (mod) 4 and cond = 4d = 4\n # (4n +1) = 16n + 4 then we set m = -cond.\n if cond % 16 == 4:\n m = -cond\n elif cond % 16 == 8:\n if parity == 1:\n m = -cond\n else:\n return None\n return m\n\n\ndef kronecker_symbol(m):\n if m:\n return r'\\(\\displaystyle\\left(\\frac{%s}{\\bullet}\\right)\\)' % (m)\n else:\n return None\n\n###############################################################################\n# Conrey character with no call to Jonathan's code\n# in order to handle big moduli\n\n\ndef get_sage_genvalues(modulus, order, genvalues, zeta_order):\n \"\"\"\n Helper method for computing correct genvalues when constructing\n the sage character\n \"\"\"\n phi_mod = euler_phi(modulus)\n exponent_factor = phi_mod / order\n genvalues_exponent = (x * exponent_factor for x in genvalues)\n return [x * zeta_order / phi_mod for x in genvalues_exponent]\n\n\nclass PariConreyGroup():\n\n def __init__(self, modulus):\n self.modulus = int(modulus)\n self.G = pari(f\"znstar({modulus},1)\")\n\n def gens(self):\n return Integers(self.modulus).unit_gens()\n\n def invariants(self):\n return pari(f\"{self.G}.cyc\")\n\n @cached_method\n def first_chars(self, limit=31):\n if self.modulus == 1:\n return [1]\n r = []\n for i,c in enumerate(Integers(self.modulus).list_of_elements_of_multiplicative_group()):\n r.append(c)\n if i > limit:\n self.rowtruncate = True\n break\n return r\n\n @cached_method\n def first_chars_with_orbit(self, limit=31):\n \"\"\" would be nice to compute those directly\n instead of querying each to db\n \"\"\"\n pass\n\n\nclass ConreyCharacter():\n \"\"\"\n minimal implementation of character from its Conrey index\n use Pari/GP functions when available\n \"\"\"\n\n def __init__(self, modulus, number):\n assert gcd(modulus, number)==1\n self.modulus = Integer(modulus)\n self.number = Integer(number)\n self.conrey = Mod(number,modulus)\n self.G = pari(\"znstar({},1)\".format(modulus))\n self.G_gens = Integers(self.modulus).unit_gens() # use sage generators\n self.chi_pari = self.G.znconreylog(self.number)\n self.chi_0 = None\n self.indlabel = None\n\n @property\n def texname(self):\n from lmfdb.characters.web_character import WebDirichlet\n return WebDirichlet.char2tex(self.modulus, self.number)\n\n @cached_method\n def modfactor(self):\n return self.modulus.factor()\n\n @cached_method\n def conductor(self):\n B = pari(f\"znconreyconductor({self.G},{self.chi_pari},&chi0)\")\n if B.type() == 't_INT':\n # means chi is primitive\n self.chi_0 = self.chi_pari\n self.indlabel = self.number\n return int(B)\n else:\n self.chi_0 = pari(\"chi0\")\n G_0 = pari(f\"znstar({B},1)\")\n self.indlabel = int(G_0.znconreyexp(self.chi_0))\n return int(B[0])\n\n @cached_method\n def 
is_primitive(self):\n return self.conductor() == self.modulus\n\n @cached_method\n def parity(self):\n return self.G.zncharisodd(self.chi_pari)\n\n def is_odd(self):\n return self.parity() == 1\n\n def is_even(self):\n return self.parity() == 0\n\n @property\n def order(self):\n return self.conrey.multiplicative_order()\n\n @property\n def genvalues(self):\n # This assumes that the generators are ordered in the way\n # that Sage returns\n return [self.conreyangle(k) * self.order for k in self.G_gens]\n\n @property\n def values_gens(self):\n # This may be considered the full version of genvalues;\n # that is, it returns both the generators as well as the values\n # at those generators\n return [[k, self.conreyangle(k) * self.order] for k in self.G_gens]\n\n @cached_method\n def kronecker_symbol(self):\n c = self.conductor()\n p = self.parity()\n return kronecker_symbol(symbol_numerator(c, p))\n\n def conreyangle(self,x):\n return Rational(self.G.chareval(self.chi_pari,x))\n\n def gauss_sum_numerical(self, a):\n # There seems to be a bug in pari when a is a multiple of the modulus,\n # so we deal with that separately\n if self.modulus.divides(a):\n if self.conductor() == 1:\n return euler_phi(self.modulus)\n else:\n return Integer(0)\n else:\n return self.G.znchargauss(self.chi_pari,a)\n\n def sage_zeta_order(self, order):\n return 1 if self.modulus <= 2 else lcm(2,order)\n\n def sage_character(self, order=None, genvalues=None):\n\n if order is None:\n order = self.order\n\n if genvalues is None:\n genvalues = self.genvalues\n\n H = DirichletGroup(self.modulus, base_ring=CyclotomicField(self.sage_zeta_order(order)))\n M = H._module\n order_corrected_genvalues = get_sage_genvalues(self.modulus, order, genvalues, self.sage_zeta_order(order))\n return DirichletCharacter(H,M(order_corrected_genvalues))\n\n @cached_method\n def galois_orbit(self, limit=31):\n \"\"\"\n orbit under Galois of the value field,\n can be used to find first conjugate or list of first conjugates\n \"\"\"\n logger.debug(f\"## galois_orbit({limit})\")\n order = self.order\n if order == 1:\n return [1]\n elif order < limit or order * order < limit * self.modulus:\n logger.debug(f\"compute all conjugate characters and return first {limit}\")\n return self.galois_orbit_all(limit)\n elif limit == 1 or self.modulus <= 1000000:\n logger.debug(f\"compute {limit} first conjugate characters\")\n return self.galois_orbit_search(limit)\n else:\n logger.debug(f\"galois orbit of size {order} too expansive, give up\")\n return []\n\n def galois_orbit_all(self, limit=31):\n # construct all Galois orbit, assume not too large\n order = self.order\n chik = self.conrey\n output = []\n for k in range(1,order):\n if gcd(k,order) == 1:\n output.append(Integer(chik))\n chik *= self.conrey\n output.sort()\n return output[:limit]\n\n def galois_orbit_search(self, limit=31):\n # fishing strategy, assume orbit relatively dense\n order = self.order\n num = self.number\n mod = self.modulus\n kmin = 1\n width = kmax = min(mod,limit * 50)\n while True:\n cmd = f\"a=Mod({num},{mod});my(valid(k)=my(l=znlog(k,a,{order}));l&&gcd(l,{order})==1);[ k | k <- [{kmin}..{kmax}], gcd(k,{mod})==1 && valid(k) ]\"\n ans = [Integer(m) for m in pari(cmd)[:limit]]\n if ans:\n return ans\n kmin += width\n kmax += width\n\n @property\n def min_conrey_conj(self):\n return self.galois_orbit(1)[0]\n\n @cached_method\n def kernel_field_poly(self):\n pol = self.G.galoissubcyclo(self.G.charker(self.chi_pari))\n if self.order <= 12:\n pol = pol.polredabs()\n return pol\n", 
"path": "lmfdb/characters/TinyConrey.py"}]} | 3,281 | 176 |
gh_patches_debug_32314 | rasdani/github-patches | git_diff | rucio__rucio-4959 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error message 'NoSuchBucket' not matched via stringification
The following test uses `str(e)` to find out whether the string contains 'NoSuchBucket', however (at least in boto 2.49.0) that message is contained in `e.message` and the default error stringification (inherited form BotoServerError):
https://github.com/boto/boto/blob/91ba037e54ef521c379263b0ac769c66182527d7/boto/exception.py#L154-L156
does not include that instance attribute.
https://github.com/rucio/rucio/blob/59af37a14e343f938131863969d130f5336e4a28/lib/rucio/rse/protocols/s3boto.py#L247
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/rucio/rse/protocols/s3boto.py`
Content:
```
1 # Copyright European Organization for Nuclear Research (CERN)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # You may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Authors:
9 # - Wen Guan, <[email protected]>, 2014-2017
10 # - Vincent Garonne, <[email protected]>, 2016-2017
11 # - Nicolo Magini, <[email protected]>, 2018
12 # - Hannes Hansen, <[email protected]>, 2019
13 #
14 # PY3K COMPATIBLE
15
16 import os
17 try:
18 # PY2
19 import urlparse
20 except ImportError:
21 # PY3
22 import urllib.parse as urlparse
23 import logging
24
25 import boto
26 from boto import connect_s3
27 from boto.s3.connection import OrdinaryCallingFormat
28 from boto.s3.key import Key
29
30 from rucio.common import exception
31 from rucio.common.config import get_rse_credentials
32
33 from rucio.rse.protocols import protocol
34
35 logging.getLogger('boto').setLevel(logging.INFO)
36
37
38 class Default(protocol.RSEProtocol):
39 """ Implementing access to RSEs using the S3 protocol."""
40
41 def __init__(self, protocol_attr, rse_settings, logger=None):
42 super(Default, self).__init__(protocol_attr, rse_settings, logger=logger)
43 if 'determinism_type' in self.attributes:
44 self.attributes['determinism_type'] = 's3'
45 self.__conn = None
46 self.renaming = False
47 self.overwrite = True
48 self.http_proxy = os.environ.get("http_proxy")
49 self.https_proxy = os.environ.get("https_proxy")
50
51 def _disable_http_proxy(self):
52 """
53 Disable http and https proxy if exists.
54 """
55 if self.http_proxy:
56 del os.environ['http_proxy']
57 if self.https_proxy:
58 del os.environ['https_proxy']
59
60 def _reset_http_proxy(self):
61 """
62 Reset http and https proxy if exists.
63 """
64 if self.http_proxy:
65 os.environ['http_proxy'] = self.http_proxy
66 if self.https_proxy:
67 os.environ['https_proxy'] = self.https_proxy
68
69 def get_bucket_key_name(self, pfn):
70 """
71 Gets boto key for a pfn
72
73 :param pfn: Physical file name
74
75 :returns: bucket name and key name as string
76 """
77 try:
78 parsed = urlparse.urlparse(pfn)
79 hash_path = parsed.path.strip("/")
80
81 pos = hash_path.index("/")
82 bucket_name = hash_path[:pos]
83 key_name = hash_path[pos + 1:]
84
85 return bucket_name, key_name
86 except Exception as e:
87 raise exception.RucioException(str(e))
88
89 def get_bucket_key(self, pfn, create=False, validate=True):
90 """
91 Gets boto key for a pfn
92
93 :param pfn: Physical file name
94 :param create: True if needs to create the key, False if not
95
96 :returns: boto bucket and key object
97 """
98 try:
99 bucket_name, key_name = self.get_bucket_key_name(pfn)
100
101 if create:
102 try:
103 bucket = self.__conn.get_bucket(bucket_name, validate=True)
104 except boto.exception.S3ResponseError as e:
105 if e.status == 404: # bucket not found
106 bucket = self.__conn.create_bucket(bucket_name)
107 else:
108 raise e
109 key = Key(bucket, key_name)
110 else:
111 bucket = self.__conn.get_bucket(bucket_name, validate=False)
112 key = bucket.get_key(key_name, validate=validate)
113 return bucket, key
114 except boto.exception.S3ResponseError as e:
115 if e.status == 404:
116 raise exception.SourceNotFound(str(e))
117 else:
118 raise exception.ServiceUnavailable(e)
119
120 def exists(self, path):
121 """
122 Checks if the requested file is known by the referred RSE.
123
124 :param path: Physical file name
125
126 :returns: True if the file exists, False if it doesn't
127
128 :raises SourceNotFound: if the source file was not found on the referred storage.
129 """
130 try:
131 bucket, key = self.get_bucket_key(path)
132 if key:
133 return True
134 else:
135 return False
136 except exception.SourceNotFound:
137 return False
138 except Exception as e:
139 raise exception.ServiceUnavailable(e)
140
141 def connect(self):
142 """
143 Establishes the actual connection to the referred RSE.
144
145 :param: credentials needed to establish a connection with the stroage.
146
147 :raises RSEAccessDenied: if no connection could be established.
148 """
149 try:
150 scheme, prefix = self.attributes.get('scheme'), self.attributes.get('prefix')
151 netloc, port = self.attributes['hostname'], self.attributes.get('port', 80)
152 service_url = '%(scheme)s://%(netloc)s:%(port)s' % locals()
153
154 access_key, secret_key, is_secure = None, None, None
155 if 'S3_ACCESS_KEY' in os.environ:
156 access_key = os.environ['S3_ACCESS_KEY']
157 if 'S3_SECRET_KEY' in os.environ:
158 secret_key = os.environ['S3_SECRET_KEY']
159 if 'S3_IS_SECURE' in os.environ:
160 if str(os.environ['S3_IS_SECURE']).lower() == 'true':
161 is_secure = True
162 elif str(os.environ['S3_IS_SECURE']).lower() == 'false':
163 is_secure = False
164
165 if is_secure is None or access_key is None or secret_key is None:
166 credentials = get_rse_credentials()
167 self.rse['credentials'] = credentials.get(self.rse['rse'])
168
169 if not access_key:
170 access_key = self.rse['credentials']['access_key']
171 if not secret_key:
172 secret_key = self.rse['credentials']['secret_key']
173 if not is_secure:
174 is_secure = self.rse['credentials'].get('is_secure', {}).\
175 get(service_url, False)
176
177 self._disable_http_proxy()
178 self.__conn = connect_s3(host=self.attributes['hostname'],
179 port=int(port),
180 aws_access_key_id=access_key,
181 aws_secret_access_key=secret_key,
182 is_secure=is_secure,
183 calling_format=OrdinaryCallingFormat())
184 self._reset_http_proxy()
185 except Exception as e:
186 self._reset_http_proxy()
187 raise exception.RSEAccessDenied(e)
188
189 def close(self):
190 """ Closes the connection to RSE."""
191 pass
192
193 def get(self, pfn, dest, transfer_timeout=None):
194 """
195 Provides access to files stored inside connected the RSE.
196
197 :param path: Physical file name of requested file
198 :param dest: Name and path of the files when stored at the client
199 :param transfer_timeout: Transfer timeout (in seconds) - dummy
200
201 :raises DestinationNotAccessible: if the destination storage was not accessible.
202 :raises ServiceUnavailable: if some generic error occured in the library.
203 :raises SourceNotFound: if the source file was not found on the referred storage.
204 """
205 try:
206 bucket, key = self.get_bucket_key(pfn, validate=False)
207 if key is None:
208 raise exception.SourceNotFound('Cannot get the source key from S3')
209 key.get_contents_to_filename(dest)
210 except IOError as e:
211 if e.errno == 2:
212 raise exception.DestinationNotAccessible(e)
213 else:
214 raise exception.ServiceUnavailable(e)
215 except boto.exception.S3ResponseError as e:
216 if e.status == 404:
217 raise exception.SourceNotFound(str(e))
218 else:
219 raise exception.ServiceUnavailable(e)
220 except exception.SourceNotFound as e:
221 raise exception.SourceNotFound(e)
222 except Exception as e:
223 if os.path.exists(dest):
224 os.remove(dest)
225 raise exception.ServiceUnavailable(e)
226
227 def put(self, source, target, source_dir=None, transfer_timeout=None):
228 """
229 Allows to store files inside the referred RSE.
230
231 :param source: path to the source file on the client file system
232 :param target: path to the destination file on the storage
233 :param source_dir: Path where the to be transferred files are stored in the local file system
234 :param transfer_timeout: Transfer timeout (in seconds) - dummy
235
236 :raises DestinationNotAccessible: if the destination storage was not accessible.
237 :raises ServiceUnavailable: if some generic error occured in the library.
238 :raises SourceNotFound: if the source file was not found on the referred storage.
239 """
240 full_name = source_dir + '/' + source if source_dir else source
241 try:
242 bucket, key = self.get_bucket_key(target, validate=False)
243 if key is None:
244 raise exception.DestinationNotAccessible('Cannot get the destionation key from S3')
245 key.set_contents_from_filename(full_name)
246 except boto.exception.S3ResponseError as e:
247 if e.status == 404 and 'NoSuchBucket' in str(e):
248 try:
249 bucket, key = self.get_bucket_key(target, create=True)
250 key.set_contents_from_filename(full_name)
251 except Exception as e:
252 raise exception.ServiceUnavailable(e)
253 else:
254 raise exception.ServiceUnavailable(e)
255 except exception.SourceNotFound as e:
256 raise exception.SourceNotFound(e)
257 except Exception as e:
258 if 'No such file' in str(e):
259 raise exception.SourceNotFound(e)
260 else:
261 raise exception.ServiceUnavailable(e)
262
263 def delete(self, pfn):
264 """
265 Deletes a file from the connected RSE.
266
267 :param path: path to the to be deleted file
268
269 :raises ServiceUnavailable: if some generic error occured in the library.
270 :raises SourceNotFound: if the source file was not found on the referred storage.
271 """
272 try:
273 bucket, key = self.get_bucket_key(pfn)
274 if key is None:
275 raise exception.SourceNotFound('Cannot get the key from S3')
276 key.delete()
277 except exception.SourceNotFound as e:
278 raise exception.SourceNotFound(e)
279 except Exception as e:
280 raise exception.ServiceUnavailable(e)
281
282 def rename(self, pfn, new_pfn):
283 """ Allows to rename a file stored inside the connected RSE.
284
285 :param path: path to the current file on the storage
286 :param new_path: path to the new file on the storage
287
288 :raises DestinationNotAccessible: if the destination storage was not accessible.
289 :raises ServiceUnavailable: if some generic error occured in the library.
290 :raises SourceNotFound: if the source file was not found on the referred storage.
291 """
292 try:
293 bucket, key = self.get_bucket_key(pfn)
294 if key is None:
295 raise exception.SourceNotFound('Cannot get the source key from S3')
296 bucket_name, key_name = self.get_bucket_key_name(new_pfn)
297 key.copy(bucket_name, key_name)
298 key.delete()
299 except exception.SourceNotFound as e:
300 raise exception.SourceNotFound(e)
301 except boto.exception.S3ResponseError as e:
302 if e.status in [404, 403]:
303 raise exception.DestinationNotAccessible(e)
304 else:
305 raise exception.ServiceUnavailable(e)
306 except Exception as e:
307 raise exception.ServiceUnavailable(e)
308
309 def stat(self, pfn):
310 """ Determines the file size in bytes of the provided file.
311
312 :param pfn: The PFN the file.
313
314 :returns: a dict containing the key filesize.
315 """
316 try:
317 bucket, key = self.get_bucket_key(pfn)
318 if key is None:
319 raise exception.SourceNotFound('Cannot get the key from S3')
320 return {'filesize': int(key.size)}
321 except exception.SourceNotFound as e:
322 raise exception.SourceNotFound(e)
323 except Exception as e:
324 raise exception.ServiceUnavailable(e)
325
326 def list(self):
327 try:
328 prefix = self.attributes.get('prefix')
329 prefix = prefix.replace('/', '')
330 bucket = self.__conn.get_bucket(prefix, validate=True)
331 except boto.exception.S3ResponseError as e:
332 raise e
333 return bucket.list()
334
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/rucio/rse/protocols/s3boto.py b/lib/rucio/rse/protocols/s3boto.py
--- a/lib/rucio/rse/protocols/s3boto.py
+++ b/lib/rucio/rse/protocols/s3boto.py
@@ -1,17 +1,26 @@
-# Copyright European Organization for Nuclear Research (CERN)
+# -*- coding: utf-8 -*-
+# Copyright 2014-2021 CERN
#
# Licensed under the Apache License, Version 2.0 (the "License");
-# You may not use this file except in compliance with the License.
+# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
-# http://www.apache.org/licenses/LICENSE-2.0
#
-# Authors:
-# - Wen Guan, <[email protected]>, 2014-2017
-# - Vincent Garonne, <[email protected]>, 2016-2017
-# - Nicolo Magini, <[email protected]>, 2018
-# - Hannes Hansen, <[email protected]>, 2019
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
#
-# PY3K COMPATIBLE
+# Authors:
+# - Wen Guan <[email protected]>, 2014-2017
+# - Vincent Garonne <[email protected]>, 2016-2017
+# - Martin Barisits <[email protected]>, 2016-2017
+# - Nicolo Magini <[email protected]>, 2018
+# - Hannes Hansen <[email protected]>, 2019
+# - Tomas Javurek <[email protected]>, 2020
+# - Mario Lassnig <[email protected]>, 2021
import os
try:
@@ -244,7 +253,7 @@
raise exception.DestinationNotAccessible('Cannot get the destionation key from S3')
key.set_contents_from_filename(full_name)
except boto.exception.S3ResponseError as e:
- if e.status == 404 and 'NoSuchBucket' in str(e):
+ if e.status == 404 and 'NoSuchBucket' in e.error_code:
try:
bucket, key = self.get_bucket_key(target, create=True)
key.set_contents_from_filename(full_name)
| {"golden_diff": "diff --git a/lib/rucio/rse/protocols/s3boto.py b/lib/rucio/rse/protocols/s3boto.py\n--- a/lib/rucio/rse/protocols/s3boto.py\n+++ b/lib/rucio/rse/protocols/s3boto.py\n@@ -1,17 +1,26 @@\n-# Copyright European Organization for Nuclear Research (CERN)\n+# -*- coding: utf-8 -*-\n+# Copyright 2014-2021 CERN\n #\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n-# You may not use this file except in compliance with the License.\n+# you may not use this file except in compliance with the License.\n # You may obtain a copy of the License at\n-# http://www.apache.org/licenses/LICENSE-2.0\n #\n-# Authors:\n-# - Wen Guan, <[email protected]>, 2014-2017\n-# - Vincent Garonne, <[email protected]>, 2016-2017\n-# - Nicolo Magini, <[email protected]>, 2018\n-# - Hannes Hansen, <[email protected]>, 2019\n+# http://www.apache.org/licenses/LICENSE-2.0\n+#\n+# Unless required by applicable law or agreed to in writing, software\n+# distributed under the License is distributed on an \"AS IS\" BASIS,\n+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+# See the License for the specific language governing permissions and\n+# limitations under the License.\n #\n-# PY3K COMPATIBLE\n+# Authors:\n+# - Wen Guan <[email protected]>, 2014-2017\n+# - Vincent Garonne <[email protected]>, 2016-2017\n+# - Martin Barisits <[email protected]>, 2016-2017\n+# - Nicolo Magini <[email protected]>, 2018\n+# - Hannes Hansen <[email protected]>, 2019\n+# - Tomas Javurek <[email protected]>, 2020\n+# - Mario Lassnig <[email protected]>, 2021\n \n import os\n try:\n@@ -244,7 +253,7 @@\n raise exception.DestinationNotAccessible('Cannot get the destionation key from S3')\n key.set_contents_from_filename(full_name)\n except boto.exception.S3ResponseError as e:\n- if e.status == 404 and 'NoSuchBucket' in str(e):\n+ if e.status == 404 and 'NoSuchBucket' in e.error_code:\n try:\n bucket, key = self.get_bucket_key(target, create=True)\n key.set_contents_from_filename(full_name)\n", "issue": "Error message 'NoSuchBucket' not matched via stringification\nThe following test uses `str(e)` to find out whether the string contains 'NoSuchBucket', however (at least in boto 2.49.0) that message is contained in `e.message` and the default error stringification (inherited form BotoServerError):\r\nhttps://github.com/boto/boto/blob/91ba037e54ef521c379263b0ac769c66182527d7/boto/exception.py#L154-L156\r\ndoes not include that instance attribute. 
\r\n\r\nhttps://github.com/rucio/rucio/blob/59af37a14e343f938131863969d130f5336e4a28/lib/rucio/rse/protocols/s3boto.py#L247\n", "before_files": [{"content": "# Copyright European Organization for Nuclear Research (CERN)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# You may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Authors:\n# - Wen Guan, <[email protected]>, 2014-2017\n# - Vincent Garonne, <[email protected]>, 2016-2017\n# - Nicolo Magini, <[email protected]>, 2018\n# - Hannes Hansen, <[email protected]>, 2019\n#\n# PY3K COMPATIBLE\n\nimport os\ntry:\n # PY2\n import urlparse\nexcept ImportError:\n # PY3\n import urllib.parse as urlparse\nimport logging\n\nimport boto\nfrom boto import connect_s3\nfrom boto.s3.connection import OrdinaryCallingFormat\nfrom boto.s3.key import Key\n\nfrom rucio.common import exception\nfrom rucio.common.config import get_rse_credentials\n\nfrom rucio.rse.protocols import protocol\n\nlogging.getLogger('boto').setLevel(logging.INFO)\n\n\nclass Default(protocol.RSEProtocol):\n \"\"\" Implementing access to RSEs using the S3 protocol.\"\"\"\n\n def __init__(self, protocol_attr, rse_settings, logger=None):\n super(Default, self).__init__(protocol_attr, rse_settings, logger=logger)\n if 'determinism_type' in self.attributes:\n self.attributes['determinism_type'] = 's3'\n self.__conn = None\n self.renaming = False\n self.overwrite = True\n self.http_proxy = os.environ.get(\"http_proxy\")\n self.https_proxy = os.environ.get(\"https_proxy\")\n\n def _disable_http_proxy(self):\n \"\"\"\n Disable http and https proxy if exists.\n \"\"\"\n if self.http_proxy:\n del os.environ['http_proxy']\n if self.https_proxy:\n del os.environ['https_proxy']\n\n def _reset_http_proxy(self):\n \"\"\"\n Reset http and https proxy if exists.\n \"\"\"\n if self.http_proxy:\n os.environ['http_proxy'] = self.http_proxy\n if self.https_proxy:\n os.environ['https_proxy'] = self.https_proxy\n\n def get_bucket_key_name(self, pfn):\n \"\"\"\n Gets boto key for a pfn\n\n :param pfn: Physical file name\n\n :returns: bucket name and key name as string\n \"\"\"\n try:\n parsed = urlparse.urlparse(pfn)\n hash_path = parsed.path.strip(\"/\")\n\n pos = hash_path.index(\"/\")\n bucket_name = hash_path[:pos]\n key_name = hash_path[pos + 1:]\n\n return bucket_name, key_name\n except Exception as e:\n raise exception.RucioException(str(e))\n\n def get_bucket_key(self, pfn, create=False, validate=True):\n \"\"\"\n Gets boto key for a pfn\n\n :param pfn: Physical file name\n :param create: True if needs to create the key, False if not\n\n :returns: boto bucket and key object\n \"\"\"\n try:\n bucket_name, key_name = self.get_bucket_key_name(pfn)\n\n if create:\n try:\n bucket = self.__conn.get_bucket(bucket_name, validate=True)\n except boto.exception.S3ResponseError as e:\n if e.status == 404: # bucket not found\n bucket = self.__conn.create_bucket(bucket_name)\n else:\n raise e\n key = Key(bucket, key_name)\n else:\n bucket = self.__conn.get_bucket(bucket_name, validate=False)\n key = bucket.get_key(key_name, validate=validate)\n return bucket, key\n except boto.exception.S3ResponseError as e:\n if e.status == 404:\n raise exception.SourceNotFound(str(e))\n else:\n raise exception.ServiceUnavailable(e)\n\n def exists(self, path):\n \"\"\"\n Checks if the requested file is known by the referred RSE.\n\n :param path: Physical file name\n\n :returns: True if the file 
exists, False if it doesn't\n\n :raises SourceNotFound: if the source file was not found on the referred storage.\n \"\"\"\n try:\n bucket, key = self.get_bucket_key(path)\n if key:\n return True\n else:\n return False\n except exception.SourceNotFound:\n return False\n except Exception as e:\n raise exception.ServiceUnavailable(e)\n\n def connect(self):\n \"\"\"\n Establishes the actual connection to the referred RSE.\n\n :param: credentials needed to establish a connection with the stroage.\n\n :raises RSEAccessDenied: if no connection could be established.\n \"\"\"\n try:\n scheme, prefix = self.attributes.get('scheme'), self.attributes.get('prefix')\n netloc, port = self.attributes['hostname'], self.attributes.get('port', 80)\n service_url = '%(scheme)s://%(netloc)s:%(port)s' % locals()\n\n access_key, secret_key, is_secure = None, None, None\n if 'S3_ACCESS_KEY' in os.environ:\n access_key = os.environ['S3_ACCESS_KEY']\n if 'S3_SECRET_KEY' in os.environ:\n secret_key = os.environ['S3_SECRET_KEY']\n if 'S3_IS_SECURE' in os.environ:\n if str(os.environ['S3_IS_SECURE']).lower() == 'true':\n is_secure = True\n elif str(os.environ['S3_IS_SECURE']).lower() == 'false':\n is_secure = False\n\n if is_secure is None or access_key is None or secret_key is None:\n credentials = get_rse_credentials()\n self.rse['credentials'] = credentials.get(self.rse['rse'])\n\n if not access_key:\n access_key = self.rse['credentials']['access_key']\n if not secret_key:\n secret_key = self.rse['credentials']['secret_key']\n if not is_secure:\n is_secure = self.rse['credentials'].get('is_secure', {}).\\\n get(service_url, False)\n\n self._disable_http_proxy()\n self.__conn = connect_s3(host=self.attributes['hostname'],\n port=int(port),\n aws_access_key_id=access_key,\n aws_secret_access_key=secret_key,\n is_secure=is_secure,\n calling_format=OrdinaryCallingFormat())\n self._reset_http_proxy()\n except Exception as e:\n self._reset_http_proxy()\n raise exception.RSEAccessDenied(e)\n\n def close(self):\n \"\"\" Closes the connection to RSE.\"\"\"\n pass\n\n def get(self, pfn, dest, transfer_timeout=None):\n \"\"\"\n Provides access to files stored inside connected the RSE.\n\n :param path: Physical file name of requested file\n :param dest: Name and path of the files when stored at the client\n :param transfer_timeout: Transfer timeout (in seconds) - dummy\n\n :raises DestinationNotAccessible: if the destination storage was not accessible.\n :raises ServiceUnavailable: if some generic error occured in the library.\n :raises SourceNotFound: if the source file was not found on the referred storage.\n \"\"\"\n try:\n bucket, key = self.get_bucket_key(pfn, validate=False)\n if key is None:\n raise exception.SourceNotFound('Cannot get the source key from S3')\n key.get_contents_to_filename(dest)\n except IOError as e:\n if e.errno == 2:\n raise exception.DestinationNotAccessible(e)\n else:\n raise exception.ServiceUnavailable(e)\n except boto.exception.S3ResponseError as e:\n if e.status == 404:\n raise exception.SourceNotFound(str(e))\n else:\n raise exception.ServiceUnavailable(e)\n except exception.SourceNotFound as e:\n raise exception.SourceNotFound(e)\n except Exception as e:\n if os.path.exists(dest):\n os.remove(dest)\n raise exception.ServiceUnavailable(e)\n\n def put(self, source, target, source_dir=None, transfer_timeout=None):\n \"\"\"\n Allows to store files inside the referred RSE.\n\n :param source: path to the source file on the client file system\n :param target: path to the destination file on the 
storage\n :param source_dir: Path where the to be transferred files are stored in the local file system\n :param transfer_timeout: Transfer timeout (in seconds) - dummy\n\n :raises DestinationNotAccessible: if the destination storage was not accessible.\n :raises ServiceUnavailable: if some generic error occured in the library.\n :raises SourceNotFound: if the source file was not found on the referred storage.\n \"\"\"\n full_name = source_dir + '/' + source if source_dir else source\n try:\n bucket, key = self.get_bucket_key(target, validate=False)\n if key is None:\n raise exception.DestinationNotAccessible('Cannot get the destionation key from S3')\n key.set_contents_from_filename(full_name)\n except boto.exception.S3ResponseError as e:\n if e.status == 404 and 'NoSuchBucket' in str(e):\n try:\n bucket, key = self.get_bucket_key(target, create=True)\n key.set_contents_from_filename(full_name)\n except Exception as e:\n raise exception.ServiceUnavailable(e)\n else:\n raise exception.ServiceUnavailable(e)\n except exception.SourceNotFound as e:\n raise exception.SourceNotFound(e)\n except Exception as e:\n if 'No such file' in str(e):\n raise exception.SourceNotFound(e)\n else:\n raise exception.ServiceUnavailable(e)\n\n def delete(self, pfn):\n \"\"\"\n Deletes a file from the connected RSE.\n\n :param path: path to the to be deleted file\n\n :raises ServiceUnavailable: if some generic error occured in the library.\n :raises SourceNotFound: if the source file was not found on the referred storage.\n \"\"\"\n try:\n bucket, key = self.get_bucket_key(pfn)\n if key is None:\n raise exception.SourceNotFound('Cannot get the key from S3')\n key.delete()\n except exception.SourceNotFound as e:\n raise exception.SourceNotFound(e)\n except Exception as e:\n raise exception.ServiceUnavailable(e)\n\n def rename(self, pfn, new_pfn):\n \"\"\" Allows to rename a file stored inside the connected RSE.\n\n :param path: path to the current file on the storage\n :param new_path: path to the new file on the storage\n\n :raises DestinationNotAccessible: if the destination storage was not accessible.\n :raises ServiceUnavailable: if some generic error occured in the library.\n :raises SourceNotFound: if the source file was not found on the referred storage.\n \"\"\"\n try:\n bucket, key = self.get_bucket_key(pfn)\n if key is None:\n raise exception.SourceNotFound('Cannot get the source key from S3')\n bucket_name, key_name = self.get_bucket_key_name(new_pfn)\n key.copy(bucket_name, key_name)\n key.delete()\n except exception.SourceNotFound as e:\n raise exception.SourceNotFound(e)\n except boto.exception.S3ResponseError as e:\n if e.status in [404, 403]:\n raise exception.DestinationNotAccessible(e)\n else:\n raise exception.ServiceUnavailable(e)\n except Exception as e:\n raise exception.ServiceUnavailable(e)\n\n def stat(self, pfn):\n \"\"\" Determines the file size in bytes of the provided file.\n\n :param pfn: The PFN the file.\n\n :returns: a dict containing the key filesize.\n \"\"\"\n try:\n bucket, key = self.get_bucket_key(pfn)\n if key is None:\n raise exception.SourceNotFound('Cannot get the key from S3')\n return {'filesize': int(key.size)}\n except exception.SourceNotFound as e:\n raise exception.SourceNotFound(e)\n except Exception as e:\n raise exception.ServiceUnavailable(e)\n\n def list(self):\n try:\n prefix = self.attributes.get('prefix')\n prefix = prefix.replace('/', '')\n bucket = self.__conn.get_bucket(prefix, validate=True)\n except boto.exception.S3ResponseError as e:\n raise e\n 
return bucket.list()\n", "path": "lib/rucio/rse/protocols/s3boto.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2014-2021 CERN\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Wen Guan <[email protected]>, 2014-2017\n# - Vincent Garonne <[email protected]>, 2016-2017\n# - Martin Barisits <[email protected]>, 2016-2017\n# - Nicolo Magini <[email protected]>, 2018\n# - Hannes Hansen <[email protected]>, 2019\n# - Tomas Javurek <[email protected]>, 2020\n# - Mario Lassnig <[email protected]>, 2021\n\nimport os\ntry:\n # PY2\n import urlparse\nexcept ImportError:\n # PY3\n import urllib.parse as urlparse\nimport logging\n\nimport boto\nfrom boto import connect_s3\nfrom boto.s3.connection import OrdinaryCallingFormat\nfrom boto.s3.key import Key\n\nfrom rucio.common import exception\nfrom rucio.common.config import get_rse_credentials\n\nfrom rucio.rse.protocols import protocol\n\nlogging.getLogger('boto').setLevel(logging.INFO)\n\n\nclass Default(protocol.RSEProtocol):\n \"\"\" Implementing access to RSEs using the S3 protocol.\"\"\"\n\n def __init__(self, protocol_attr, rse_settings, logger=None):\n super(Default, self).__init__(protocol_attr, rse_settings, logger=logger)\n if 'determinism_type' in self.attributes:\n self.attributes['determinism_type'] = 's3'\n self.__conn = None\n self.renaming = False\n self.overwrite = True\n self.http_proxy = os.environ.get(\"http_proxy\")\n self.https_proxy = os.environ.get(\"https_proxy\")\n\n def _disable_http_proxy(self):\n \"\"\"\n Disable http and https proxy if exists.\n \"\"\"\n if self.http_proxy:\n del os.environ['http_proxy']\n if self.https_proxy:\n del os.environ['https_proxy']\n\n def _reset_http_proxy(self):\n \"\"\"\n Reset http and https proxy if exists.\n \"\"\"\n if self.http_proxy:\n os.environ['http_proxy'] = self.http_proxy\n if self.https_proxy:\n os.environ['https_proxy'] = self.https_proxy\n\n def get_bucket_key_name(self, pfn):\n \"\"\"\n Gets boto key for a pfn\n\n :param pfn: Physical file name\n\n :returns: bucket name and key name as string\n \"\"\"\n try:\n parsed = urlparse.urlparse(pfn)\n hash_path = parsed.path.strip(\"/\")\n\n pos = hash_path.index(\"/\")\n bucket_name = hash_path[:pos]\n key_name = hash_path[pos + 1:]\n\n return bucket_name, key_name\n except Exception as e:\n raise exception.RucioException(str(e))\n\n def get_bucket_key(self, pfn, create=False, validate=True):\n \"\"\"\n Gets boto key for a pfn\n\n :param pfn: Physical file name\n :param create: True if needs to create the key, False if not\n\n :returns: boto bucket and key object\n \"\"\"\n try:\n bucket_name, key_name = self.get_bucket_key_name(pfn)\n\n if create:\n try:\n bucket = self.__conn.get_bucket(bucket_name, validate=True)\n except boto.exception.S3ResponseError as e:\n if e.status == 404: # bucket not found\n bucket = self.__conn.create_bucket(bucket_name)\n else:\n raise e\n key = Key(bucket, key_name)\n else:\n bucket = self.__conn.get_bucket(bucket_name, validate=False)\n key = 
bucket.get_key(key_name, validate=validate)\n return bucket, key\n except boto.exception.S3ResponseError as e:\n if e.status == 404:\n raise exception.SourceNotFound(str(e))\n else:\n raise exception.ServiceUnavailable(e)\n\n def exists(self, path):\n \"\"\"\n Checks if the requested file is known by the referred RSE.\n\n :param path: Physical file name\n\n :returns: True if the file exists, False if it doesn't\n\n :raises SourceNotFound: if the source file was not found on the referred storage.\n \"\"\"\n try:\n bucket, key = self.get_bucket_key(path)\n if key:\n return True\n else:\n return False\n except exception.SourceNotFound:\n return False\n except Exception as e:\n raise exception.ServiceUnavailable(e)\n\n def connect(self):\n \"\"\"\n Establishes the actual connection to the referred RSE.\n\n :param: credentials needed to establish a connection with the stroage.\n\n :raises RSEAccessDenied: if no connection could be established.\n \"\"\"\n try:\n scheme, prefix = self.attributes.get('scheme'), self.attributes.get('prefix')\n netloc, port = self.attributes['hostname'], self.attributes.get('port', 80)\n service_url = '%(scheme)s://%(netloc)s:%(port)s' % locals()\n\n access_key, secret_key, is_secure = None, None, None\n if 'S3_ACCESS_KEY' in os.environ:\n access_key = os.environ['S3_ACCESS_KEY']\n if 'S3_SECRET_KEY' in os.environ:\n secret_key = os.environ['S3_SECRET_KEY']\n if 'S3_IS_SECURE' in os.environ:\n if str(os.environ['S3_IS_SECURE']).lower() == 'true':\n is_secure = True\n elif str(os.environ['S3_IS_SECURE']).lower() == 'false':\n is_secure = False\n\n if is_secure is None or access_key is None or secret_key is None:\n credentials = get_rse_credentials()\n self.rse['credentials'] = credentials.get(self.rse['rse'])\n\n if not access_key:\n access_key = self.rse['credentials']['access_key']\n if not secret_key:\n secret_key = self.rse['credentials']['secret_key']\n if not is_secure:\n is_secure = self.rse['credentials'].get('is_secure', {}).\\\n get(service_url, False)\n\n self._disable_http_proxy()\n self.__conn = connect_s3(host=self.attributes['hostname'],\n port=int(port),\n aws_access_key_id=access_key,\n aws_secret_access_key=secret_key,\n is_secure=is_secure,\n calling_format=OrdinaryCallingFormat())\n self._reset_http_proxy()\n except Exception as e:\n self._reset_http_proxy()\n raise exception.RSEAccessDenied(e)\n\n def close(self):\n \"\"\" Closes the connection to RSE.\"\"\"\n pass\n\n def get(self, pfn, dest, transfer_timeout=None):\n \"\"\"\n Provides access to files stored inside connected the RSE.\n\n :param path: Physical file name of requested file\n :param dest: Name and path of the files when stored at the client\n :param transfer_timeout: Transfer timeout (in seconds) - dummy\n\n :raises DestinationNotAccessible: if the destination storage was not accessible.\n :raises ServiceUnavailable: if some generic error occured in the library.\n :raises SourceNotFound: if the source file was not found on the referred storage.\n \"\"\"\n try:\n bucket, key = self.get_bucket_key(pfn, validate=False)\n if key is None:\n raise exception.SourceNotFound('Cannot get the source key from S3')\n key.get_contents_to_filename(dest)\n except IOError as e:\n if e.errno == 2:\n raise exception.DestinationNotAccessible(e)\n else:\n raise exception.ServiceUnavailable(e)\n except boto.exception.S3ResponseError as e:\n if e.status == 404:\n raise exception.SourceNotFound(str(e))\n else:\n raise exception.ServiceUnavailable(e)\n except exception.SourceNotFound as e:\n raise 
exception.SourceNotFound(e)\n except Exception as e:\n if os.path.exists(dest):\n os.remove(dest)\n raise exception.ServiceUnavailable(e)\n\n def put(self, source, target, source_dir=None, transfer_timeout=None):\n \"\"\"\n Allows to store files inside the referred RSE.\n\n :param source: path to the source file on the client file system\n :param target: path to the destination file on the storage\n :param source_dir: Path where the to be transferred files are stored in the local file system\n :param transfer_timeout: Transfer timeout (in seconds) - dummy\n\n :raises DestinationNotAccessible: if the destination storage was not accessible.\n :raises ServiceUnavailable: if some generic error occured in the library.\n :raises SourceNotFound: if the source file was not found on the referred storage.\n \"\"\"\n full_name = source_dir + '/' + source if source_dir else source\n try:\n bucket, key = self.get_bucket_key(target, validate=False)\n if key is None:\n raise exception.DestinationNotAccessible('Cannot get the destionation key from S3')\n key.set_contents_from_filename(full_name)\n except boto.exception.S3ResponseError as e:\n if e.status == 404 and 'NoSuchBucket' in e.error_code:\n try:\n bucket, key = self.get_bucket_key(target, create=True)\n key.set_contents_from_filename(full_name)\n except Exception as e:\n raise exception.ServiceUnavailable(e)\n else:\n raise exception.ServiceUnavailable(e)\n except exception.SourceNotFound as e:\n raise exception.SourceNotFound(e)\n except Exception as e:\n if 'No such file' in str(e):\n raise exception.SourceNotFound(e)\n else:\n raise exception.ServiceUnavailable(e)\n\n def delete(self, pfn):\n \"\"\"\n Deletes a file from the connected RSE.\n\n :param path: path to the to be deleted file\n\n :raises ServiceUnavailable: if some generic error occured in the library.\n :raises SourceNotFound: if the source file was not found on the referred storage.\n \"\"\"\n try:\n bucket, key = self.get_bucket_key(pfn)\n if key is None:\n raise exception.SourceNotFound('Cannot get the key from S3')\n key.delete()\n except exception.SourceNotFound as e:\n raise exception.SourceNotFound(e)\n except Exception as e:\n raise exception.ServiceUnavailable(e)\n\n def rename(self, pfn, new_pfn):\n \"\"\" Allows to rename a file stored inside the connected RSE.\n\n :param path: path to the current file on the storage\n :param new_path: path to the new file on the storage\n\n :raises DestinationNotAccessible: if the destination storage was not accessible.\n :raises ServiceUnavailable: if some generic error occured in the library.\n :raises SourceNotFound: if the source file was not found on the referred storage.\n \"\"\"\n try:\n bucket, key = self.get_bucket_key(pfn)\n if key is None:\n raise exception.SourceNotFound('Cannot get the source key from S3')\n bucket_name, key_name = self.get_bucket_key_name(new_pfn)\n key.copy(bucket_name, key_name)\n key.delete()\n except exception.SourceNotFound as e:\n raise exception.SourceNotFound(e)\n except boto.exception.S3ResponseError as e:\n if e.status in [404, 403]:\n raise exception.DestinationNotAccessible(e)\n else:\n raise exception.ServiceUnavailable(e)\n except Exception as e:\n raise exception.ServiceUnavailable(e)\n\n def stat(self, pfn):\n \"\"\" Determines the file size in bytes of the provided file.\n\n :param pfn: The PFN the file.\n\n :returns: a dict containing the key filesize.\n \"\"\"\n try:\n bucket, key = self.get_bucket_key(pfn)\n if key is None:\n raise exception.SourceNotFound('Cannot get the key from S3')\n 
return {'filesize': int(key.size)}\n except exception.SourceNotFound as e:\n raise exception.SourceNotFound(e)\n except Exception as e:\n raise exception.ServiceUnavailable(e)\n\n def list(self):\n try:\n prefix = self.attributes.get('prefix')\n prefix = prefix.replace('/', '')\n bucket = self.__conn.get_bucket(prefix, validate=True)\n except boto.exception.S3ResponseError as e:\n raise e\n return bucket.list()\n", "path": "lib/rucio/rse/protocols/s3boto.py"}]} | 4,034 | 671 |
gh_patches_debug_7791 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-2477 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improvement for bond parser
Currently, the bond parser does not support checking whether the primary slave is configured with "primary_reselect=failure".
Parser PR: https://github.com/RedHatInsights/insights-core/pull/2477
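For context, the "Primary Slave" line in `/proc/net/bonding/bond0` carries the reselect policy in parentheses (e.g. `Primary Slave: eth0 (primary_reselect failure)`). The sketch below is illustrative only (the sample line is an assumption, not taken from the parser's test data); it shows why keeping only the first whitespace-separated token drops the policy:

```python
# Illustrative sketch: how the two parsing strategies treat a sample
# "Primary Slave" line from /proc/net/bonding (sample value assumed).
line = "Primary Slave: eth0 (primary_reselect failure)"

value = line.split(":", 1)[1]
print(value.split()[0])  # 'eth0' -> the primary_reselect setting is lost
print(value.strip())     # 'eth0 (primary_reselect failure)' -> setting retained
```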
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `insights/parsers/bond.py`
Content:
```
1 """
2 Bond - file ``/proc/net/bonding``
3 =================================
4
5 Provides plugins access to the network bonding information gathered from
6 all the files starteing with "bond." located in the
7 ``/proc/net/bonding`` directory.
8
9 Typical content of ``bond.*`` file is::
10
11 Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
12
13 Bonding Mode: IEEE 802.3ad Dynamic link aggregation
14 Transmit Hash Policy: layer2 (0)
15 MII Status: up
16 MII Polling Interval (ms): 500
17 Up Delay (ms): 0
18 Down Delay (ms): 0
19
20 802.3ad info
21 LACP rate: slow
22 Active Aggregator Info:
23 Aggregator ID: 3
24 Number of ports: 1
25 Actor Key: 17
26 Partner Key: 1
27 Partner Mac Address: 00:00:00:00:00:00
28
29 Slave Interface: eth1
30 MII Status: up
31 Link Failure Count: 0
32 Permanent HW addr: 00:16:35:5e:42:fc
33 Aggregator ID: 3
34
35 Slave Interface: eth2
36 MII Status: up
37 Link Failure Count: 0
38 Permanent HW addr: 00:16:35:5e:02:7e
39 Aggregator ID: 2
40
41 Data is modeled as an array of ``Bond`` objects (``bond`` being a
42 pattern file specification gathering data from files located in
43 ``/proc/net/bonding``.
44
45 Examples:
46 >>> type(bond_info)
47 <class 'insights.parsers.bond.Bond'>
48 >>> bond_info.bond_mode
49 '4'
50 >>> bond_info.partner_mac_address
51 '00:00:00:00:00:00'
52 >>> bond_info.slave_interface
53 ['eth1', 'eth2']
54 >>> bond_info.aggregator_id
55 ['3', '3', '2']
56 >>> bond_info.xmit_hash_policy
57 'layer2'
58 >>> bond_info.active_slave
59 >>> bond_info.slave_duplex
60 ['full', 'full']
61 >>> bond_info.slave_speed
62 ['1000 Mbps', '1000 Mbps']
63 """
64
65 from insights import Parser, parser, get_active_lines
66 from insights.specs import Specs
67 from insights.parsers import ParseException
68
69
70 """dict: bonding mode parameter string linked to bond type index."""
71 BOND_PREFIX_MAP = {
72 'load balancing (round-robin)': '0',
73 'fault-tolerance (active-backup)': '1',
74 'fault-tolerance (active-backup) (fail_over_mac active)': '1',
75 'load balancing (xor)': '2',
76 'fault-tolerance (broadcast)': '3',
77 'IEEE 802.3ad Dynamic link aggregation': '4',
78 'transmit load balancing': '5',
79 'adaptive load balancing': '6'
80 }
81
82
83 @parser(Specs.bond)
84 class Bond(Parser):
85 """
86 Models the ``/proc/net/bonding`` file.
87
88 Currently used information from ``/proc/net/bonding`` includes
89 the "bond mode" and "partner mac address".
90 """
91
92 def parse_content(self, content):
93 self._bond_mode = None
94 self._partner_mac_address = None
95 self._active_slave = None
96 self.xmit_hash_policy = None
97 self._arp_polling_interval = None
98 self._arp_ip_target = None
99 self._slave_interface = []
100 self._aggregator_id = []
101 self._mii_status = []
102 self._slave_link_failure_count = []
103 self._slave_speed = []
104 self._slave_duplex = []
105 self._primary_slave = None
106
107 for line in get_active_lines(content):
108 if line.startswith("Bonding Mode: "):
109 raw_mode = line.split(":", 1)[1].strip()
110 self._bond_mode = raw_mode
111 if raw_mode in BOND_PREFIX_MAP:
112 self._bond_mode = BOND_PREFIX_MAP[raw_mode]
113 else:
114 raise ParseException("Unrecognised bonding mode '{b}'".format(b=raw_mode))
115 elif line.startswith("Partner Mac Address: "):
116 self._partner_mac_address = line.split(":", 1)[1].strip()
117 elif line.startswith("Slave Interface: "):
118 self._slave_interface.append(line.split(":", 1)[1].strip())
119 elif line.strip().startswith("Aggregator ID: "):
120 self._aggregator_id.append(line.strip().split(':', 1)[1].strip())
121 elif line.strip().startswith("Transmit Hash Policy"):
122 # No need of values in bracket:
123 # Integer notification (0), (1), (2) of layer2, layer3+4, layer2+3 resp
124 self.xmit_hash_policy = line.split(":", 1)[1].split()[0]
125 elif line.strip().startswith("Currently Active Slave"):
126 self._active_slave = line.split(":", 1)[1].split()[0]
127 elif line.strip().startswith("MII Status: "):
128 self._mii_status.append(line.strip().split(':', 1)[1].strip())
129 elif line.strip().startswith("Link Failure Count: "):
130 self._slave_link_failure_count.append(line.strip().split(':', 1)[1].strip())
131 elif line.strip().startswith("Speed: "):
132 self._slave_speed.append(line.strip().split(':', 1)[1].strip())
133 elif line.strip().startswith("Duplex: "):
134 self._slave_duplex.append(line.strip().split(':', 1)[1].strip())
135 elif line.strip().startswith("ARP Polling Interval (ms):"):
136 self._arp_polling_interval = line.strip().split(':', 1)[1].strip()
137 elif line.strip().startswith("ARP IP target/s (n.n.n.n form):"):
138 self._arp_ip_target = line.strip().split(':', 1)[1].strip()
139 elif line.strip().startswith("Primary Slave"):
140 self._primary_slave = line.split(":", 1)[1].split()[0]
141
142 @property
143 def bond_mode(self):
144 """Returns the bond mode number as a string, or if there is no
145 known mapping to a number, the raw "Bonding Mode" value.
146 ``None`` is returned if no "Bonding Mode" key is found.
147 """
148 return self._bond_mode
149
150 @property
151 def partner_mac_address(self):
152 """Returns the value of the "Partner Mac Address" in the bond
153 file if the key/value exists. If the key is not in the bond
154 file, ``None`` is returned.
155 """
156 return self._partner_mac_address
157
158 @property
159 def slave_interface(self):
160 """Returns all the slave interfaces of in the bond file wrapped
161 a list if the key/value exists. If the key is not in the
162 bond file, ``[]`` is returned.
163 """
164 return self._slave_interface
165
166 @property
167 def aggregator_id(self):
168 """Returns all the aggregator id of in the bond file wrapped
169 a list if the key/value exists. If the key is not in the
170 bond file, ``[]`` is returned.
171 """
172 return self._aggregator_id
173
174 @property
175 def active_slave(self):
176 """Returns the active slave of the "Currently Active Slave" in the bond
177 file if key/value exists. If the key is not in the bond file, ``None``
178 is returned.
179 """
180 return self._active_slave
181
182 @property
183 def mii_status(self):
184 """Returns the master and all the slaves "MII Status" value in the bond file wrapped
185 a list if the key/value exists. If the key is not in the
186 bond file, ``[]`` is returned.
187 """
188 return self._mii_status
189
190 @property
191 def slave_link_failure_count(self):
192 """Returns all the slaves "Link Failure Count" value in the bond file wrapped
193 a list if the key/value exists. If the key is not in the
194 bond file, ``[]`` is returned.
195 """
196 return self._slave_link_failure_count
197
198 @property
199 def slave_speed(self):
200 """Returns all the slaves "Speed" value in the bond file wrapped
201 a list if the key/value exists. If the key is not in the
202 bond file, ``[]`` is returned.
203 """
204 return self._slave_speed
205
206 @property
207 def slave_duplex(self):
208 """Returns all the slave "Duplex" value in the bond file wrapped
209 a list if the key/value exists. If the key is not in the
210 bond file, ``[]`` is returned.
211 """
212 return self._slave_duplex
213
214 @property
215 def arp_polling_interval(self):
216 """Returns the arp polling interval as a string. ``None`` is returned
217 if no "ARP Polling Interval (ms)" key is found.
218 """
219 return self._arp_polling_interval
220
221 @property
222 def arp_ip_target(self):
223 """Returns the arp ip target as a string. ``None`` is returned
224 if no "ARP IP target/s (n.n.n.n form)" key is found.
225 """
226 return self._arp_ip_target
227
228 @property
229 def primary_slave(self):
230 """Returns the "Primary Slave" in the bond file if key/value exists.
231 If the key is not in the bond file, ``None`` is returned.
232 """
233 return self._primary_slave
234
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/insights/parsers/bond.py b/insights/parsers/bond.py
--- a/insights/parsers/bond.py
+++ b/insights/parsers/bond.py
@@ -137,7 +137,7 @@
elif line.strip().startswith("ARP IP target/s (n.n.n.n form):"):
self._arp_ip_target = line.strip().split(':', 1)[1].strip()
elif line.strip().startswith("Primary Slave"):
- self._primary_slave = line.split(":", 1)[1].split()[0]
+ self._primary_slave = line.split(":", 1)[1].strip()
@property
def bond_mode(self):
| {"golden_diff": "diff --git a/insights/parsers/bond.py b/insights/parsers/bond.py\n--- a/insights/parsers/bond.py\n+++ b/insights/parsers/bond.py\n@@ -137,7 +137,7 @@\n elif line.strip().startswith(\"ARP IP target/s (n.n.n.n form):\"):\n self._arp_ip_target = line.strip().split(':', 1)[1].strip()\n elif line.strip().startswith(\"Primary Slave\"):\n- self._primary_slave = line.split(\":\", 1)[1].split()[0]\n+ self._primary_slave = line.split(\":\", 1)[1].strip()\n \n @property\n def bond_mode(self):\n", "issue": "Imprprovement for bond parser\nCurrently, the bond parser is not supporting to check if the primary slave is configured with \"primary_reselect=failure\".\r\n\r\nParser PR: https://github.com/RedHatInsights/insights-core/pull/2477\r\n\n", "before_files": [{"content": "\"\"\"\nBond - file ``/proc/net/bonding``\n=================================\n\nProvides plugins access to the network bonding information gathered from\nall the files starteing with \"bond.\" located in the\n``/proc/net/bonding`` directory.\n\nTypical content of ``bond.*`` file is::\n\n Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)\n\n Bonding Mode: IEEE 802.3ad Dynamic link aggregation\n Transmit Hash Policy: layer2 (0)\n MII Status: up\n MII Polling Interval (ms): 500\n Up Delay (ms): 0\n Down Delay (ms): 0\n\n 802.3ad info\n LACP rate: slow\n Active Aggregator Info:\n Aggregator ID: 3\n Number of ports: 1\n Actor Key: 17\n Partner Key: 1\n Partner Mac Address: 00:00:00:00:00:00\n\n Slave Interface: eth1\n MII Status: up\n Link Failure Count: 0\n Permanent HW addr: 00:16:35:5e:42:fc\n Aggregator ID: 3\n\n Slave Interface: eth2\n MII Status: up\n Link Failure Count: 0\n Permanent HW addr: 00:16:35:5e:02:7e\n Aggregator ID: 2\n\nData is modeled as an array of ``Bond`` objects (``bond`` being a\npattern file specification gathering data from files located in\n``/proc/net/bonding``.\n\nExamples:\n >>> type(bond_info)\n <class 'insights.parsers.bond.Bond'>\n >>> bond_info.bond_mode\n '4'\n >>> bond_info.partner_mac_address\n '00:00:00:00:00:00'\n >>> bond_info.slave_interface\n ['eth1', 'eth2']\n >>> bond_info.aggregator_id\n ['3', '3', '2']\n >>> bond_info.xmit_hash_policy\n 'layer2'\n >>> bond_info.active_slave\n >>> bond_info.slave_duplex\n ['full', 'full']\n >>> bond_info.slave_speed\n ['1000 Mbps', '1000 Mbps']\n\"\"\"\n\nfrom insights import Parser, parser, get_active_lines\nfrom insights.specs import Specs\nfrom insights.parsers import ParseException\n\n\n\"\"\"dict: bonding mode parameter string linked to bond type index.\"\"\"\nBOND_PREFIX_MAP = {\n 'load balancing (round-robin)': '0',\n 'fault-tolerance (active-backup)': '1',\n 'fault-tolerance (active-backup) (fail_over_mac active)': '1',\n 'load balancing (xor)': '2',\n 'fault-tolerance (broadcast)': '3',\n 'IEEE 802.3ad Dynamic link aggregation': '4',\n 'transmit load balancing': '5',\n 'adaptive load balancing': '6'\n}\n\n\n@parser(Specs.bond)\nclass Bond(Parser):\n \"\"\"\n Models the ``/proc/net/bonding`` file.\n\n Currently used information from ``/proc/net/bonding`` includes\n the \"bond mode\" and \"partner mac address\".\n \"\"\"\n\n def parse_content(self, content):\n self._bond_mode = None\n self._partner_mac_address = None\n self._active_slave = None\n self.xmit_hash_policy = None\n self._arp_polling_interval = None\n self._arp_ip_target = None\n self._slave_interface = []\n self._aggregator_id = []\n self._mii_status = []\n self._slave_link_failure_count = []\n self._slave_speed = []\n self._slave_duplex = []\n self._primary_slave 
= None\n\n for line in get_active_lines(content):\n if line.startswith(\"Bonding Mode: \"):\n raw_mode = line.split(\":\", 1)[1].strip()\n self._bond_mode = raw_mode\n if raw_mode in BOND_PREFIX_MAP:\n self._bond_mode = BOND_PREFIX_MAP[raw_mode]\n else:\n raise ParseException(\"Unrecognised bonding mode '{b}'\".format(b=raw_mode))\n elif line.startswith(\"Partner Mac Address: \"):\n self._partner_mac_address = line.split(\":\", 1)[1].strip()\n elif line.startswith(\"Slave Interface: \"):\n self._slave_interface.append(line.split(\":\", 1)[1].strip())\n elif line.strip().startswith(\"Aggregator ID: \"):\n self._aggregator_id.append(line.strip().split(':', 1)[1].strip())\n elif line.strip().startswith(\"Transmit Hash Policy\"):\n # No need of values in bracket:\n # Integer notification (0), (1), (2) of layer2, layer3+4, layer2+3 resp\n self.xmit_hash_policy = line.split(\":\", 1)[1].split()[0]\n elif line.strip().startswith(\"Currently Active Slave\"):\n self._active_slave = line.split(\":\", 1)[1].split()[0]\n elif line.strip().startswith(\"MII Status: \"):\n self._mii_status.append(line.strip().split(':', 1)[1].strip())\n elif line.strip().startswith(\"Link Failure Count: \"):\n self._slave_link_failure_count.append(line.strip().split(':', 1)[1].strip())\n elif line.strip().startswith(\"Speed: \"):\n self._slave_speed.append(line.strip().split(':', 1)[1].strip())\n elif line.strip().startswith(\"Duplex: \"):\n self._slave_duplex.append(line.strip().split(':', 1)[1].strip())\n elif line.strip().startswith(\"ARP Polling Interval (ms):\"):\n self._arp_polling_interval = line.strip().split(':', 1)[1].strip()\n elif line.strip().startswith(\"ARP IP target/s (n.n.n.n form):\"):\n self._arp_ip_target = line.strip().split(':', 1)[1].strip()\n elif line.strip().startswith(\"Primary Slave\"):\n self._primary_slave = line.split(\":\", 1)[1].split()[0]\n\n @property\n def bond_mode(self):\n \"\"\"Returns the bond mode number as a string, or if there is no\n known mapping to a number, the raw \"Bonding Mode\" value.\n ``None`` is returned if no \"Bonding Mode\" key is found.\n \"\"\"\n return self._bond_mode\n\n @property\n def partner_mac_address(self):\n \"\"\"Returns the value of the \"Partner Mac Address\" in the bond\n file if the key/value exists. If the key is not in the bond\n file, ``None`` is returned.\n \"\"\"\n return self._partner_mac_address\n\n @property\n def slave_interface(self):\n \"\"\"Returns all the slave interfaces of in the bond file wrapped\n a list if the key/value exists. If the key is not in the\n bond file, ``[]`` is returned.\n \"\"\"\n return self._slave_interface\n\n @property\n def aggregator_id(self):\n \"\"\"Returns all the aggregator id of in the bond file wrapped\n a list if the key/value exists. If the key is not in the\n bond file, ``[]`` is returned.\n \"\"\"\n return self._aggregator_id\n\n @property\n def active_slave(self):\n \"\"\"Returns the active slave of the \"Currently Active Slave\" in the bond\n file if key/value exists. If the key is not in the bond file, ``None``\n is returned.\n \"\"\"\n return self._active_slave\n\n @property\n def mii_status(self):\n \"\"\"Returns the master and all the slaves \"MII Status\" value in the bond file wrapped\n a list if the key/value exists. If the key is not in the\n bond file, ``[]`` is returned.\n \"\"\"\n return self._mii_status\n\n @property\n def slave_link_failure_count(self):\n \"\"\"Returns all the slaves \"Link Failure Count\" value in the bond file wrapped\n a list if the key/value exists. 
If the key is not in the\n bond file, ``[]`` is returned.\n \"\"\"\n return self._slave_link_failure_count\n\n @property\n def slave_speed(self):\n \"\"\"Returns all the slaves \"Speed\" value in the bond file wrapped\n a list if the key/value exists. If the key is not in the\n bond file, ``[]`` is returned.\n \"\"\"\n return self._slave_speed\n\n @property\n def slave_duplex(self):\n \"\"\"Returns all the slave \"Duplex\" value in the bond file wrapped\n a list if the key/value exists. If the key is not in the\n bond file, ``[]`` is returned.\n \"\"\"\n return self._slave_duplex\n\n @property\n def arp_polling_interval(self):\n \"\"\"Returns the arp polling interval as a string. ``None`` is returned\n if no \"ARP Polling Interval (ms)\" key is found.\n \"\"\"\n return self._arp_polling_interval\n\n @property\n def arp_ip_target(self):\n \"\"\"Returns the arp ip target as a string. ``None`` is returned\n if no \"ARP IP target/s (n.n.n.n form)\" key is found.\n \"\"\"\n return self._arp_ip_target\n\n @property\n def primary_slave(self):\n \"\"\"Returns the \"Primary Slave\" in the bond file if key/value exists.\n If the key is not in the bond file, ``None`` is returned.\n \"\"\"\n return self._primary_slave\n", "path": "insights/parsers/bond.py"}], "after_files": [{"content": "\"\"\"\nBond - file ``/proc/net/bonding``\n=================================\n\nProvides plugins access to the network bonding information gathered from\nall the files starteing with \"bond.\" located in the\n``/proc/net/bonding`` directory.\n\nTypical content of ``bond.*`` file is::\n\n Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)\n\n Bonding Mode: IEEE 802.3ad Dynamic link aggregation\n Transmit Hash Policy: layer2 (0)\n MII Status: up\n MII Polling Interval (ms): 500\n Up Delay (ms): 0\n Down Delay (ms): 0\n\n 802.3ad info\n LACP rate: slow\n Active Aggregator Info:\n Aggregator ID: 3\n Number of ports: 1\n Actor Key: 17\n Partner Key: 1\n Partner Mac Address: 00:00:00:00:00:00\n\n Slave Interface: eth1\n MII Status: up\n Link Failure Count: 0\n Permanent HW addr: 00:16:35:5e:42:fc\n Aggregator ID: 3\n\n Slave Interface: eth2\n MII Status: up\n Link Failure Count: 0\n Permanent HW addr: 00:16:35:5e:02:7e\n Aggregator ID: 2\n\nData is modeled as an array of ``Bond`` objects (``bond`` being a\npattern file specification gathering data from files located in\n``/proc/net/bonding``.\n\nExamples:\n >>> type(bond_info)\n <class 'insights.parsers.bond.Bond'>\n >>> bond_info.bond_mode\n '4'\n >>> bond_info.partner_mac_address\n '00:00:00:00:00:00'\n >>> bond_info.slave_interface\n ['eth1', 'eth2']\n >>> bond_info.aggregator_id\n ['3', '3', '2']\n >>> bond_info.xmit_hash_policy\n 'layer2'\n >>> bond_info.active_slave\n >>> bond_info.slave_duplex\n ['full', 'full']\n >>> bond_info.slave_speed\n ['1000 Mbps', '1000 Mbps']\n\"\"\"\n\nfrom insights import Parser, parser, get_active_lines\nfrom insights.specs import Specs\nfrom insights.parsers import ParseException\n\n\n\"\"\"dict: bonding mode parameter string linked to bond type index.\"\"\"\nBOND_PREFIX_MAP = {\n 'load balancing (round-robin)': '0',\n 'fault-tolerance (active-backup)': '1',\n 'fault-tolerance (active-backup) (fail_over_mac active)': '1',\n 'load balancing (xor)': '2',\n 'fault-tolerance (broadcast)': '3',\n 'IEEE 802.3ad Dynamic link aggregation': '4',\n 'transmit load balancing': '5',\n 'adaptive load balancing': '6'\n}\n\n\n@parser(Specs.bond)\nclass Bond(Parser):\n \"\"\"\n Models the ``/proc/net/bonding`` file.\n\n Currently used information 
from ``/proc/net/bonding`` includes\n the \"bond mode\" and \"partner mac address\".\n \"\"\"\n\n def parse_content(self, content):\n self._bond_mode = None\n self._partner_mac_address = None\n self._active_slave = None\n self.xmit_hash_policy = None\n self._arp_polling_interval = None\n self._arp_ip_target = None\n self._slave_interface = []\n self._aggregator_id = []\n self._mii_status = []\n self._slave_link_failure_count = []\n self._slave_speed = []\n self._slave_duplex = []\n self._primary_slave = None\n\n for line in get_active_lines(content):\n if line.startswith(\"Bonding Mode: \"):\n raw_mode = line.split(\":\", 1)[1].strip()\n self._bond_mode = raw_mode\n if raw_mode in BOND_PREFIX_MAP:\n self._bond_mode = BOND_PREFIX_MAP[raw_mode]\n else:\n raise ParseException(\"Unrecognised bonding mode '{b}'\".format(b=raw_mode))\n elif line.startswith(\"Partner Mac Address: \"):\n self._partner_mac_address = line.split(\":\", 1)[1].strip()\n elif line.startswith(\"Slave Interface: \"):\n self._slave_interface.append(line.split(\":\", 1)[1].strip())\n elif line.strip().startswith(\"Aggregator ID: \"):\n self._aggregator_id.append(line.strip().split(':', 1)[1].strip())\n elif line.strip().startswith(\"Transmit Hash Policy\"):\n # No need of values in bracket:\n # Integer notification (0), (1), (2) of layer2, layer3+4, layer2+3 resp\n self.xmit_hash_policy = line.split(\":\", 1)[1].split()[0]\n elif line.strip().startswith(\"Currently Active Slave\"):\n self._active_slave = line.split(\":\", 1)[1].split()[0]\n elif line.strip().startswith(\"MII Status: \"):\n self._mii_status.append(line.strip().split(':', 1)[1].strip())\n elif line.strip().startswith(\"Link Failure Count: \"):\n self._slave_link_failure_count.append(line.strip().split(':', 1)[1].strip())\n elif line.strip().startswith(\"Speed: \"):\n self._slave_speed.append(line.strip().split(':', 1)[1].strip())\n elif line.strip().startswith(\"Duplex: \"):\n self._slave_duplex.append(line.strip().split(':', 1)[1].strip())\n elif line.strip().startswith(\"ARP Polling Interval (ms):\"):\n self._arp_polling_interval = line.strip().split(':', 1)[1].strip()\n elif line.strip().startswith(\"ARP IP target/s (n.n.n.n form):\"):\n self._arp_ip_target = line.strip().split(':', 1)[1].strip()\n elif line.strip().startswith(\"Primary Slave\"):\n self._primary_slave = line.split(\":\", 1)[1].strip()\n\n @property\n def bond_mode(self):\n \"\"\"Returns the bond mode number as a string, or if there is no\n known mapping to a number, the raw \"Bonding Mode\" value.\n ``None`` is returned if no \"Bonding Mode\" key is found.\n \"\"\"\n return self._bond_mode\n\n @property\n def partner_mac_address(self):\n \"\"\"Returns the value of the \"Partner Mac Address\" in the bond\n file if the key/value exists. If the key is not in the bond\n file, ``None`` is returned.\n \"\"\"\n return self._partner_mac_address\n\n @property\n def slave_interface(self):\n \"\"\"Returns all the slave interfaces of in the bond file wrapped\n a list if the key/value exists. If the key is not in the\n bond file, ``[]`` is returned.\n \"\"\"\n return self._slave_interface\n\n @property\n def aggregator_id(self):\n \"\"\"Returns all the aggregator id of in the bond file wrapped\n a list if the key/value exists. If the key is not in the\n bond file, ``[]`` is returned.\n \"\"\"\n return self._aggregator_id\n\n @property\n def active_slave(self):\n \"\"\"Returns the active slave of the \"Currently Active Slave\" in the bond\n file if key/value exists. 
If the key is not in the bond file, ``None``\n is returned.\n \"\"\"\n return self._active_slave\n\n @property\n def mii_status(self):\n \"\"\"Returns the master and all the slaves \"MII Status\" value in the bond file wrapped\n a list if the key/value exists. If the key is not in the\n bond file, ``[]`` is returned.\n \"\"\"\n return self._mii_status\n\n @property\n def slave_link_failure_count(self):\n \"\"\"Returns all the slaves \"Link Failure Count\" value in the bond file wrapped\n a list if the key/value exists. If the key is not in the\n bond file, ``[]`` is returned.\n \"\"\"\n return self._slave_link_failure_count\n\n @property\n def slave_speed(self):\n \"\"\"Returns all the slaves \"Speed\" value in the bond file wrapped\n a list if the key/value exists. If the key is not in the\n bond file, ``[]`` is returned.\n \"\"\"\n return self._slave_speed\n\n @property\n def slave_duplex(self):\n \"\"\"Returns all the slave \"Duplex\" value in the bond file wrapped\n a list if the key/value exists. If the key is not in the\n bond file, ``[]`` is returned.\n \"\"\"\n return self._slave_duplex\n\n @property\n def arp_polling_interval(self):\n \"\"\"Returns the arp polling interval as a string. ``None`` is returned\n if no \"ARP Polling Interval (ms)\" key is found.\n \"\"\"\n return self._arp_polling_interval\n\n @property\n def arp_ip_target(self):\n \"\"\"Returns the arp ip target as a string. ``None`` is returned\n if no \"ARP IP target/s (n.n.n.n form)\" key is found.\n \"\"\"\n return self._arp_ip_target\n\n @property\n def primary_slave(self):\n \"\"\"Returns the \"Primary Slave\" in the bond file if key/value exists.\n If the key is not in the bond file, ``None`` is returned.\n \"\"\"\n return self._primary_slave\n", "path": "insights/parsers/bond.py"}]} | 3,040 | 155 |
gh_patches_debug_34396 | rasdani/github-patches | git_diff | Project-MONAI__MONAI-1341 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CheckpointSaver: incorrect labelling of final model
**Describe the bug**
When using a `CheckpointSaver` among the validation handlers, the argument `save_final=True` results in every epoch being stored under `net_final_iteration=1.pt` when `final_filename=None`. I interpret the docs (`default to checkpoint_final_iteration=N.pt`) to mean that the file will be renamed after each completed epoch.
**To Reproduce**
add `save_final=True` to the `CheckpointHandler` in the Lung Segmentation Tutorial
**Expected behavior**
`net_final_iteration=1.pt` -> `net_final_iteration=2.pt` -> ... -> `net_final_iteration=MAX_EPOCHS.pt`
**Screenshots**
The logging function also prints incorrect iteration numbers:
```
INFO:ignite.engine.engine.SupervisedEvaluator:Train completed, saved final checkpoint: net_final_iteration=1.pt
...
INFO:ignite.engine.engine.SupervisedEvaluator:Train completed, saved final checkpoint: net_final_iteration=1.pt
...
INFO:ignite.engine.engine.SupervisedEvaluator:Train completed, saved final checkpoint: net_final_iteration=1.pt
```
**Environment**
Ensuring you use the relevant python executable, please paste the output of:
```
MONAI version: 0.3.0+95.g535561e
Python version: 3.8.6 | packaged by conda-forge | (default, Oct 7 2020, 18:42:56) [Clang 10.0.1 ]
OS version: Darwin (19.6.0)
Numpy version: 1.19.2
Pytorch version: 1.7.0
MONAI flags: HAS_EXT = False, USE_COMPILED = False
Optional dependencies:
Pytorch Ignite version: 0.4.2
Nibabel version: 3.2.0
scikit-image version: NOT INSTALLED or UNKNOWN VERSION.
Pillow version: 8.0.1
Tensorboard version: 2.4.0
gdown version: NOT INSTALLED or UNKNOWN VERSION.
TorchVision version: 0.8.1
ITK version: NOT INSTALLED or UNKNOWN VERSION.
tqdm version: 4.53.0
lmdb version: NOT INSTALLED or UNKNOWN VERSION.
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
```
**Additional context**
Using `SupervisedEvaluator` and `SupervisedTrainer`.
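For reference, a minimal sketch of the handler configuration that exhibits the behaviour described above; the save directory and placeholder network are assumptions for illustration, not taken from the tutorial:

```python
# Minimal sketch (illustrative): CheckpointSaver attached as a validation handler
# with save_final=True and final_filename left as None. The expectation is a file
# named net_final_iteration=<N>.pt at the end of the run, not net_final_iteration=1.pt.
import torch
from monai.handlers import CheckpointSaver

net = torch.nn.Linear(1, 1)  # placeholder network for illustration

val_handlers = [
    CheckpointSaver(
        save_dir="./runs",          # assumed output directory
        save_dict={"net": net},
        save_final=True,            # final_filename=None -> default naming applies
    ),
]
```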
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monai/handlers/checkpoint_saver.py`
Content:
```
1 # Copyright 2020 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 import logging
13 from typing import TYPE_CHECKING, Dict, Optional
14
15 from monai.utils import exact_version, optional_import
16
17 Events, _ = optional_import("ignite.engine", "0.4.2", exact_version, "Events")
18 Checkpoint, _ = optional_import("ignite.handlers", "0.4.2", exact_version, "Checkpoint")
19 BaseSaveHandler, _ = optional_import("ignite.handlers.checkpoint", "0.4.2", exact_version, "BaseSaveHandler")
20
21 if TYPE_CHECKING:
22 from ignite.engine import Engine
23 from ignite.handlers import DiskSaver
24 else:
25 Engine, _ = optional_import("ignite.engine", "0.4.2", exact_version, "Engine")
26 DiskSaver, _ = optional_import("ignite.handlers", "0.4.2", exact_version, "DiskSaver")
27
28
29 class CheckpointSaver:
30 """
31 CheckpointSaver acts as an Ignite handler to save checkpoint data into files.
32 It supports to save according to metrics result, epoch number, iteration number
33 and last model or exception.
34
35 Args:
36 save_dir: the target directory to save the checkpoints.
37 save_dict: source objects that save to the checkpoint. examples::
38
39 {'network': net, 'optimizer': optimizer, 'lr_scheduler': lr_scheduler}
40
41 name: identifier of logging.logger to use, if None, defaulting to ``engine.logger``.
42 file_prefix: prefix for the filenames to which objects will be saved.
43 save_final: whether to save checkpoint or session at final iteration or exception.
44 If checkpoints are to be saved when an exception is raised, put this handler before
45 `StatsHandler` in the handler list, because the logic with Ignite can only trigger
46 the first attached handler for `EXCEPTION_RAISED` event.
47 final_filename: set a fixed filename to save the final model if `save_final=True`.
48 If None, default to `checkpoint_final_iteration=N.pt`.
49 save_key_metric: whether to save checkpoint or session when the value of key_metric is
50 higher than all the previous values during training.keep 4 decimal places of metric,
51 checkpoint name is: {file_prefix}_key_metric=0.XXXX.pth.
52 key_metric_name: the name of key_metric in ignite metrics dictionary.
53 If None, use `engine.state.key_metric` instead.
54 key_metric_n_saved: save top N checkpoints or sessions, sorted by the value of key
55 metric in descending order.
56 key_metric_filename: set a fixed filename to set the best metric model, if not None,
57 `key_metric_n_saved` should be 1 and only keep the best metric model.
58 epoch_level: save checkpoint during training for every N epochs or every N iterations.
59 `True` is epoch level, `False` is iteration level.
60 save_interval: save checkpoint every N epochs, default is 0 to save no checkpoint.
61 n_saved: save latest N checkpoints of epoch level or iteration level, 'None' is to save all.
62
63 Note:
64 CheckpointHandler can be used during training, validation or evaluation.
65 example of saved files:
66
67 - checkpoint_iteration=400.pt
68 - checkpoint_iteration=800.pt
69 - checkpoint_epoch=1.pt
70 - checkpoint_final_iteration=1000.pt
71 - checkpoint_key_metric=0.9387.pt
72
73 """
74
75 def __init__(
76 self,
77 save_dir: str,
78 save_dict: Dict,
79 name: Optional[str] = None,
80 file_prefix: str = "",
81 save_final: bool = False,
82 final_filename: Optional[str] = None,
83 save_key_metric: bool = False,
84 key_metric_name: Optional[str] = None,
85 key_metric_n_saved: int = 1,
86 key_metric_filename: Optional[str] = None,
87 epoch_level: bool = True,
88 save_interval: int = 0,
89 n_saved: Optional[int] = None,
90 ) -> None:
91 assert save_dir is not None, "must provide directory to save the checkpoints."
92 self.save_dir = save_dir
93 assert save_dict is not None and len(save_dict) > 0, "must provide source objects to save."
94 self.save_dict = save_dict
95 self.logger = logging.getLogger(name)
96 self.epoch_level = epoch_level
97 self.save_interval = save_interval
98 self._final_checkpoint = self._key_metric_checkpoint = self._interval_checkpoint = None
99 self._name = name
100
101 class _DiskSaver(DiskSaver):
102 """
103 Enhance the DiskSaver to support fixed filename.
104
105 """
106
107 def __init__(self, dirname: str, filename: Optional[str] = None):
108 super().__init__(dirname=dirname, require_empty=False)
109 self.filename = filename
110
111 def __call__(self, checkpoint: Dict, filename: str, metadata: Optional[Dict] = None) -> None:
112 if self.filename is not None:
113 filename = self.filename
114 super().__call__(checkpoint=checkpoint, filename=filename, metadata=metadata)
115
116 def remove(self, filename: str) -> None:
117 if self.filename is not None:
118 filename = self.filename
119 super().remove(filename=filename)
120
121 if save_final:
122
123 def _final_func(engine: Engine):
124 return engine.state.iteration
125
126 self._final_checkpoint = Checkpoint(
127 to_save=self.save_dict,
128 save_handler=_DiskSaver(dirname=self.save_dir, filename=final_filename),
129 filename_prefix=file_prefix,
130 score_function=_final_func,
131 score_name="final_iteration",
132 )
133
134 if save_key_metric:
135
136 def _score_func(engine: Engine):
137 if isinstance(key_metric_name, str):
138 metric_name = key_metric_name
139 elif hasattr(engine.state, "key_metric_name") and isinstance(engine.state.key_metric_name, str):
140 metric_name = engine.state.key_metric_name
141 else:
142 raise ValueError(
143 f"Incompatible values: save_key_metric=True and key_metric_name={key_metric_name}."
144 )
145 return round(engine.state.metrics[metric_name], 4)
146
147 if key_metric_filename is not None and key_metric_n_saved > 1:
148 raise ValueError("if using fixed filename to save the best metric model, we should only save 1 model.")
149
150 self._key_metric_checkpoint = Checkpoint(
151 to_save=self.save_dict,
152 save_handler=_DiskSaver(dirname=self.save_dir, filename=key_metric_filename),
153 filename_prefix=file_prefix,
154 score_function=_score_func,
155 score_name="key_metric",
156 n_saved=key_metric_n_saved,
157 )
158
159 if save_interval > 0:
160
161 def _interval_func(engine: Engine):
162 return engine.state.epoch if self.epoch_level else engine.state.iteration
163
164 self._interval_checkpoint = Checkpoint(
165 to_save=self.save_dict,
166 save_handler=_DiskSaver(dirname=self.save_dir),
167 filename_prefix=file_prefix,
168 score_function=_interval_func,
169 score_name="epoch" if self.epoch_level else "iteration",
170 n_saved=n_saved,
171 )
172
173 def attach(self, engine: Engine) -> None:
174 """
175 Args:
176 engine: Ignite Engine, it can be a trainer, validator or evaluator.
177 """
178 if self._name is None:
179 self.logger = engine.logger
180 if self._final_checkpoint is not None:
181 engine.add_event_handler(Events.COMPLETED, self.completed)
182 engine.add_event_handler(Events.EXCEPTION_RAISED, self.exception_raised)
183 if self._key_metric_checkpoint is not None:
184 engine.add_event_handler(Events.EPOCH_COMPLETED, self.metrics_completed)
185 if self._interval_checkpoint is not None:
186 if self.epoch_level:
187 engine.add_event_handler(Events.EPOCH_COMPLETED(every=self.save_interval), self.interval_completed)
188 else:
189 engine.add_event_handler(Events.ITERATION_COMPLETED(every=self.save_interval), self.interval_completed)
190
191 def completed(self, engine: Engine) -> None:
192 """Callback for train or validation/evaluation completed Event.
193 Save final checkpoint if configure save_final is True.
194
195 Args:
196 engine: Ignite Engine, it can be a trainer, validator or evaluator.
197 """
198 assert callable(self._final_checkpoint), "Error: _final_checkpoint function not specified."
199 self._final_checkpoint(engine)
200 assert self.logger is not None
201 assert hasattr(self.logger, "info"), "Error, provided logger has not info attribute."
202 self.logger.info(f"Train completed, saved final checkpoint: {self._final_checkpoint.last_checkpoint}")
203
204 def exception_raised(self, engine: Engine, e: Exception) -> None:
205 """Callback for train or validation/evaluation exception raised Event.
206 Save current data as final checkpoint if configure save_final is True. This callback may be skipped
207 because the logic with Ignite can only trigger the first attached handler for `EXCEPTION_RAISED` event.
208
209 Args:
210 engine: Ignite Engine, it can be a trainer, validator or evaluator.
211 e: the exception caught in Ignite during engine.run().
212 """
213 assert callable(self._final_checkpoint), "Error: _final_checkpoint function not specified."
214 self._final_checkpoint(engine)
215 assert self.logger is not None
216 assert hasattr(self.logger, "info"), "Error, provided logger has not info attribute."
217 self.logger.info(f"Exception_raised, saved exception checkpoint: {self._final_checkpoint.last_checkpoint}")
218 raise e
219
220 def metrics_completed(self, engine: Engine) -> None:
221 """Callback to compare metrics and save models in train or validation when epoch completed.
222
223 Args:
224 engine: Ignite Engine, it can be a trainer, validator or evaluator.
225 """
226 assert callable(self._key_metric_checkpoint), "Error: _key_metric_checkpoint function not specified."
227 self._key_metric_checkpoint(engine)
228
229 def interval_completed(self, engine: Engine) -> None:
230 """Callback for train epoch/iteration completed Event.
231 Save checkpoint if configure save_interval = N
232
233 Args:
234 engine: Ignite Engine, it can be a trainer, validator or evaluator.
235 """
236 assert callable(self._interval_checkpoint), "Error: _interval_checkpoint function not specified."
237 self._interval_checkpoint(engine)
238 assert self.logger is not None
239 assert hasattr(self.logger, "info"), "Error, provided logger has not info attribute."
240 if self.epoch_level:
241 self.logger.info(f"Saved checkpoint at epoch: {engine.state.epoch}")
242 else:
243 self.logger.info(f"Saved checkpoint at iteration: {engine.state.iteration}")
244
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/monai/handlers/checkpoint_saver.py b/monai/handlers/checkpoint_saver.py
--- a/monai/handlers/checkpoint_saver.py
+++ b/monai/handlers/checkpoint_saver.py
@@ -188,6 +188,13 @@
else:
engine.add_event_handler(Events.ITERATION_COMPLETED(every=self.save_interval), self.interval_completed)
+ def _delete_previous_final_ckpt(self):
+ saved = self._final_checkpoint._saved
+ if len(saved) > 0:
+ item = saved.pop(0)
+ self._final_checkpoint.save_handler.remove(item.filename)
+ self.logger.info(f"Deleted previous saved final checkpoint: {item.filename}")
+
def completed(self, engine: Engine) -> None:
"""Callback for train or validation/evaluation completed Event.
Save final checkpoint if configure save_final is True.
@@ -196,6 +203,8 @@
engine: Ignite Engine, it can be a trainer, validator or evaluator.
"""
assert callable(self._final_checkpoint), "Error: _final_checkpoint function not specified."
+ # delete previous saved final checkpoint if existing
+ self._delete_previous_final_ckpt()
self._final_checkpoint(engine)
assert self.logger is not None
assert hasattr(self.logger, "info"), "Error, provided logger has not info attribute."
@@ -211,6 +220,8 @@
e: the exception caught in Ignite during engine.run().
"""
assert callable(self._final_checkpoint), "Error: _final_checkpoint function not specified."
+ # delete previous saved final checkpoint if existing
+ self._delete_previous_final_ckpt()
self._final_checkpoint(engine)
assert self.logger is not None
assert hasattr(self.logger, "info"), "Error, provided logger has not info attribute."
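The patch works by popping the stale entry out of `Checkpoint._saved` and asking the save handler to remove the old file before the next final save. A minimal, dependency-free sketch of that remove-then-save pattern is shown below; the stub class only mimics the bookkeeping of `ignite.handlers.Checkpoint` and the enhanced `_DiskSaver`, so its names and shapes are assumptions for illustration, not MONAI code.

```python
from collections import namedtuple

Item = namedtuple("Item", "priority filename")


class StubFinalCheckpoint:
    """Stand-in for ignite's Checkpoint: `_saved` tracks files written to disk."""

    def __init__(self):
        self._saved = []
        self.files = set()

    def save(self, priority, filename):
        self.files.add(filename)
        self._saved.append(Item(priority, filename))

    def remove(self, filename):
        self.files.discard(filename)


def delete_previous_final(ckpt):
    # Same logic as the _delete_previous_final_ckpt helper added in the diff above.
    if len(ckpt._saved) > 0:
        item = ckpt._saved.pop(0)
        ckpt.remove(item.filename)


ckpt = StubFinalCheckpoint()
for iteration in (400, 800, 1200):   # successive COMPLETED events
    delete_previous_final(ckpt)      # drop the stale "final" checkpoint first
    ckpt.save(iteration, f"net_final_iteration={iteration}.pt")

print(ckpt.files)  # {'net_final_iteration=1200.pt'} -- only the latest file remains
```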
| {"golden_diff": "diff --git a/monai/handlers/checkpoint_saver.py b/monai/handlers/checkpoint_saver.py\n--- a/monai/handlers/checkpoint_saver.py\n+++ b/monai/handlers/checkpoint_saver.py\n@@ -188,6 +188,13 @@\n else:\n engine.add_event_handler(Events.ITERATION_COMPLETED(every=self.save_interval), self.interval_completed)\n \n+ def _delete_previous_final_ckpt(self):\n+ saved = self._final_checkpoint._saved\n+ if len(saved) > 0:\n+ item = saved.pop(0)\n+ self._final_checkpoint.save_handler.remove(item.filename)\n+ self.logger.info(f\"Deleted previous saved final checkpoint: {item.filename}\")\n+\n def completed(self, engine: Engine) -> None:\n \"\"\"Callback for train or validation/evaluation completed Event.\n Save final checkpoint if configure save_final is True.\n@@ -196,6 +203,8 @@\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n assert callable(self._final_checkpoint), \"Error: _final_checkpoint function not specified.\"\n+ # delete previous saved final checkpoint if existing\n+ self._delete_previous_final_ckpt()\n self._final_checkpoint(engine)\n assert self.logger is not None\n assert hasattr(self.logger, \"info\"), \"Error, provided logger has not info attribute.\"\n@@ -211,6 +220,8 @@\n e: the exception caught in Ignite during engine.run().\n \"\"\"\n assert callable(self._final_checkpoint), \"Error: _final_checkpoint function not specified.\"\n+ # delete previous saved final checkpoint if existing\n+ self._delete_previous_final_ckpt()\n self._final_checkpoint(engine)\n assert self.logger is not None\n assert hasattr(self.logger, \"info\"), \"Error, provided logger has not info attribute.\"\n", "issue": "CheckpointSaver: incorrect labelling of final model\n**Describe the bug**\r\nWhen using a `CheckpointSaver` among the validation handlers, the argument `save_final=True` results in every epoch being stored under `net_final_iteration=1.pt` when `final_filename=None`. I interpret the docs (`default to checkpoint_final_iteration=N.pt`) that the file will be renamed after each completed epoch.\r\n\r\n**To Reproduce**\r\nadd `save_final=True` to the `CheckpointHandler` in the Lung Segmentation Tutorial\r\n\r\n**Expected behavior**\r\n`net_final_iteration=1.pt` -> `net_final_iteration=2.pt` -> ... 
-> `net_final_iteration=MAX_EPOCHS.pt`\r\n\r\n**Screenshots**\r\nThe logging function also prints incorrect iteration numbers:\r\n\r\n```\r\nINFO:ignite.engine.engine.SupervisedEvaluator:Train completed, saved final checkpoint: net_final_iteration=1.pt\r\n...\r\nINFO:ignite.engine.engine.SupervisedEvaluator:Train completed, saved final checkpoint: net_final_iteration=1.pt\r\n...\r\nINFO:ignite.engine.engine.SupervisedEvaluator:Train completed, saved final checkpoint: net_final_iteration=1.pt\r\n```\r\n\r\n**Environment**\r\n\r\nEnsuring you use the relevant python executable, please paste the output of:\r\n\r\n```\r\nMONAI version: 0.3.0+95.g535561e\r\nPython version: 3.8.6 | packaged by conda-forge | (default, Oct 7 2020, 18:42:56) [Clang 10.0.1 ]\r\nOS version: Darwin (19.6.0)\r\nNumpy version: 1.19.2\r\nPytorch version: 1.7.0\r\nMONAI flags: HAS_EXT = False, USE_COMPILED = False\r\n\r\nOptional dependencies:\r\nPytorch Ignite version: 0.4.2\r\nNibabel version: 3.2.0\r\nscikit-image version: NOT INSTALLED or UNKNOWN VERSION.\r\nPillow version: 8.0.1\r\nTensorboard version: 2.4.0\r\ngdown version: NOT INSTALLED or UNKNOWN VERSION.\r\nTorchVision version: 0.8.1\r\nITK version: NOT INSTALLED or UNKNOWN VERSION.\r\ntqdm version: 4.53.0\r\nlmdb version: NOT INSTALLED or UNKNOWN VERSION.\r\n\r\nFor details about installing the optional dependencies, please visit:\r\n https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies\r\n```\r\n\r\n**Additional context**\r\nUsing `SupervisedEvaluator` and `SupervisedTrainer`.\n", "before_files": [{"content": "# Copyright 2020 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom typing import TYPE_CHECKING, Dict, Optional\n\nfrom monai.utils import exact_version, optional_import\n\nEvents, _ = optional_import(\"ignite.engine\", \"0.4.2\", exact_version, \"Events\")\nCheckpoint, _ = optional_import(\"ignite.handlers\", \"0.4.2\", exact_version, \"Checkpoint\")\nBaseSaveHandler, _ = optional_import(\"ignite.handlers.checkpoint\", \"0.4.2\", exact_version, \"BaseSaveHandler\")\n\nif TYPE_CHECKING:\n from ignite.engine import Engine\n from ignite.handlers import DiskSaver\nelse:\n Engine, _ = optional_import(\"ignite.engine\", \"0.4.2\", exact_version, \"Engine\")\n DiskSaver, _ = optional_import(\"ignite.handlers\", \"0.4.2\", exact_version, \"DiskSaver\")\n\n\nclass CheckpointSaver:\n \"\"\"\n CheckpointSaver acts as an Ignite handler to save checkpoint data into files.\n It supports to save according to metrics result, epoch number, iteration number\n and last model or exception.\n\n Args:\n save_dir: the target directory to save the checkpoints.\n save_dict: source objects that save to the checkpoint. 
examples::\n\n {'network': net, 'optimizer': optimizer, 'lr_scheduler': lr_scheduler}\n\n name: identifier of logging.logger to use, if None, defaulting to ``engine.logger``.\n file_prefix: prefix for the filenames to which objects will be saved.\n save_final: whether to save checkpoint or session at final iteration or exception.\n If checkpoints are to be saved when an exception is raised, put this handler before\n `StatsHandler` in the handler list, because the logic with Ignite can only trigger\n the first attached handler for `EXCEPTION_RAISED` event.\n final_filename: set a fixed filename to save the final model if `save_final=True`.\n If None, default to `checkpoint_final_iteration=N.pt`.\n save_key_metric: whether to save checkpoint or session when the value of key_metric is\n higher than all the previous values during training.keep 4 decimal places of metric,\n checkpoint name is: {file_prefix}_key_metric=0.XXXX.pth.\n key_metric_name: the name of key_metric in ignite metrics dictionary.\n If None, use `engine.state.key_metric` instead.\n key_metric_n_saved: save top N checkpoints or sessions, sorted by the value of key\n metric in descending order.\n key_metric_filename: set a fixed filename to set the best metric model, if not None,\n `key_metric_n_saved` should be 1 and only keep the best metric model.\n epoch_level: save checkpoint during training for every N epochs or every N iterations.\n `True` is epoch level, `False` is iteration level.\n save_interval: save checkpoint every N epochs, default is 0 to save no checkpoint.\n n_saved: save latest N checkpoints of epoch level or iteration level, 'None' is to save all.\n\n Note:\n CheckpointHandler can be used during training, validation or evaluation.\n example of saved files:\n\n - checkpoint_iteration=400.pt\n - checkpoint_iteration=800.pt\n - checkpoint_epoch=1.pt\n - checkpoint_final_iteration=1000.pt\n - checkpoint_key_metric=0.9387.pt\n\n \"\"\"\n\n def __init__(\n self,\n save_dir: str,\n save_dict: Dict,\n name: Optional[str] = None,\n file_prefix: str = \"\",\n save_final: bool = False,\n final_filename: Optional[str] = None,\n save_key_metric: bool = False,\n key_metric_name: Optional[str] = None,\n key_metric_n_saved: int = 1,\n key_metric_filename: Optional[str] = None,\n epoch_level: bool = True,\n save_interval: int = 0,\n n_saved: Optional[int] = None,\n ) -> None:\n assert save_dir is not None, \"must provide directory to save the checkpoints.\"\n self.save_dir = save_dir\n assert save_dict is not None and len(save_dict) > 0, \"must provide source objects to save.\"\n self.save_dict = save_dict\n self.logger = logging.getLogger(name)\n self.epoch_level = epoch_level\n self.save_interval = save_interval\n self._final_checkpoint = self._key_metric_checkpoint = self._interval_checkpoint = None\n self._name = name\n\n class _DiskSaver(DiskSaver):\n \"\"\"\n Enhance the DiskSaver to support fixed filename.\n\n \"\"\"\n\n def __init__(self, dirname: str, filename: Optional[str] = None):\n super().__init__(dirname=dirname, require_empty=False)\n self.filename = filename\n\n def __call__(self, checkpoint: Dict, filename: str, metadata: Optional[Dict] = None) -> None:\n if self.filename is not None:\n filename = self.filename\n super().__call__(checkpoint=checkpoint, filename=filename, metadata=metadata)\n\n def remove(self, filename: str) -> None:\n if self.filename is not None:\n filename = self.filename\n super().remove(filename=filename)\n\n if save_final:\n\n def _final_func(engine: Engine):\n return 
engine.state.iteration\n\n self._final_checkpoint = Checkpoint(\n to_save=self.save_dict,\n save_handler=_DiskSaver(dirname=self.save_dir, filename=final_filename),\n filename_prefix=file_prefix,\n score_function=_final_func,\n score_name=\"final_iteration\",\n )\n\n if save_key_metric:\n\n def _score_func(engine: Engine):\n if isinstance(key_metric_name, str):\n metric_name = key_metric_name\n elif hasattr(engine.state, \"key_metric_name\") and isinstance(engine.state.key_metric_name, str):\n metric_name = engine.state.key_metric_name\n else:\n raise ValueError(\n f\"Incompatible values: save_key_metric=True and key_metric_name={key_metric_name}.\"\n )\n return round(engine.state.metrics[metric_name], 4)\n\n if key_metric_filename is not None and key_metric_n_saved > 1:\n raise ValueError(\"if using fixed filename to save the best metric model, we should only save 1 model.\")\n\n self._key_metric_checkpoint = Checkpoint(\n to_save=self.save_dict,\n save_handler=_DiskSaver(dirname=self.save_dir, filename=key_metric_filename),\n filename_prefix=file_prefix,\n score_function=_score_func,\n score_name=\"key_metric\",\n n_saved=key_metric_n_saved,\n )\n\n if save_interval > 0:\n\n def _interval_func(engine: Engine):\n return engine.state.epoch if self.epoch_level else engine.state.iteration\n\n self._interval_checkpoint = Checkpoint(\n to_save=self.save_dict,\n save_handler=_DiskSaver(dirname=self.save_dir),\n filename_prefix=file_prefix,\n score_function=_interval_func,\n score_name=\"epoch\" if self.epoch_level else \"iteration\",\n n_saved=n_saved,\n )\n\n def attach(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n if self._name is None:\n self.logger = engine.logger\n if self._final_checkpoint is not None:\n engine.add_event_handler(Events.COMPLETED, self.completed)\n engine.add_event_handler(Events.EXCEPTION_RAISED, self.exception_raised)\n if self._key_metric_checkpoint is not None:\n engine.add_event_handler(Events.EPOCH_COMPLETED, self.metrics_completed)\n if self._interval_checkpoint is not None:\n if self.epoch_level:\n engine.add_event_handler(Events.EPOCH_COMPLETED(every=self.save_interval), self.interval_completed)\n else:\n engine.add_event_handler(Events.ITERATION_COMPLETED(every=self.save_interval), self.interval_completed)\n\n def completed(self, engine: Engine) -> None:\n \"\"\"Callback for train or validation/evaluation completed Event.\n Save final checkpoint if configure save_final is True.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n assert callable(self._final_checkpoint), \"Error: _final_checkpoint function not specified.\"\n self._final_checkpoint(engine)\n assert self.logger is not None\n assert hasattr(self.logger, \"info\"), \"Error, provided logger has not info attribute.\"\n self.logger.info(f\"Train completed, saved final checkpoint: {self._final_checkpoint.last_checkpoint}\")\n\n def exception_raised(self, engine: Engine, e: Exception) -> None:\n \"\"\"Callback for train or validation/evaluation exception raised Event.\n Save current data as final checkpoint if configure save_final is True. 
This callback may be skipped\n because the logic with Ignite can only trigger the first attached handler for `EXCEPTION_RAISED` event.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n e: the exception caught in Ignite during engine.run().\n \"\"\"\n assert callable(self._final_checkpoint), \"Error: _final_checkpoint function not specified.\"\n self._final_checkpoint(engine)\n assert self.logger is not None\n assert hasattr(self.logger, \"info\"), \"Error, provided logger has not info attribute.\"\n self.logger.info(f\"Exception_raised, saved exception checkpoint: {self._final_checkpoint.last_checkpoint}\")\n raise e\n\n def metrics_completed(self, engine: Engine) -> None:\n \"\"\"Callback to compare metrics and save models in train or validation when epoch completed.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n assert callable(self._key_metric_checkpoint), \"Error: _key_metric_checkpoint function not specified.\"\n self._key_metric_checkpoint(engine)\n\n def interval_completed(self, engine: Engine) -> None:\n \"\"\"Callback for train epoch/iteration completed Event.\n Save checkpoint if configure save_interval = N\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n assert callable(self._interval_checkpoint), \"Error: _interval_checkpoint function not specified.\"\n self._interval_checkpoint(engine)\n assert self.logger is not None\n assert hasattr(self.logger, \"info\"), \"Error, provided logger has not info attribute.\"\n if self.epoch_level:\n self.logger.info(f\"Saved checkpoint at epoch: {engine.state.epoch}\")\n else:\n self.logger.info(f\"Saved checkpoint at iteration: {engine.state.iteration}\")\n", "path": "monai/handlers/checkpoint_saver.py"}], "after_files": [{"content": "# Copyright 2020 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom typing import TYPE_CHECKING, Dict, Optional\n\nfrom monai.utils import exact_version, optional_import\n\nEvents, _ = optional_import(\"ignite.engine\", \"0.4.2\", exact_version, \"Events\")\nCheckpoint, _ = optional_import(\"ignite.handlers\", \"0.4.2\", exact_version, \"Checkpoint\")\nBaseSaveHandler, _ = optional_import(\"ignite.handlers.checkpoint\", \"0.4.2\", exact_version, \"BaseSaveHandler\")\n\nif TYPE_CHECKING:\n from ignite.engine import Engine\n from ignite.handlers import DiskSaver\nelse:\n Engine, _ = optional_import(\"ignite.engine\", \"0.4.2\", exact_version, \"Engine\")\n DiskSaver, _ = optional_import(\"ignite.handlers\", \"0.4.2\", exact_version, \"DiskSaver\")\n\n\nclass CheckpointSaver:\n \"\"\"\n CheckpointSaver acts as an Ignite handler to save checkpoint data into files.\n It supports to save according to metrics result, epoch number, iteration number\n and last model or exception.\n\n Args:\n save_dir: the target directory to save the checkpoints.\n save_dict: source objects that save to the checkpoint. 
examples::\n\n {'network': net, 'optimizer': optimizer, 'lr_scheduler': lr_scheduler}\n\n name: identifier of logging.logger to use, if None, defaulting to ``engine.logger``.\n file_prefix: prefix for the filenames to which objects will be saved.\n save_final: whether to save checkpoint or session at final iteration or exception.\n If checkpoints are to be saved when an exception is raised, put this handler before\n `StatsHandler` in the handler list, because the logic with Ignite can only trigger\n the first attached handler for `EXCEPTION_RAISED` event.\n final_filename: set a fixed filename to save the final model if `save_final=True`.\n If None, default to `checkpoint_final_iteration=N.pt`.\n save_key_metric: whether to save checkpoint or session when the value of key_metric is\n higher than all the previous values during training.keep 4 decimal places of metric,\n checkpoint name is: {file_prefix}_key_metric=0.XXXX.pth.\n key_metric_name: the name of key_metric in ignite metrics dictionary.\n If None, use `engine.state.key_metric` instead.\n key_metric_n_saved: save top N checkpoints or sessions, sorted by the value of key\n metric in descending order.\n key_metric_filename: set a fixed filename to set the best metric model, if not None,\n `key_metric_n_saved` should be 1 and only keep the best metric model.\n epoch_level: save checkpoint during training for every N epochs or every N iterations.\n `True` is epoch level, `False` is iteration level.\n save_interval: save checkpoint every N epochs, default is 0 to save no checkpoint.\n n_saved: save latest N checkpoints of epoch level or iteration level, 'None' is to save all.\n\n Note:\n CheckpointHandler can be used during training, validation or evaluation.\n example of saved files:\n\n - checkpoint_iteration=400.pt\n - checkpoint_iteration=800.pt\n - checkpoint_epoch=1.pt\n - checkpoint_final_iteration=1000.pt\n - checkpoint_key_metric=0.9387.pt\n\n \"\"\"\n\n def __init__(\n self,\n save_dir: str,\n save_dict: Dict,\n name: Optional[str] = None,\n file_prefix: str = \"\",\n save_final: bool = False,\n final_filename: Optional[str] = None,\n save_key_metric: bool = False,\n key_metric_name: Optional[str] = None,\n key_metric_n_saved: int = 1,\n key_metric_filename: Optional[str] = None,\n epoch_level: bool = True,\n save_interval: int = 0,\n n_saved: Optional[int] = None,\n ) -> None:\n assert save_dir is not None, \"must provide directory to save the checkpoints.\"\n self.save_dir = save_dir\n assert save_dict is not None and len(save_dict) > 0, \"must provide source objects to save.\"\n self.save_dict = save_dict\n self.logger = logging.getLogger(name)\n self.epoch_level = epoch_level\n self.save_interval = save_interval\n self._final_checkpoint = self._key_metric_checkpoint = self._interval_checkpoint = None\n self._name = name\n\n class _DiskSaver(DiskSaver):\n \"\"\"\n Enhance the DiskSaver to support fixed filename.\n\n \"\"\"\n\n def __init__(self, dirname: str, filename: Optional[str] = None):\n super().__init__(dirname=dirname, require_empty=False)\n self.filename = filename\n\n def __call__(self, checkpoint: Dict, filename: str, metadata: Optional[Dict] = None) -> None:\n if self.filename is not None:\n filename = self.filename\n super().__call__(checkpoint=checkpoint, filename=filename, metadata=metadata)\n\n def remove(self, filename: str) -> None:\n if self.filename is not None:\n filename = self.filename\n super().remove(filename=filename)\n\n if save_final:\n\n def _final_func(engine: Engine):\n return 
engine.state.iteration\n\n self._final_checkpoint = Checkpoint(\n to_save=self.save_dict,\n save_handler=_DiskSaver(dirname=self.save_dir, filename=final_filename),\n filename_prefix=file_prefix,\n score_function=_final_func,\n score_name=\"final_iteration\",\n )\n\n if save_key_metric:\n\n def _score_func(engine: Engine):\n if isinstance(key_metric_name, str):\n metric_name = key_metric_name\n elif hasattr(engine.state, \"key_metric_name\") and isinstance(engine.state.key_metric_name, str):\n metric_name = engine.state.key_metric_name\n else:\n raise ValueError(\n f\"Incompatible values: save_key_metric=True and key_metric_name={key_metric_name}.\"\n )\n return round(engine.state.metrics[metric_name], 4)\n\n if key_metric_filename is not None and key_metric_n_saved > 1:\n raise ValueError(\"if using fixed filename to save the best metric model, we should only save 1 model.\")\n\n self._key_metric_checkpoint = Checkpoint(\n to_save=self.save_dict,\n save_handler=_DiskSaver(dirname=self.save_dir, filename=key_metric_filename),\n filename_prefix=file_prefix,\n score_function=_score_func,\n score_name=\"key_metric\",\n n_saved=key_metric_n_saved,\n )\n\n if save_interval > 0:\n\n def _interval_func(engine: Engine):\n return engine.state.epoch if self.epoch_level else engine.state.iteration\n\n self._interval_checkpoint = Checkpoint(\n to_save=self.save_dict,\n save_handler=_DiskSaver(dirname=self.save_dir),\n filename_prefix=file_prefix,\n score_function=_interval_func,\n score_name=\"epoch\" if self.epoch_level else \"iteration\",\n n_saved=n_saved,\n )\n\n def attach(self, engine: Engine) -> None:\n \"\"\"\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n if self._name is None:\n self.logger = engine.logger\n if self._final_checkpoint is not None:\n engine.add_event_handler(Events.COMPLETED, self.completed)\n engine.add_event_handler(Events.EXCEPTION_RAISED, self.exception_raised)\n if self._key_metric_checkpoint is not None:\n engine.add_event_handler(Events.EPOCH_COMPLETED, self.metrics_completed)\n if self._interval_checkpoint is not None:\n if self.epoch_level:\n engine.add_event_handler(Events.EPOCH_COMPLETED(every=self.save_interval), self.interval_completed)\n else:\n engine.add_event_handler(Events.ITERATION_COMPLETED(every=self.save_interval), self.interval_completed)\n\n def _delete_previous_final_ckpt(self):\n saved = self._final_checkpoint._saved\n if len(saved) > 0:\n item = saved.pop(0)\n self._final_checkpoint.save_handler.remove(item.filename)\n self.logger.info(f\"Deleted previous saved final checkpoint: {item.filename}\")\n\n def completed(self, engine: Engine) -> None:\n \"\"\"Callback for train or validation/evaluation completed Event.\n Save final checkpoint if configure save_final is True.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n assert callable(self._final_checkpoint), \"Error: _final_checkpoint function not specified.\"\n # delete previous saved final checkpoint if existing\n self._delete_previous_final_ckpt()\n self._final_checkpoint(engine)\n assert self.logger is not None\n assert hasattr(self.logger, \"info\"), \"Error, provided logger has not info attribute.\"\n self.logger.info(f\"Train completed, saved final checkpoint: {self._final_checkpoint.last_checkpoint}\")\n\n def exception_raised(self, engine: Engine, e: Exception) -> None:\n \"\"\"Callback for train or validation/evaluation exception raised Event.\n Save current data as final checkpoint if configure save_final is 
True. This callback may be skipped\n because the logic with Ignite can only trigger the first attached handler for `EXCEPTION_RAISED` event.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n e: the exception caught in Ignite during engine.run().\n \"\"\"\n assert callable(self._final_checkpoint), \"Error: _final_checkpoint function not specified.\"\n # delete previous saved final checkpoint if existing\n self._delete_previous_final_ckpt()\n self._final_checkpoint(engine)\n assert self.logger is not None\n assert hasattr(self.logger, \"info\"), \"Error, provided logger has not info attribute.\"\n self.logger.info(f\"Exception_raised, saved exception checkpoint: {self._final_checkpoint.last_checkpoint}\")\n raise e\n\n def metrics_completed(self, engine: Engine) -> None:\n \"\"\"Callback to compare metrics and save models in train or validation when epoch completed.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n assert callable(self._key_metric_checkpoint), \"Error: _key_metric_checkpoint function not specified.\"\n self._key_metric_checkpoint(engine)\n\n def interval_completed(self, engine: Engine) -> None:\n \"\"\"Callback for train epoch/iteration completed Event.\n Save checkpoint if configure save_interval = N\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n \"\"\"\n assert callable(self._interval_checkpoint), \"Error: _interval_checkpoint function not specified.\"\n self._interval_checkpoint(engine)\n assert self.logger is not None\n assert hasattr(self.logger, \"info\"), \"Error, provided logger has not info attribute.\"\n if self.epoch_level:\n self.logger.info(f\"Saved checkpoint at epoch: {engine.state.epoch}\")\n else:\n self.logger.info(f\"Saved checkpoint at iteration: {engine.state.iteration}\")\n", "path": "monai/handlers/checkpoint_saver.py"}]} | 3,803 | 410 |
gh_patches_debug_19299 | rasdani/github-patches | git_diff | bentoml__BentoML-4685 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bug: module 'socket' has no attribute 'AF_UNIX'
### Describe the bug
Hello,
I'm trying to use BentoML by playing with the quickstart examples. When running the Iris classification example on a Windows machine, I get this error message:
```
File "C:\Users\Path\lib\site-packages\uvicorn\server.py", line 140, in startup
sock = socket.fromfd(config.fd, socket.AF_UNIX, socket.SOCK_STREAM)
AttributeError: module 'socket' has no attribute 'AF_UNIX'
```
I tried changing the socket attribute to AF_INET; the error message disappears, but the client cannot connect to the BentoML server.
Thanks,
### To reproduce
_No response_
### Expected behavior
_No response_
### Environment
bentoml:1.2.12
python:3.9.18
uvicorn:0.29.0
Windows: 11 Pro 22H2
--- END ISSUE ---
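As background for the traceback above: CPython's `socket` module only defines `AF_UNIX` on platforms with Unix domain socket support, so the attribute is simply missing on Windows builds. A quick, BentoML-free check:

```python
import socket

# Prints False on Windows builds of CPython, True on Linux/macOS.
print(hasattr(socket, "AF_UNIX"))
```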
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/_bentoml_impl/worker/service.py`
Content:
```
1 from __future__ import annotations
2
3 import json
4 import os
5 import typing as t
6
7 import click
8
9
10 @click.command()
11 @click.argument("bento_identifier", type=click.STRING, required=False, default=".")
12 @click.option("--service-name", type=click.STRING, required=False, default="")
13 @click.option(
14 "--fd",
15 type=click.INT,
16 required=True,
17 help="File descriptor of the socket to listen on",
18 )
19 @click.option(
20 "--runner-map",
21 type=click.STRING,
22 envvar="BENTOML_RUNNER_MAP",
23 help="JSON string of runners map, default sets to envars `BENTOML_RUNNER_MAP`",
24 )
25 @click.option(
26 "--backlog", type=click.INT, default=2048, help="Backlog size for the socket"
27 )
28 @click.option(
29 "--prometheus-dir",
30 type=click.Path(exists=True),
31 help="Required by prometheus to pass the metrics in multi-process mode",
32 )
33 @click.option(
34 "--worker-env", type=click.STRING, default=None, help="Environment variables"
35 )
36 @click.option(
37 "--worker-id",
38 required=False,
39 type=click.INT,
40 default=None,
41 help="If set, start the server as a bare worker with the given worker ID. Otherwise start a standalone server with a supervisor process.",
42 )
43 @click.option(
44 "--ssl-certfile",
45 type=str,
46 default=None,
47 help="SSL certificate file",
48 )
49 @click.option(
50 "--ssl-keyfile",
51 type=str,
52 default=None,
53 help="SSL key file",
54 )
55 @click.option(
56 "--ssl-keyfile-password",
57 type=str,
58 default=None,
59 help="SSL keyfile password",
60 )
61 @click.option(
62 "--ssl-version",
63 type=int,
64 default=None,
65 help="SSL version to use (see stdlib 'ssl' module)",
66 )
67 @click.option(
68 "--ssl-cert-reqs",
69 type=int,
70 default=None,
71 help="Whether client certificate is required (see stdlib 'ssl' module)",
72 )
73 @click.option(
74 "--ssl-ca-certs",
75 type=str,
76 default=None,
77 help="CA certificates file",
78 )
79 @click.option(
80 "--ssl-ciphers",
81 type=str,
82 default=None,
83 help="Ciphers to use (see stdlib 'ssl' module)",
84 )
85 @click.option(
86 "--development-mode",
87 type=click.BOOL,
88 help="Run the API server in development mode",
89 is_flag=True,
90 default=False,
91 show_default=True,
92 )
93 @click.option(
94 "--timeout",
95 type=click.INT,
96 help="Specify the timeout for API server",
97 )
98 def main(
99 bento_identifier: str,
100 service_name: str,
101 fd: int,
102 runner_map: str | None,
103 backlog: int,
104 worker_env: str | None,
105 worker_id: int | None,
106 prometheus_dir: str | None,
107 ssl_certfile: str | None,
108 ssl_keyfile: str | None,
109 ssl_keyfile_password: str | None,
110 ssl_version: int | None,
111 ssl_cert_reqs: int | None,
112 ssl_ca_certs: str | None,
113 ssl_ciphers: str | None,
114 development_mode: bool,
115 timeout: int,
116 ):
117 """
118 Start a HTTP server worker for given service.
119 """
120 import psutil
121 import uvicorn
122
123 if worker_env:
124 env_list: list[dict[str, t.Any]] = json.loads(worker_env)
125 if worker_id is not None:
126 # worker id from circus starts from 1
127 worker_key = worker_id - 1
128 if worker_key >= len(env_list):
129 raise IndexError(
130 f"Worker ID {worker_id} is out of range, "
131 f"the maximum worker ID is {len(env_list)}"
132 )
133 os.environ.update(env_list[worker_key])
134
135 from _bentoml_impl.loader import import_service
136 from bentoml._internal.container import BentoMLContainer
137 from bentoml._internal.context import server_context
138 from bentoml._internal.log import configure_server_logging
139
140 if runner_map:
141 BentoMLContainer.remote_runner_mapping.set(
142 t.cast(t.Dict[str, str], json.loads(runner_map))
143 )
144
145 service = import_service(bento_identifier)
146
147 if service_name and service_name != service.name:
148 service = service.find_dependent(service_name)
149 server_context.service_type = "service"
150 else:
151 server_context.service_type = "entry_service"
152
153 if worker_id is not None:
154 server_context.worker_index = worker_id
155
156 configure_server_logging()
157 BentoMLContainer.development_mode.set(development_mode)
158
159 if prometheus_dir is not None:
160 BentoMLContainer.prometheus_multiproc_dir.set(prometheus_dir)
161 server_context.service_name = service.name
162
163 asgi_app = service.to_asgi(
164 is_main=server_context.service_type == "entry_service", init=False
165 )
166
167 uvicorn_extra_options: dict[str, t.Any] = {}
168 if ssl_version is not None:
169 uvicorn_extra_options["ssl_version"] = ssl_version
170 if ssl_cert_reqs is not None:
171 uvicorn_extra_options["ssl_cert_reqs"] = ssl_cert_reqs
172 if ssl_ciphers is not None:
173 uvicorn_extra_options["ssl_ciphers"] = ssl_ciphers
174
175 if psutil.WINDOWS:
176 # 1. uvloop is not supported on Windows
177 # 2. the default policy for Python > 3.8 on Windows is ProactorEventLoop, which doesn't
178 # support listen on a existing socket file descriptors
179 # See https://docs.python.org/3.8/library/asyncio-platforms.html#windows
180 uvicorn_extra_options["loop"] = "asyncio"
181 import asyncio
182
183 asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) # type: ignore
184
185 uvicorn.run(
186 app=asgi_app,
187 fd=fd,
188 backlog=backlog,
189 log_config=None,
190 workers=1,
191 ssl_certfile=ssl_certfile,
192 ssl_keyfile=ssl_keyfile,
193 ssl_keyfile_password=ssl_keyfile_password,
194 ssl_ca_certs=ssl_ca_certs,
195 server_header=False,
196 **uvicorn_extra_options,
197 )
198
199
200 if __name__ == "__main__":
201 main() # pylint: disable=no-value-for-parameter
202
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/_bentoml_impl/worker/service.py b/src/_bentoml_impl/worker/service.py
--- a/src/_bentoml_impl/worker/service.py
+++ b/src/_bentoml_impl/worker/service.py
@@ -117,6 +117,8 @@
"""
Start a HTTP server worker for given service.
"""
+ import socket
+
import psutil
import uvicorn
@@ -182,9 +184,8 @@
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) # type: ignore
- uvicorn.run(
+ config = uvicorn.Config(
app=asgi_app,
- fd=fd,
backlog=backlog,
log_config=None,
workers=1,
@@ -195,6 +196,8 @@
server_header=False,
**uvicorn_extra_options,
)
+ socket = socket.socket(fileno=fd)
+ uvicorn.Server(config).run(sockets=[socket])
if __name__ == "__main__":
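Condensed, the fix replaces `uvicorn.run(fd=...)`, whose startup path calls `socket.fromfd(config.fd, socket.AF_UNIX, ...)`, with adopting the inherited descriptor directly and passing a ready socket to `uvicorn.Server`. A stripped-down sketch of that pattern (the `app`/`fd` arguments are placeholders supplied by the caller, and the option set is trimmed to the essentials):

```python
import socket

import uvicorn


def serve_on_inherited_fd(app, fd: int, backlog: int = 2048) -> None:
    """Serve an ASGI app on an already-bound socket handed down as a file descriptor."""
    config = uvicorn.Config(app=app, backlog=backlog, log_config=None, workers=1)
    sock = socket.socket(fileno=fd)            # adopts the fd without assuming AF_UNIX
    uvicorn.Server(config).run(sockets=[sock])
```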
| {"golden_diff": "diff --git a/src/_bentoml_impl/worker/service.py b/src/_bentoml_impl/worker/service.py\n--- a/src/_bentoml_impl/worker/service.py\n+++ b/src/_bentoml_impl/worker/service.py\n@@ -117,6 +117,8 @@\n \"\"\"\n Start a HTTP server worker for given service.\n \"\"\"\n+ import socket\n+\n import psutil\n import uvicorn\n \n@@ -182,9 +184,8 @@\n \n asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) # type: ignore\n \n- uvicorn.run(\n+ config = uvicorn.Config(\n app=asgi_app,\n- fd=fd,\n backlog=backlog,\n log_config=None,\n workers=1,\n@@ -195,6 +196,8 @@\n server_header=False,\n **uvicorn_extra_options,\n )\n+ socket = socket.socket(fileno=fd)\n+ uvicorn.Server(config).run(sockets=[socket])\n \n \n if __name__ == \"__main__\":\n", "issue": "bug: module 'socket' has no attribute 'AF_UNIX'\n### Describe the bug\r\n\r\nHello,\r\nI'm trying to use Bentoml by playing with the quick start examples. When running the Iris classification example on a windows machine, I have this error message:\r\n```\r\nFile \"C:\\Users\\Path\\lib\\site-packages\\uvicorn\\server.py\", line 140, in startup\r\n sock = socket.fromfd(config.fd, socket.AF_UNIX, socket.SOCK_STREAM)\r\nAttributeError: module 'socket' has no attribute 'AF_UNIX'\r\n```\r\n\r\nI tried to change the socket attribute to AF_INET, the error messages disappear but the client cannot connect to the bentoml server.\r\n\r\nThanks,\r\n\r\n\r\n### To reproduce\r\n\r\n_No response_\r\n\r\n### Expected behavior\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\nbentoml:1.2.12\r\npython:3.9.18\r\nuvicorn:0.29.0\r\nWindows: 11 Pro 22H2\n", "before_files": [{"content": "from __future__ import annotations\n\nimport json\nimport os\nimport typing as t\n\nimport click\n\n\[email protected]()\[email protected](\"bento_identifier\", type=click.STRING, required=False, default=\".\")\[email protected](\"--service-name\", type=click.STRING, required=False, default=\"\")\[email protected](\n \"--fd\",\n type=click.INT,\n required=True,\n help=\"File descriptor of the socket to listen on\",\n)\[email protected](\n \"--runner-map\",\n type=click.STRING,\n envvar=\"BENTOML_RUNNER_MAP\",\n help=\"JSON string of runners map, default sets to envars `BENTOML_RUNNER_MAP`\",\n)\[email protected](\n \"--backlog\", type=click.INT, default=2048, help=\"Backlog size for the socket\"\n)\[email protected](\n \"--prometheus-dir\",\n type=click.Path(exists=True),\n help=\"Required by prometheus to pass the metrics in multi-process mode\",\n)\[email protected](\n \"--worker-env\", type=click.STRING, default=None, help=\"Environment variables\"\n)\[email protected](\n \"--worker-id\",\n required=False,\n type=click.INT,\n default=None,\n help=\"If set, start the server as a bare worker with the given worker ID. 
Otherwise start a standalone server with a supervisor process.\",\n)\[email protected](\n \"--ssl-certfile\",\n type=str,\n default=None,\n help=\"SSL certificate file\",\n)\[email protected](\n \"--ssl-keyfile\",\n type=str,\n default=None,\n help=\"SSL key file\",\n)\[email protected](\n \"--ssl-keyfile-password\",\n type=str,\n default=None,\n help=\"SSL keyfile password\",\n)\[email protected](\n \"--ssl-version\",\n type=int,\n default=None,\n help=\"SSL version to use (see stdlib 'ssl' module)\",\n)\[email protected](\n \"--ssl-cert-reqs\",\n type=int,\n default=None,\n help=\"Whether client certificate is required (see stdlib 'ssl' module)\",\n)\[email protected](\n \"--ssl-ca-certs\",\n type=str,\n default=None,\n help=\"CA certificates file\",\n)\[email protected](\n \"--ssl-ciphers\",\n type=str,\n default=None,\n help=\"Ciphers to use (see stdlib 'ssl' module)\",\n)\[email protected](\n \"--development-mode\",\n type=click.BOOL,\n help=\"Run the API server in development mode\",\n is_flag=True,\n default=False,\n show_default=True,\n)\[email protected](\n \"--timeout\",\n type=click.INT,\n help=\"Specify the timeout for API server\",\n)\ndef main(\n bento_identifier: str,\n service_name: str,\n fd: int,\n runner_map: str | None,\n backlog: int,\n worker_env: str | None,\n worker_id: int | None,\n prometheus_dir: str | None,\n ssl_certfile: str | None,\n ssl_keyfile: str | None,\n ssl_keyfile_password: str | None,\n ssl_version: int | None,\n ssl_cert_reqs: int | None,\n ssl_ca_certs: str | None,\n ssl_ciphers: str | None,\n development_mode: bool,\n timeout: int,\n):\n \"\"\"\n Start a HTTP server worker for given service.\n \"\"\"\n import psutil\n import uvicorn\n\n if worker_env:\n env_list: list[dict[str, t.Any]] = json.loads(worker_env)\n if worker_id is not None:\n # worker id from circus starts from 1\n worker_key = worker_id - 1\n if worker_key >= len(env_list):\n raise IndexError(\n f\"Worker ID {worker_id} is out of range, \"\n f\"the maximum worker ID is {len(env_list)}\"\n )\n os.environ.update(env_list[worker_key])\n\n from _bentoml_impl.loader import import_service\n from bentoml._internal.container import BentoMLContainer\n from bentoml._internal.context import server_context\n from bentoml._internal.log import configure_server_logging\n\n if runner_map:\n BentoMLContainer.remote_runner_mapping.set(\n t.cast(t.Dict[str, str], json.loads(runner_map))\n )\n\n service = import_service(bento_identifier)\n\n if service_name and service_name != service.name:\n service = service.find_dependent(service_name)\n server_context.service_type = \"service\"\n else:\n server_context.service_type = \"entry_service\"\n\n if worker_id is not None:\n server_context.worker_index = worker_id\n\n configure_server_logging()\n BentoMLContainer.development_mode.set(development_mode)\n\n if prometheus_dir is not None:\n BentoMLContainer.prometheus_multiproc_dir.set(prometheus_dir)\n server_context.service_name = service.name\n\n asgi_app = service.to_asgi(\n is_main=server_context.service_type == \"entry_service\", init=False\n )\n\n uvicorn_extra_options: dict[str, t.Any] = {}\n if ssl_version is not None:\n uvicorn_extra_options[\"ssl_version\"] = ssl_version\n if ssl_cert_reqs is not None:\n uvicorn_extra_options[\"ssl_cert_reqs\"] = ssl_cert_reqs\n if ssl_ciphers is not None:\n uvicorn_extra_options[\"ssl_ciphers\"] = ssl_ciphers\n\n if psutil.WINDOWS:\n # 1. uvloop is not supported on Windows\n # 2. 
the default policy for Python > 3.8 on Windows is ProactorEventLoop, which doesn't\n # support listen on a existing socket file descriptors\n # See https://docs.python.org/3.8/library/asyncio-platforms.html#windows\n uvicorn_extra_options[\"loop\"] = \"asyncio\"\n import asyncio\n\n asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) # type: ignore\n\n uvicorn.run(\n app=asgi_app,\n fd=fd,\n backlog=backlog,\n log_config=None,\n workers=1,\n ssl_certfile=ssl_certfile,\n ssl_keyfile=ssl_keyfile,\n ssl_keyfile_password=ssl_keyfile_password,\n ssl_ca_certs=ssl_ca_certs,\n server_header=False,\n **uvicorn_extra_options,\n )\n\n\nif __name__ == \"__main__\":\n main() # pylint: disable=no-value-for-parameter\n", "path": "src/_bentoml_impl/worker/service.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport json\nimport os\nimport typing as t\n\nimport click\n\n\[email protected]()\[email protected](\"bento_identifier\", type=click.STRING, required=False, default=\".\")\[email protected](\"--service-name\", type=click.STRING, required=False, default=\"\")\[email protected](\n \"--fd\",\n type=click.INT,\n required=True,\n help=\"File descriptor of the socket to listen on\",\n)\[email protected](\n \"--runner-map\",\n type=click.STRING,\n envvar=\"BENTOML_RUNNER_MAP\",\n help=\"JSON string of runners map, default sets to envars `BENTOML_RUNNER_MAP`\",\n)\[email protected](\n \"--backlog\", type=click.INT, default=2048, help=\"Backlog size for the socket\"\n)\[email protected](\n \"--prometheus-dir\",\n type=click.Path(exists=True),\n help=\"Required by prometheus to pass the metrics in multi-process mode\",\n)\[email protected](\n \"--worker-env\", type=click.STRING, default=None, help=\"Environment variables\"\n)\[email protected](\n \"--worker-id\",\n required=False,\n type=click.INT,\n default=None,\n help=\"If set, start the server as a bare worker with the given worker ID. 
Otherwise start a standalone server with a supervisor process.\",\n)\[email protected](\n \"--ssl-certfile\",\n type=str,\n default=None,\n help=\"SSL certificate file\",\n)\[email protected](\n \"--ssl-keyfile\",\n type=str,\n default=None,\n help=\"SSL key file\",\n)\[email protected](\n \"--ssl-keyfile-password\",\n type=str,\n default=None,\n help=\"SSL keyfile password\",\n)\[email protected](\n \"--ssl-version\",\n type=int,\n default=None,\n help=\"SSL version to use (see stdlib 'ssl' module)\",\n)\[email protected](\n \"--ssl-cert-reqs\",\n type=int,\n default=None,\n help=\"Whether client certificate is required (see stdlib 'ssl' module)\",\n)\[email protected](\n \"--ssl-ca-certs\",\n type=str,\n default=None,\n help=\"CA certificates file\",\n)\[email protected](\n \"--ssl-ciphers\",\n type=str,\n default=None,\n help=\"Ciphers to use (see stdlib 'ssl' module)\",\n)\[email protected](\n \"--development-mode\",\n type=click.BOOL,\n help=\"Run the API server in development mode\",\n is_flag=True,\n default=False,\n show_default=True,\n)\[email protected](\n \"--timeout\",\n type=click.INT,\n help=\"Specify the timeout for API server\",\n)\ndef main(\n bento_identifier: str,\n service_name: str,\n fd: int,\n runner_map: str | None,\n backlog: int,\n worker_env: str | None,\n worker_id: int | None,\n prometheus_dir: str | None,\n ssl_certfile: str | None,\n ssl_keyfile: str | None,\n ssl_keyfile_password: str | None,\n ssl_version: int | None,\n ssl_cert_reqs: int | None,\n ssl_ca_certs: str | None,\n ssl_ciphers: str | None,\n development_mode: bool,\n timeout: int,\n):\n \"\"\"\n Start a HTTP server worker for given service.\n \"\"\"\n import socket\n\n import psutil\n import uvicorn\n\n if worker_env:\n env_list: list[dict[str, t.Any]] = json.loads(worker_env)\n if worker_id is not None:\n # worker id from circus starts from 1\n worker_key = worker_id - 1\n if worker_key >= len(env_list):\n raise IndexError(\n f\"Worker ID {worker_id} is out of range, \"\n f\"the maximum worker ID is {len(env_list)}\"\n )\n os.environ.update(env_list[worker_key])\n\n from _bentoml_impl.loader import import_service\n from bentoml._internal.container import BentoMLContainer\n from bentoml._internal.context import server_context\n from bentoml._internal.log import configure_server_logging\n\n if runner_map:\n BentoMLContainer.remote_runner_mapping.set(\n t.cast(t.Dict[str, str], json.loads(runner_map))\n )\n\n service = import_service(bento_identifier)\n\n if service_name and service_name != service.name:\n service = service.find_dependent(service_name)\n server_context.service_type = \"service\"\n else:\n server_context.service_type = \"entry_service\"\n\n if worker_id is not None:\n server_context.worker_index = worker_id\n\n configure_server_logging()\n BentoMLContainer.development_mode.set(development_mode)\n\n if prometheus_dir is not None:\n BentoMLContainer.prometheus_multiproc_dir.set(prometheus_dir)\n server_context.service_name = service.name\n\n asgi_app = service.to_asgi(\n is_main=server_context.service_type == \"entry_service\", init=False\n )\n\n uvicorn_extra_options: dict[str, t.Any] = {}\n if ssl_version is not None:\n uvicorn_extra_options[\"ssl_version\"] = ssl_version\n if ssl_cert_reqs is not None:\n uvicorn_extra_options[\"ssl_cert_reqs\"] = ssl_cert_reqs\n if ssl_ciphers is not None:\n uvicorn_extra_options[\"ssl_ciphers\"] = ssl_ciphers\n\n if psutil.WINDOWS:\n # 1. uvloop is not supported on Windows\n # 2. 
the default policy for Python > 3.8 on Windows is ProactorEventLoop, which doesn't\n # support listen on a existing socket file descriptors\n # See https://docs.python.org/3.8/library/asyncio-platforms.html#windows\n uvicorn_extra_options[\"loop\"] = \"asyncio\"\n import asyncio\n\n asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) # type: ignore\n\n config = uvicorn.Config(\n app=asgi_app,\n backlog=backlog,\n log_config=None,\n workers=1,\n ssl_certfile=ssl_certfile,\n ssl_keyfile=ssl_keyfile,\n ssl_keyfile_password=ssl_keyfile_password,\n ssl_ca_certs=ssl_ca_certs,\n server_header=False,\n **uvicorn_extra_options,\n )\n socket = socket.socket(fileno=fd)\n uvicorn.Server(config).run(sockets=[socket])\n\n\nif __name__ == \"__main__\":\n main() # pylint: disable=no-value-for-parameter\n", "path": "src/_bentoml_impl/worker/service.py"}]} | 2,365 | 238 |
gh_patches_debug_18185 | rasdani/github-patches | git_diff | mozilla__bugbug-214 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use the bug snapshot transform in the "uplift" model
Depends on #5.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bugbug/models/uplift.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import xgboost
7 from imblearn.under_sampling import RandomUnderSampler
8 from sklearn.compose import ColumnTransformer
9 from sklearn.feature_extraction import DictVectorizer
10 from sklearn.pipeline import Pipeline
11
12 from bugbug import bug_features
13 from bugbug import bugzilla
14 from bugbug.model import Model
15
16
17 class UpliftModel(Model):
18 def __init__(self, lemmatization=False):
19 Model.__init__(self, lemmatization)
20
21 self.sampler = RandomUnderSampler(random_state=0)
22
23 feature_extractors = [
24 bug_features.has_str(),
25 bug_features.has_regression_range(),
26 bug_features.severity(),
27 bug_features.keywords(),
28 bug_features.is_coverity_issue(),
29 bug_features.has_crash_signature(),
30 bug_features.has_url(),
31 bug_features.has_w3c_url(),
32 bug_features.has_github_url(),
33 bug_features.whiteboard(),
34 bug_features.patches(),
35 bug_features.landings(),
36 bug_features.title(),
37 ]
38
39 cleanup_functions = [
40 bug_features.cleanup_fileref,
41 bug_features.cleanup_url,
42 bug_features.cleanup_synonyms,
43 ]
44
45 self.extraction_pipeline = Pipeline([
46 ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions)),
47 ('union', ColumnTransformer([
48 ('data', DictVectorizer(), 'data'),
49
50 ('title', self.text_vectorizer(), 'title'),
51
52 ('comments', self.text_vectorizer(), 'comments'),
53 ])),
54 ])
55
56 self.clf = xgboost.XGBClassifier(n_jobs=16)
57 self.clf.set_params(predictor='cpu_predictor')
58
59 def get_labels(self):
60 classes = {}
61
62 for bug_data in bugzilla.get_bugs():
63 bug_id = int(bug_data['id'])
64
65 for attachment in bug_data['attachments']:
66 for flag in attachment['flags']:
67 if not flag['name'].startswith('approval-mozilla-') or flag['status'] not in ['+', '-']:
68 continue
69
70 if flag['status'] == '+':
71 classes[bug_id] = 1
72 elif flag['status'] == '-':
73 classes[bug_id] = 0
74
75 return classes
76
77 def get_feature_names(self):
78 return self.extraction_pipeline.named_steps['union'].get_feature_names()
79
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bugbug/models/uplift.py b/bugbug/models/uplift.py
--- a/bugbug/models/uplift.py
+++ b/bugbug/models/uplift.py
@@ -43,7 +43,7 @@
]
self.extraction_pipeline = Pipeline([
- ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions)),
+ ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions, rollback=True, rollback_when=self.rollback)),
('union', ColumnTransformer([
('data', DictVectorizer(), 'data'),
@@ -56,6 +56,9 @@
self.clf = xgboost.XGBClassifier(n_jobs=16)
self.clf.set_params(predictor='cpu_predictor')
+ def rollback(self, change):
+ return (change['field_name'] == 'flagtypes.name' and change['added'].startswith('approval-mozilla-') and (change['added'].endswith('+') or change['added'].endswith('-')))
+
def get_labels(self):
classes = {}
| {"golden_diff": "diff --git a/bugbug/models/uplift.py b/bugbug/models/uplift.py\n--- a/bugbug/models/uplift.py\n+++ b/bugbug/models/uplift.py\n@@ -43,7 +43,7 @@\n ]\n \n self.extraction_pipeline = Pipeline([\n- ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions)),\n+ ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions, rollback=True, rollback_when=self.rollback)),\n ('union', ColumnTransformer([\n ('data', DictVectorizer(), 'data'),\n \n@@ -56,6 +56,9 @@\n self.clf = xgboost.XGBClassifier(n_jobs=16)\n self.clf.set_params(predictor='cpu_predictor')\n \n+ def rollback(self, change):\n+ return (change['field_name'] == 'flagtypes.name' and change['added'].startswith('approval-mozilla-') and (change['added'].endswith('+') or change['added'].endswith('-')))\n+\n def get_labels(self):\n classes = {}\n", "issue": "Use the bug snapshot transform in the \"uplift\" model\nDepends on #5.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport xgboost\nfrom imblearn.under_sampling import RandomUnderSampler\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features\nfrom bugbug import bugzilla\nfrom bugbug.model import Model\n\n\nclass UpliftModel(Model):\n def __init__(self, lemmatization=False):\n Model.__init__(self, lemmatization)\n\n self.sampler = RandomUnderSampler(random_state=0)\n\n feature_extractors = [\n bug_features.has_str(),\n bug_features.has_regression_range(),\n bug_features.severity(),\n bug_features.keywords(),\n bug_features.is_coverity_issue(),\n bug_features.has_crash_signature(),\n bug_features.has_url(),\n bug_features.has_w3c_url(),\n bug_features.has_github_url(),\n bug_features.whiteboard(),\n bug_features.patches(),\n bug_features.landings(),\n bug_features.title(),\n ]\n\n cleanup_functions = [\n bug_features.cleanup_fileref,\n bug_features.cleanup_url,\n bug_features.cleanup_synonyms,\n ]\n\n self.extraction_pipeline = Pipeline([\n ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions)),\n ('union', ColumnTransformer([\n ('data', DictVectorizer(), 'data'),\n\n ('title', self.text_vectorizer(), 'title'),\n\n ('comments', self.text_vectorizer(), 'comments'),\n ])),\n ])\n\n self.clf = xgboost.XGBClassifier(n_jobs=16)\n self.clf.set_params(predictor='cpu_predictor')\n\n def get_labels(self):\n classes = {}\n\n for bug_data in bugzilla.get_bugs():\n bug_id = int(bug_data['id'])\n\n for attachment in bug_data['attachments']:\n for flag in attachment['flags']:\n if not flag['name'].startswith('approval-mozilla-') or flag['status'] not in ['+', '-']:\n continue\n\n if flag['status'] == '+':\n classes[bug_id] = 1\n elif flag['status'] == '-':\n classes[bug_id] = 0\n\n return classes\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps['union'].get_feature_names()\n", "path": "bugbug/models/uplift.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport xgboost\nfrom imblearn.under_sampling import RandomUnderSampler\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.feature_extraction import DictVectorizer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import bug_features\nfrom bugbug import bugzilla\nfrom bugbug.model import Model\n\n\nclass UpliftModel(Model):\n def __init__(self, lemmatization=False):\n Model.__init__(self, lemmatization)\n\n self.sampler = RandomUnderSampler(random_state=0)\n\n feature_extractors = [\n bug_features.has_str(),\n bug_features.has_regression_range(),\n bug_features.severity(),\n bug_features.keywords(),\n bug_features.is_coverity_issue(),\n bug_features.has_crash_signature(),\n bug_features.has_url(),\n bug_features.has_w3c_url(),\n bug_features.has_github_url(),\n bug_features.whiteboard(),\n bug_features.patches(),\n bug_features.landings(),\n bug_features.title(),\n ]\n\n cleanup_functions = [\n bug_features.cleanup_fileref,\n bug_features.cleanup_url,\n bug_features.cleanup_synonyms,\n ]\n\n self.extraction_pipeline = Pipeline([\n ('bug_extractor', bug_features.BugExtractor(feature_extractors, cleanup_functions, rollback=True, rollback_when=self.rollback)),\n ('union', ColumnTransformer([\n ('data', DictVectorizer(), 'data'),\n\n ('title', self.text_vectorizer(), 'title'),\n\n ('comments', self.text_vectorizer(), 'comments'),\n ])),\n ])\n\n self.clf = xgboost.XGBClassifier(n_jobs=16)\n self.clf.set_params(predictor='cpu_predictor')\n\n def rollback(self, change):\n return (change['field_name'] == 'flagtypes.name' and change['added'].startswith('approval-mozilla-') and (change['added'].endswith('+') or change['added'].endswith('-')))\n\n def get_labels(self):\n classes = {}\n\n for bug_data in bugzilla.get_bugs():\n bug_id = int(bug_data['id'])\n\n for attachment in bug_data['attachments']:\n for flag in attachment['flags']:\n if not flag['name'].startswith('approval-mozilla-') or flag['status'] not in ['+', '-']:\n continue\n\n if flag['status'] == '+':\n classes[bug_id] = 1\n elif flag['status'] == '-':\n classes[bug_id] = 0\n\n return classes\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps['union'].get_feature_names()\n", "path": "bugbug/models/uplift.py"}]} | 972 | 231 |
gh_patches_debug_35982 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-260 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
gRPC server instrumentation creates multiple traces on streaming requests
**Environment**
Current `master` code, basically the sample code in the documentation, testing with a unary request vs. a streaming request.
**Steps to reproduce**
Create a simple gRPC servicer with two RPCs, one which returns a single message (the unary response), and one which yields items in a list for a streaming response.
The key here is to make an instrumented request within the primary request handler (I'm using a simple HTTP get with the Requests instrumentation), so you get an _additional_ span which should be attached to the same trace.
**What is the expected behavior?**
A single trace with the main span, and a second child span for the HTTP request.
**What is the actual behavior?**
Two separate traces, each containing a single span.
**Additional context**
The problem _only_ occurs on streaming requests - I'm sure the reworking I did as part of https://github.com/open-telemetry/opentelemetry-python/pull/1171 is where the problem started; I didn't take into account the streaming case specifically with multiple spans, and naturally, there are no tests for anything streaming, only unary responses.
So as part of this, we'll need some useful tests as well. I'll see if I can write up my test case as an actual test case.
And again, I've got a vested interest in this working, so I'll have a PR up soon.
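For reference, a minimal sketch of a servicer that reproduces this; the `demo_pb2*` stubs and the URL are hypothetical, and only the unary-vs-streaming shape matters:

```python
import requests            # instrumented by the Requests instrumentation

import demo_pb2            # assumed proto-generated modules, not part of
import demo_pb2_grpc       # the original report


class DemoServicer(demo_pb2_grpc.DemoServicer):
    def GetOne(self, request, context):
        # unary: the HTTP client span joins the server span's trace
        requests.get("http://example.com/")
        return demo_pb2.Item(name="one")

    def ListAll(self, request, context):
        # streaming: the generator body runs after the interceptor's span
        # context has already exited, so the HTTP client span starts a new trace
        requests.get("http://example.com/")
        for name in ("a", "b", "c"):
            yield demo_pb2.Item(name=name)
```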
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/_server.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # pylint:disable=relative-beyond-top-level
16 # pylint:disable=arguments-differ
17 # pylint:disable=no-member
18 # pylint:disable=signature-differs
19
20 """
21 Implementation of the service-side open-telemetry interceptor.
22 """
23
24 import logging
25 from contextlib import contextmanager
26
27 import grpc
28
29 from opentelemetry import propagators, trace
30 from opentelemetry.context import attach, detach
31 from opentelemetry.trace.propagation.textmap import DictGetter
32 from opentelemetry.trace.status import Status, StatusCode
33
34 logger = logging.getLogger(__name__)
35
36
37 # wrap an RPC call
38 # see https://github.com/grpc/grpc/issues/18191
39 def _wrap_rpc_behavior(handler, continuation):
40 if handler is None:
41 return None
42
43 if handler.request_streaming and handler.response_streaming:
44 behavior_fn = handler.stream_stream
45 handler_factory = grpc.stream_stream_rpc_method_handler
46 elif handler.request_streaming and not handler.response_streaming:
47 behavior_fn = handler.stream_unary
48 handler_factory = grpc.stream_unary_rpc_method_handler
49 elif not handler.request_streaming and handler.response_streaming:
50 behavior_fn = handler.unary_stream
51 handler_factory = grpc.unary_stream_rpc_method_handler
52 else:
53 behavior_fn = handler.unary_unary
54 handler_factory = grpc.unary_unary_rpc_method_handler
55
56 return handler_factory(
57 continuation(
58 behavior_fn, handler.request_streaming, handler.response_streaming
59 ),
60 request_deserializer=handler.request_deserializer,
61 response_serializer=handler.response_serializer,
62 )
63
64
65 # pylint:disable=abstract-method
66 class _OpenTelemetryServicerContext(grpc.ServicerContext):
67 def __init__(self, servicer_context, active_span):
68 self._servicer_context = servicer_context
69 self._active_span = active_span
70 self.code = grpc.StatusCode.OK
71 self.details = None
72 super().__init__()
73
74 def is_active(self, *args, **kwargs):
75 return self._servicer_context.is_active(*args, **kwargs)
76
77 def time_remaining(self, *args, **kwargs):
78 return self._servicer_context.time_remaining(*args, **kwargs)
79
80 def cancel(self, *args, **kwargs):
81 return self._servicer_context.cancel(*args, **kwargs)
82
83 def add_callback(self, *args, **kwargs):
84 return self._servicer_context.add_callback(*args, **kwargs)
85
86 def disable_next_message_compression(self):
87 return self._service_context.disable_next_message_compression()
88
89 def invocation_metadata(self, *args, **kwargs):
90 return self._servicer_context.invocation_metadata(*args, **kwargs)
91
92 def peer(self):
93 return self._servicer_context.peer()
94
95 def peer_identities(self):
96 return self._servicer_context.peer_identities()
97
98 def peer_identity_key(self):
99 return self._servicer_context.peer_identity_key()
100
101 def auth_context(self):
102 return self._servicer_context.auth_context()
103
104 def set_compression(self, compression):
105 return self._servicer_context.set_compression(compression)
106
107 def send_initial_metadata(self, *args, **kwargs):
108 return self._servicer_context.send_initial_metadata(*args, **kwargs)
109
110 def set_trailing_metadata(self, *args, **kwargs):
111 return self._servicer_context.set_trailing_metadata(*args, **kwargs)
112
113 def abort(self, code, details):
114 self.code = code
115 self.details = details
116 self._active_span.set_attribute("rpc.grpc.status_code", code.value[0])
117 self._active_span.set_status(
118 Status(
119 status_code=StatusCode.ERROR,
120 description="{}:{}".format(code, details),
121 )
122 )
123 return self._servicer_context.abort(code, details)
124
125 def abort_with_status(self, status):
126 return self._servicer_context.abort_with_status(status)
127
128 def set_code(self, code):
129 self.code = code
130 # use details if we already have it, otherwise the status description
131 details = self.details or code.value[1]
132 self._active_span.set_attribute("rpc.grpc.status_code", code.value[0])
133 if code != grpc.StatusCode.OK:
134 self._active_span.set_status(
135 Status(
136 status_code=StatusCode.ERROR,
137 description="{}:{}".format(code, details),
138 )
139 )
140 return self._servicer_context.set_code(code)
141
142 def set_details(self, details):
143 self.details = details
144 if self.code != grpc.StatusCode.OK:
145 self._active_span.set_status(
146 Status(
147 status_code=StatusCode.ERROR,
148 description="{}:{}".format(self.code, details),
149 )
150 )
151 return self._servicer_context.set_details(details)
152
153
154 # pylint:disable=abstract-method
155 # pylint:disable=no-self-use
156 # pylint:disable=unused-argument
157 class OpenTelemetryServerInterceptor(grpc.ServerInterceptor):
158 """
159 A gRPC server interceptor, to add OpenTelemetry.
160
161 Usage::
162
163 tracer = some OpenTelemetry tracer
164
165 interceptors = [
166 OpenTelemetryServerInterceptor(tracer),
167 ]
168
169 server = grpc.server(
170 futures.ThreadPoolExecutor(max_workers=concurrency),
171 interceptors = interceptors)
172
173 """
174
175 def __init__(self, tracer):
176 self._tracer = tracer
177 self._carrier_getter = DictGetter()
178
179 @contextmanager
180 def _set_remote_context(self, servicer_context):
181 metadata = servicer_context.invocation_metadata()
182 if metadata:
183 md_dict = {md.key: md.value for md in metadata}
184 ctx = propagators.extract(self._carrier_getter, md_dict)
185 token = attach(ctx)
186 try:
187 yield
188 finally:
189 detach(token)
190 else:
191 yield
192
193 def _start_span(self, handler_call_details, context):
194
195 # standard attributes
196 attributes = {
197 "rpc.system": "grpc",
198 "rpc.grpc.status_code": grpc.StatusCode.OK.value[0],
199 }
200
201 # if we have details about the call, split into service and method
202 if handler_call_details.method:
203 service, method = handler_call_details.method.lstrip("/").split(
204 "/", 1
205 )
206 attributes.update({"rpc.method": method, "rpc.service": service})
207
208 # add some attributes from the metadata
209 metadata = dict(context.invocation_metadata())
210 if "user-agent" in metadata:
211 attributes["rpc.user_agent"] = metadata["user-agent"]
212
213 # Split up the peer to keep with how other telemetry sources
214 # do it. This looks like:
215 # * ipv6:[::1]:57284
216 # * ipv4:127.0.0.1:57284
217 # * ipv4:10.2.1.1:57284,127.0.0.1:57284
218 #
219 try:
220 ip, port = (
221 context.peer().split(",")[0].split(":", 1)[1].rsplit(":", 1)
222 )
223 attributes.update({"net.peer.ip": ip, "net.peer.port": port})
224
225 # other telemetry sources add this, so we will too
226 if ip in ("[::1]", "127.0.0.1"):
227 attributes["net.peer.name"] = "localhost"
228
229 except IndexError:
230 logger.warning("Failed to parse peer address '%s'", context.peer())
231
232 return self._tracer.start_as_current_span(
233 name=handler_call_details.method,
234 kind=trace.SpanKind.SERVER,
235 attributes=attributes,
236 )
237
238 def intercept_service(self, continuation, handler_call_details):
239 def telemetry_wrapper(behavior, request_streaming, response_streaming):
240 def telemetry_interceptor(request_or_iterator, context):
241
242 with self._set_remote_context(context):
243 with self._start_span(
244 handler_call_details, context
245 ) as span:
246 # wrap the context
247 context = _OpenTelemetryServicerContext(context, span)
248
249 # And now we run the actual RPC.
250 try:
251 return behavior(request_or_iterator, context)
252 except Exception as error:
253 # Bare exceptions are likely to be gRPC aborts, which
254 # we handle in our context wrapper.
255 # Here, we're interested in uncaught exceptions.
256 # pylint:disable=unidiomatic-typecheck
257 if type(error) != Exception:
258 span.record_exception(error)
259 raise error
260
261 return telemetry_interceptor
262
263 return _wrap_rpc_behavior(
264 continuation(handler_call_details), telemetry_wrapper
265 )
266
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/_server.py b/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/_server.py
--- a/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/_server.py
+++ b/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/_server.py
@@ -239,6 +239,15 @@
def telemetry_wrapper(behavior, request_streaming, response_streaming):
def telemetry_interceptor(request_or_iterator, context):
+ # handle streaming responses specially
+ if response_streaming:
+ return self._intercept_server_stream(
+ behavior,
+ handler_call_details,
+ request_or_iterator,
+ context,
+ )
+
with self._set_remote_context(context):
with self._start_span(
handler_call_details, context
@@ -249,6 +258,7 @@
# And now we run the actual RPC.
try:
return behavior(request_or_iterator, context)
+
except Exception as error:
# Bare exceptions are likely to be gRPC aborts, which
# we handle in our context wrapper.
@@ -263,3 +273,23 @@
return _wrap_rpc_behavior(
continuation(handler_call_details), telemetry_wrapper
)
+
+ # Handle streaming responses separately - we have to do this
+ # to return a *new* generator or various upstream things
+ # get confused, or we'll lose the consistent trace
+ def _intercept_server_stream(
+ self, behavior, handler_call_details, request_or_iterator, context
+ ):
+
+ with self._set_remote_context(context):
+ with self._start_span(handler_call_details, context) as span:
+ context = _OpenTelemetryServicerContext(context, span)
+
+ try:
+ yield from behavior(request_or_iterator, context)
+
+ except Exception as error:
+ # pylint:disable=unidiomatic-typecheck
+ if type(error) != Exception:
+ span.record_exception(error)
+ raise error
| {"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/_server.py b/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/_server.py\n--- a/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/_server.py\n+++ b/instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/_server.py\n@@ -239,6 +239,15 @@\n def telemetry_wrapper(behavior, request_streaming, response_streaming):\n def telemetry_interceptor(request_or_iterator, context):\n \n+ # handle streaming responses specially\n+ if response_streaming:\n+ return self._intercept_server_stream(\n+ behavior,\n+ handler_call_details,\n+ request_or_iterator,\n+ context,\n+ )\n+\n with self._set_remote_context(context):\n with self._start_span(\n handler_call_details, context\n@@ -249,6 +258,7 @@\n # And now we run the actual RPC.\n try:\n return behavior(request_or_iterator, context)\n+\n except Exception as error:\n # Bare exceptions are likely to be gRPC aborts, which\n # we handle in our context wrapper.\n@@ -263,3 +273,23 @@\n return _wrap_rpc_behavior(\n continuation(handler_call_details), telemetry_wrapper\n )\n+\n+ # Handle streaming responses separately - we have to do this\n+ # to return a *new* generator or various upstream things\n+ # get confused, or we'll lose the consistent trace\n+ def _intercept_server_stream(\n+ self, behavior, handler_call_details, request_or_iterator, context\n+ ):\n+\n+ with self._set_remote_context(context):\n+ with self._start_span(handler_call_details, context) as span:\n+ context = _OpenTelemetryServicerContext(context, span)\n+\n+ try:\n+ yield from behavior(request_or_iterator, context)\n+\n+ except Exception as error:\n+ # pylint:disable=unidiomatic-typecheck\n+ if type(error) != Exception:\n+ span.record_exception(error)\n+ raise error\n", "issue": "gRPC server instrumentation creates multiple traces on streaming requests\n**Environment**\r\nCurrent `master` code, basically the sample code in the documentation, testing with a unary request vs. a streaming request.\r\n\r\n**Steps to reproduce**\r\nCreate a simple gRPC servicer with two RPCs, one which returns a single message (the unary response), and one which yields items in a list for a streaming response.\r\n\r\nThe key here is to make an instrumented request within the primary request handler (I'm using a simple HTTP get with the Requests instrumentation), so you get an _additional_ span which should be attached to the same trace.\r\n\r\n**What is the expected behavior?**\r\nA single trace with the main span, and a second child span for the HTTP request.\r\n\r\n**What is the actual behavior?**\r\nTwo separate traces, each containing a single span.\r\n\r\n**Additional context**\r\nThe problem _only_ occurs on streaming requests - I'm sure the reworking I did as part of https://github.com/open-telemetry/opentelemetry-python/pull/1171 is where the problem started, I didn't take into account the streaming case specifically with multiple spans, and naturally, there are no tests for anything streaming, only unary responses.\r\n\r\nSo as part of this, we'll need some useful tests as well. 
I'll see if I can write up my test case as an actual test case.\r\n\r\nAnd again, I've got a vested interest in this working, so I'll have a PR up soon.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint:disable=relative-beyond-top-level\n# pylint:disable=arguments-differ\n# pylint:disable=no-member\n# pylint:disable=signature-differs\n\n\"\"\"\nImplementation of the service-side open-telemetry interceptor.\n\"\"\"\n\nimport logging\nfrom contextlib import contextmanager\n\nimport grpc\n\nfrom opentelemetry import propagators, trace\nfrom opentelemetry.context import attach, detach\nfrom opentelemetry.trace.propagation.textmap import DictGetter\nfrom opentelemetry.trace.status import Status, StatusCode\n\nlogger = logging.getLogger(__name__)\n\n\n# wrap an RPC call\n# see https://github.com/grpc/grpc/issues/18191\ndef _wrap_rpc_behavior(handler, continuation):\n if handler is None:\n return None\n\n if handler.request_streaming and handler.response_streaming:\n behavior_fn = handler.stream_stream\n handler_factory = grpc.stream_stream_rpc_method_handler\n elif handler.request_streaming and not handler.response_streaming:\n behavior_fn = handler.stream_unary\n handler_factory = grpc.stream_unary_rpc_method_handler\n elif not handler.request_streaming and handler.response_streaming:\n behavior_fn = handler.unary_stream\n handler_factory = grpc.unary_stream_rpc_method_handler\n else:\n behavior_fn = handler.unary_unary\n handler_factory = grpc.unary_unary_rpc_method_handler\n\n return handler_factory(\n continuation(\n behavior_fn, handler.request_streaming, handler.response_streaming\n ),\n request_deserializer=handler.request_deserializer,\n response_serializer=handler.response_serializer,\n )\n\n\n# pylint:disable=abstract-method\nclass _OpenTelemetryServicerContext(grpc.ServicerContext):\n def __init__(self, servicer_context, active_span):\n self._servicer_context = servicer_context\n self._active_span = active_span\n self.code = grpc.StatusCode.OK\n self.details = None\n super().__init__()\n\n def is_active(self, *args, **kwargs):\n return self._servicer_context.is_active(*args, **kwargs)\n\n def time_remaining(self, *args, **kwargs):\n return self._servicer_context.time_remaining(*args, **kwargs)\n\n def cancel(self, *args, **kwargs):\n return self._servicer_context.cancel(*args, **kwargs)\n\n def add_callback(self, *args, **kwargs):\n return self._servicer_context.add_callback(*args, **kwargs)\n\n def disable_next_message_compression(self):\n return self._service_context.disable_next_message_compression()\n\n def invocation_metadata(self, *args, **kwargs):\n return self._servicer_context.invocation_metadata(*args, **kwargs)\n\n def peer(self):\n return self._servicer_context.peer()\n\n def peer_identities(self):\n return self._servicer_context.peer_identities()\n\n def peer_identity_key(self):\n return self._servicer_context.peer_identity_key()\n\n def auth_context(self):\n return 
self._servicer_context.auth_context()\n\n def set_compression(self, compression):\n return self._servicer_context.set_compression(compression)\n\n def send_initial_metadata(self, *args, **kwargs):\n return self._servicer_context.send_initial_metadata(*args, **kwargs)\n\n def set_trailing_metadata(self, *args, **kwargs):\n return self._servicer_context.set_trailing_metadata(*args, **kwargs)\n\n def abort(self, code, details):\n self.code = code\n self.details = details\n self._active_span.set_attribute(\"rpc.grpc.status_code\", code.value[0])\n self._active_span.set_status(\n Status(\n status_code=StatusCode.ERROR,\n description=\"{}:{}\".format(code, details),\n )\n )\n return self._servicer_context.abort(code, details)\n\n def abort_with_status(self, status):\n return self._servicer_context.abort_with_status(status)\n\n def set_code(self, code):\n self.code = code\n # use details if we already have it, otherwise the status description\n details = self.details or code.value[1]\n self._active_span.set_attribute(\"rpc.grpc.status_code\", code.value[0])\n if code != grpc.StatusCode.OK:\n self._active_span.set_status(\n Status(\n status_code=StatusCode.ERROR,\n description=\"{}:{}\".format(code, details),\n )\n )\n return self._servicer_context.set_code(code)\n\n def set_details(self, details):\n self.details = details\n if self.code != grpc.StatusCode.OK:\n self._active_span.set_status(\n Status(\n status_code=StatusCode.ERROR,\n description=\"{}:{}\".format(self.code, details),\n )\n )\n return self._servicer_context.set_details(details)\n\n\n# pylint:disable=abstract-method\n# pylint:disable=no-self-use\n# pylint:disable=unused-argument\nclass OpenTelemetryServerInterceptor(grpc.ServerInterceptor):\n \"\"\"\n A gRPC server interceptor, to add OpenTelemetry.\n\n Usage::\n\n tracer = some OpenTelemetry tracer\n\n interceptors = [\n OpenTelemetryServerInterceptor(tracer),\n ]\n\n server = grpc.server(\n futures.ThreadPoolExecutor(max_workers=concurrency),\n interceptors = interceptors)\n\n \"\"\"\n\n def __init__(self, tracer):\n self._tracer = tracer\n self._carrier_getter = DictGetter()\n\n @contextmanager\n def _set_remote_context(self, servicer_context):\n metadata = servicer_context.invocation_metadata()\n if metadata:\n md_dict = {md.key: md.value for md in metadata}\n ctx = propagators.extract(self._carrier_getter, md_dict)\n token = attach(ctx)\n try:\n yield\n finally:\n detach(token)\n else:\n yield\n\n def _start_span(self, handler_call_details, context):\n\n # standard attributes\n attributes = {\n \"rpc.system\": \"grpc\",\n \"rpc.grpc.status_code\": grpc.StatusCode.OK.value[0],\n }\n\n # if we have details about the call, split into service and method\n if handler_call_details.method:\n service, method = handler_call_details.method.lstrip(\"/\").split(\n \"/\", 1\n )\n attributes.update({\"rpc.method\": method, \"rpc.service\": service})\n\n # add some attributes from the metadata\n metadata = dict(context.invocation_metadata())\n if \"user-agent\" in metadata:\n attributes[\"rpc.user_agent\"] = metadata[\"user-agent\"]\n\n # Split up the peer to keep with how other telemetry sources\n # do it. 
This looks like:\n # * ipv6:[::1]:57284\n # * ipv4:127.0.0.1:57284\n # * ipv4:10.2.1.1:57284,127.0.0.1:57284\n #\n try:\n ip, port = (\n context.peer().split(\",\")[0].split(\":\", 1)[1].rsplit(\":\", 1)\n )\n attributes.update({\"net.peer.ip\": ip, \"net.peer.port\": port})\n\n # other telemetry sources add this, so we will too\n if ip in (\"[::1]\", \"127.0.0.1\"):\n attributes[\"net.peer.name\"] = \"localhost\"\n\n except IndexError:\n logger.warning(\"Failed to parse peer address '%s'\", context.peer())\n\n return self._tracer.start_as_current_span(\n name=handler_call_details.method,\n kind=trace.SpanKind.SERVER,\n attributes=attributes,\n )\n\n def intercept_service(self, continuation, handler_call_details):\n def telemetry_wrapper(behavior, request_streaming, response_streaming):\n def telemetry_interceptor(request_or_iterator, context):\n\n with self._set_remote_context(context):\n with self._start_span(\n handler_call_details, context\n ) as span:\n # wrap the context\n context = _OpenTelemetryServicerContext(context, span)\n\n # And now we run the actual RPC.\n try:\n return behavior(request_or_iterator, context)\n except Exception as error:\n # Bare exceptions are likely to be gRPC aborts, which\n # we handle in our context wrapper.\n # Here, we're interested in uncaught exceptions.\n # pylint:disable=unidiomatic-typecheck\n if type(error) != Exception:\n span.record_exception(error)\n raise error\n\n return telemetry_interceptor\n\n return _wrap_rpc_behavior(\n continuation(handler_call_details), telemetry_wrapper\n )\n", "path": "instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/_server.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# pylint:disable=relative-beyond-top-level\n# pylint:disable=arguments-differ\n# pylint:disable=no-member\n# pylint:disable=signature-differs\n\n\"\"\"\nImplementation of the service-side open-telemetry interceptor.\n\"\"\"\n\nimport logging\nfrom contextlib import contextmanager\n\nimport grpc\n\nfrom opentelemetry import propagators, trace\nfrom opentelemetry.context import attach, detach\nfrom opentelemetry.trace.propagation.textmap import DictGetter\nfrom opentelemetry.trace.status import Status, StatusCode\n\nlogger = logging.getLogger(__name__)\n\n\n# wrap an RPC call\n# see https://github.com/grpc/grpc/issues/18191\ndef _wrap_rpc_behavior(handler, continuation):\n if handler is None:\n return None\n\n if handler.request_streaming and handler.response_streaming:\n behavior_fn = handler.stream_stream\n handler_factory = grpc.stream_stream_rpc_method_handler\n elif handler.request_streaming and not handler.response_streaming:\n behavior_fn = handler.stream_unary\n handler_factory = grpc.stream_unary_rpc_method_handler\n elif not handler.request_streaming and handler.response_streaming:\n behavior_fn = handler.unary_stream\n handler_factory = grpc.unary_stream_rpc_method_handler\n else:\n behavior_fn = handler.unary_unary\n 
handler_factory = grpc.unary_unary_rpc_method_handler\n\n return handler_factory(\n continuation(\n behavior_fn, handler.request_streaming, handler.response_streaming\n ),\n request_deserializer=handler.request_deserializer,\n response_serializer=handler.response_serializer,\n )\n\n\n# pylint:disable=abstract-method\nclass _OpenTelemetryServicerContext(grpc.ServicerContext):\n def __init__(self, servicer_context, active_span):\n self._servicer_context = servicer_context\n self._active_span = active_span\n self.code = grpc.StatusCode.OK\n self.details = None\n super().__init__()\n\n def is_active(self, *args, **kwargs):\n return self._servicer_context.is_active(*args, **kwargs)\n\n def time_remaining(self, *args, **kwargs):\n return self._servicer_context.time_remaining(*args, **kwargs)\n\n def cancel(self, *args, **kwargs):\n return self._servicer_context.cancel(*args, **kwargs)\n\n def add_callback(self, *args, **kwargs):\n return self._servicer_context.add_callback(*args, **kwargs)\n\n def disable_next_message_compression(self):\n return self._service_context.disable_next_message_compression()\n\n def invocation_metadata(self, *args, **kwargs):\n return self._servicer_context.invocation_metadata(*args, **kwargs)\n\n def peer(self):\n return self._servicer_context.peer()\n\n def peer_identities(self):\n return self._servicer_context.peer_identities()\n\n def peer_identity_key(self):\n return self._servicer_context.peer_identity_key()\n\n def auth_context(self):\n return self._servicer_context.auth_context()\n\n def set_compression(self, compression):\n return self._servicer_context.set_compression(compression)\n\n def send_initial_metadata(self, *args, **kwargs):\n return self._servicer_context.send_initial_metadata(*args, **kwargs)\n\n def set_trailing_metadata(self, *args, **kwargs):\n return self._servicer_context.set_trailing_metadata(*args, **kwargs)\n\n def abort(self, code, details):\n self.code = code\n self.details = details\n self._active_span.set_attribute(\"rpc.grpc.status_code\", code.value[0])\n self._active_span.set_status(\n Status(\n status_code=StatusCode.ERROR,\n description=\"{}:{}\".format(code, details),\n )\n )\n return self._servicer_context.abort(code, details)\n\n def abort_with_status(self, status):\n return self._servicer_context.abort_with_status(status)\n\n def set_code(self, code):\n self.code = code\n # use details if we already have it, otherwise the status description\n details = self.details or code.value[1]\n self._active_span.set_attribute(\"rpc.grpc.status_code\", code.value[0])\n if code != grpc.StatusCode.OK:\n self._active_span.set_status(\n Status(\n status_code=StatusCode.ERROR,\n description=\"{}:{}\".format(code, details),\n )\n )\n return self._servicer_context.set_code(code)\n\n def set_details(self, details):\n self.details = details\n if self.code != grpc.StatusCode.OK:\n self._active_span.set_status(\n Status(\n status_code=StatusCode.ERROR,\n description=\"{}:{}\".format(self.code, details),\n )\n )\n return self._servicer_context.set_details(details)\n\n\n# pylint:disable=abstract-method\n# pylint:disable=no-self-use\n# pylint:disable=unused-argument\nclass OpenTelemetryServerInterceptor(grpc.ServerInterceptor):\n \"\"\"\n A gRPC server interceptor, to add OpenTelemetry.\n\n Usage::\n\n tracer = some OpenTelemetry tracer\n\n interceptors = [\n OpenTelemetryServerInterceptor(tracer),\n ]\n\n server = grpc.server(\n futures.ThreadPoolExecutor(max_workers=concurrency),\n interceptors = interceptors)\n\n \"\"\"\n\n def __init__(self, 
tracer):\n self._tracer = tracer\n self._carrier_getter = DictGetter()\n\n @contextmanager\n def _set_remote_context(self, servicer_context):\n metadata = servicer_context.invocation_metadata()\n if metadata:\n md_dict = {md.key: md.value for md in metadata}\n ctx = propagators.extract(self._carrier_getter, md_dict)\n token = attach(ctx)\n try:\n yield\n finally:\n detach(token)\n else:\n yield\n\n def _start_span(self, handler_call_details, context):\n\n # standard attributes\n attributes = {\n \"rpc.system\": \"grpc\",\n \"rpc.grpc.status_code\": grpc.StatusCode.OK.value[0],\n }\n\n # if we have details about the call, split into service and method\n if handler_call_details.method:\n service, method = handler_call_details.method.lstrip(\"/\").split(\n \"/\", 1\n )\n attributes.update({\"rpc.method\": method, \"rpc.service\": service})\n\n # add some attributes from the metadata\n metadata = dict(context.invocation_metadata())\n if \"user-agent\" in metadata:\n attributes[\"rpc.user_agent\"] = metadata[\"user-agent\"]\n\n # Split up the peer to keep with how other telemetry sources\n # do it. This looks like:\n # * ipv6:[::1]:57284\n # * ipv4:127.0.0.1:57284\n # * ipv4:10.2.1.1:57284,127.0.0.1:57284\n #\n try:\n ip, port = (\n context.peer().split(\",\")[0].split(\":\", 1)[1].rsplit(\":\", 1)\n )\n attributes.update({\"net.peer.ip\": ip, \"net.peer.port\": port})\n\n # other telemetry sources add this, so we will too\n if ip in (\"[::1]\", \"127.0.0.1\"):\n attributes[\"net.peer.name\"] = \"localhost\"\n\n except IndexError:\n logger.warning(\"Failed to parse peer address '%s'\", context.peer())\n\n return self._tracer.start_as_current_span(\n name=handler_call_details.method,\n kind=trace.SpanKind.SERVER,\n attributes=attributes,\n )\n\n def intercept_service(self, continuation, handler_call_details):\n def telemetry_wrapper(behavior, request_streaming, response_streaming):\n def telemetry_interceptor(request_or_iterator, context):\n\n # handle streaming responses specially\n if response_streaming:\n return self._intercept_server_stream(\n behavior,\n handler_call_details,\n request_or_iterator,\n context,\n )\n\n with self._set_remote_context(context):\n with self._start_span(\n handler_call_details, context\n ) as span:\n # wrap the context\n context = _OpenTelemetryServicerContext(context, span)\n\n # And now we run the actual RPC.\n try:\n return behavior(request_or_iterator, context)\n\n except Exception as error:\n # Bare exceptions are likely to be gRPC aborts, which\n # we handle in our context wrapper.\n # Here, we're interested in uncaught exceptions.\n # pylint:disable=unidiomatic-typecheck\n if type(error) != Exception:\n span.record_exception(error)\n raise error\n\n return telemetry_interceptor\n\n return _wrap_rpc_behavior(\n continuation(handler_call_details), telemetry_wrapper\n )\n\n # Handle streaming responses separately - we have to do this\n # to return a *new* generator or various upstream things\n # get confused, or we'll lose the consistent trace\n def _intercept_server_stream(\n self, behavior, handler_call_details, request_or_iterator, context\n ):\n\n with self._set_remote_context(context):\n with self._start_span(handler_call_details, context) as span:\n context = _OpenTelemetryServicerContext(context, span)\n\n try:\n yield from behavior(request_or_iterator, context)\n\n except Exception as error:\n # pylint:disable=unidiomatic-typecheck\n if type(error) != Exception:\n span.record_exception(error)\n raise error\n", "path": 
"instrumentation/opentelemetry-instrumentation-grpc/src/opentelemetry/instrumentation/grpc/_server.py"}]} | 3,237 | 485 |
gh_patches_debug_349 | rasdani/github-patches | git_diff | google__turbinia-1070 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing sys module import in logger.py
Logger module is missing an import statement for 'sys'
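The reference only sits in the error branch of `setup()` (abbreviated from the file below), so the bug surfaces only when the config fails to load:

```python
try:
    config.LoadConfig()
except TurbiniaException as exception:
    print(
        'Could not load config file ({0!s}).\n{1:s}'.format(
            exception, config.CONFIG_MSG))
    sys.exit(1)  # NameError without an "import sys" at the top of logger.py
```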
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `turbinia/config/logger.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright 2017 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Sets up logging."""
16
17 from __future__ import unicode_literals
18 import logging
19
20 import warnings
21 import logging.handlers
22 import os
23
24 from turbinia import config
25 from turbinia import TurbiniaException
26
27 # Environment variable to look for node name in
28 ENVNODENAME = 'NODE_NAME'
29
30
31 def setup(need_file_handler=True, need_stream_handler=True, log_file_path=None):
32 """Set up logging parameters.
33
34 This will also set the root logger, which is the default logger when a named
35 logger is not specified. We currently use 'turbinia' as the named logger,
36 however some external modules that are called by Turbinia can use the root
37 logger, so we want to be able to optionally configure that as well.
38 """
39 # Remove known warning about credentials
40 warnings.filterwarnings(
41 'ignore', 'Your application has authenticated using end user credentials')
42
43 logger = logging.getLogger('turbinia')
44 # Eliminate double logging from root logger
45 logger.propagate = False
46
47 # We only need a handler if one of that type doesn't exist already
48 if logger.handlers:
49 for handler in logger.handlers:
50 # Want to do strict type-checking here because is instance will include
51 # subclasses and so won't distinguish between StreamHandlers and
52 # FileHandlers.
53 # pylint: disable=unidiomatic-typecheck
54 if type(handler) == logging.FileHandler:
55 need_file_handler = False
56
57 # pylint: disable=unidiomatic-typecheck
58 if type(handler) == logging.StreamHandler:
59 need_stream_handler = False
60
61 if need_file_handler:
62 try:
63 config.LoadConfig()
64 except TurbiniaException as exception:
65 print(
66 'Could not load config file ({0!s}).\n{1:s}'.format(
67 exception, config.CONFIG_MSG))
68 sys.exit(1)
69
70 # Check if a user specified log path was provided else create default path
71 if not log_file_path:
72 log_name = os.uname().nodename
73 # Check if NODE_NAME available for GKE setups
74 if ENVNODENAME in os.environ:
75 log_name = log_name + '.{0!s}'.format(os.environ[ENVNODENAME])
76 log_file_path = os.path.join(config.LOG_DIR, log_name) + '.log'
77
78 file_handler = logging.FileHandler(log_file_path)
79 formatter = logging.Formatter('%(asctime)s:%(levelname)s:%(message)s')
80 file_handler.setFormatter(formatter)
81 file_handler.setLevel(logging.DEBUG)
82 logger.addHandler(file_handler)
83
84 console_handler = logging.StreamHandler()
85 formatter = logging.Formatter(
86 '%(asctime)s [%(levelname)s] %(message)s', "%Y-%m-%d %H:%M:%S")
87 console_handler.setFormatter(formatter)
88 if need_stream_handler:
89 logger.addHandler(console_handler)
90
91 # Configure the root logger to use exactly our handlers because other modules
92 # like PSQ use this, and we want to see log messages from it when executing
93 # from CLI.
94 root_log = logging.getLogger()
95 for handler in root_log.handlers:
96 root_log.removeHandler(handler)
97 root_log.addHandler(console_handler)
98 if need_file_handler:
99 root_log.addHandler(file_handler)
100
101 # Set filelock logging to ERROR due to log spam
102 logging.getLogger("filelock").setLevel(logging.ERROR)
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/turbinia/config/logger.py b/turbinia/config/logger.py
--- a/turbinia/config/logger.py
+++ b/turbinia/config/logger.py
@@ -20,6 +20,7 @@
import warnings
import logging.handlers
import os
+import sys
from turbinia import config
from turbinia import TurbiniaException
| {"golden_diff": "diff --git a/turbinia/config/logger.py b/turbinia/config/logger.py\n--- a/turbinia/config/logger.py\n+++ b/turbinia/config/logger.py\n@@ -20,6 +20,7 @@\n import warnings\n import logging.handlers\n import os\n+import sys\n \n from turbinia import config\n from turbinia import TurbiniaException\n", "issue": "Missing sys module import in logger.py\nLogger module is missing an import statement for 'sys'\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2017 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Sets up logging.\"\"\"\n\nfrom __future__ import unicode_literals\nimport logging\n\nimport warnings\nimport logging.handlers\nimport os\n\nfrom turbinia import config\nfrom turbinia import TurbiniaException\n\n# Environment variable to look for node name in\nENVNODENAME = 'NODE_NAME'\n\n\ndef setup(need_file_handler=True, need_stream_handler=True, log_file_path=None):\n \"\"\"Set up logging parameters.\n\n This will also set the root logger, which is the default logger when a named\n logger is not specified. We currently use 'turbinia' as the named logger,\n however some external modules that are called by Turbinia can use the root\n logger, so we want to be able to optionally configure that as well.\n \"\"\"\n # Remove known warning about credentials\n warnings.filterwarnings(\n 'ignore', 'Your application has authenticated using end user credentials')\n\n logger = logging.getLogger('turbinia')\n # Eliminate double logging from root logger\n logger.propagate = False\n\n # We only need a handler if one of that type doesn't exist already\n if logger.handlers:\n for handler in logger.handlers:\n # Want to do strict type-checking here because is instance will include\n # subclasses and so won't distinguish between StreamHandlers and\n # FileHandlers.\n # pylint: disable=unidiomatic-typecheck\n if type(handler) == logging.FileHandler:\n need_file_handler = False\n\n # pylint: disable=unidiomatic-typecheck\n if type(handler) == logging.StreamHandler:\n need_stream_handler = False\n\n if need_file_handler:\n try:\n config.LoadConfig()\n except TurbiniaException as exception:\n print(\n 'Could not load config file ({0!s}).\\n{1:s}'.format(\n exception, config.CONFIG_MSG))\n sys.exit(1)\n\n # Check if a user specified log path was provided else create default path\n if not log_file_path:\n log_name = os.uname().nodename\n # Check if NODE_NAME available for GKE setups\n if ENVNODENAME in os.environ:\n log_name = log_name + '.{0!s}'.format(os.environ[ENVNODENAME])\n log_file_path = os.path.join(config.LOG_DIR, log_name) + '.log'\n\n file_handler = logging.FileHandler(log_file_path)\n formatter = logging.Formatter('%(asctime)s:%(levelname)s:%(message)s')\n file_handler.setFormatter(formatter)\n file_handler.setLevel(logging.DEBUG)\n logger.addHandler(file_handler)\n\n console_handler = logging.StreamHandler()\n formatter = logging.Formatter(\n '%(asctime)s [%(levelname)s] %(message)s', \"%Y-%m-%d %H:%M:%S\")\n 
console_handler.setFormatter(formatter)\n if need_stream_handler:\n logger.addHandler(console_handler)\n\n # Configure the root logger to use exactly our handlers because other modules\n # like PSQ use this, and we want to see log messages from it when executing\n # from CLI.\n root_log = logging.getLogger()\n for handler in root_log.handlers:\n root_log.removeHandler(handler)\n root_log.addHandler(console_handler)\n if need_file_handler:\n root_log.addHandler(file_handler)\n\n # Set filelock logging to ERROR due to log spam\n logging.getLogger(\"filelock\").setLevel(logging.ERROR)\n", "path": "turbinia/config/logger.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2017 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Sets up logging.\"\"\"\n\nfrom __future__ import unicode_literals\nimport logging\n\nimport warnings\nimport logging.handlers\nimport os\nimport sys\n\nfrom turbinia import config\nfrom turbinia import TurbiniaException\n\n# Environment variable to look for node name in\nENVNODENAME = 'NODE_NAME'\n\n\ndef setup(need_file_handler=True, need_stream_handler=True, log_file_path=None):\n \"\"\"Set up logging parameters.\n\n This will also set the root logger, which is the default logger when a named\n logger is not specified. 
We currently use 'turbinia' as the named logger,\n however some external modules that are called by Turbinia can use the root\n logger, so we want to be able to optionally configure that as well.\n \"\"\"\n # Remove known warning about credentials\n warnings.filterwarnings(\n 'ignore', 'Your application has authenticated using end user credentials')\n\n logger = logging.getLogger('turbinia')\n # Eliminate double logging from root logger\n logger.propagate = False\n\n # We only need a handler if one of that type doesn't exist already\n if logger.handlers:\n for handler in logger.handlers:\n # Want to do strict type-checking here because is instance will include\n # subclasses and so won't distinguish between StreamHandlers and\n # FileHandlers.\n # pylint: disable=unidiomatic-typecheck\n if type(handler) == logging.FileHandler:\n need_file_handler = False\n\n # pylint: disable=unidiomatic-typecheck\n if type(handler) == logging.StreamHandler:\n need_stream_handler = False\n\n if need_file_handler:\n try:\n config.LoadConfig()\n except TurbiniaException as exception:\n print(\n 'Could not load config file ({0!s}).\\n{1:s}'.format(\n exception, config.CONFIG_MSG))\n sys.exit(1)\n\n # Check if a user specified log path was provided else create default path\n if not log_file_path:\n log_name = os.uname().nodename\n # Check if NODE_NAME available for GKE setups\n if ENVNODENAME in os.environ:\n log_name = log_name + '.{0!s}'.format(os.environ[ENVNODENAME])\n log_file_path = os.path.join(config.LOG_DIR, log_name) + '.log'\n\n file_handler = logging.FileHandler(log_file_path)\n formatter = logging.Formatter('%(asctime)s:%(levelname)s:%(message)s')\n file_handler.setFormatter(formatter)\n file_handler.setLevel(logging.DEBUG)\n logger.addHandler(file_handler)\n\n console_handler = logging.StreamHandler()\n formatter = logging.Formatter(\n '%(asctime)s [%(levelname)s] %(message)s', \"%Y-%m-%d %H:%M:%S\")\n console_handler.setFormatter(formatter)\n if need_stream_handler:\n logger.addHandler(console_handler)\n\n # Configure the root logger to use exactly our handlers because other modules\n # like PSQ use this, and we want to see log messages from it when executing\n # from CLI.\n root_log = logging.getLogger()\n for handler in root_log.handlers:\n root_log.removeHandler(handler)\n root_log.addHandler(console_handler)\n if need_file_handler:\n root_log.addHandler(file_handler)\n\n # Set filelock logging to ERROR due to log spam\n logging.getLogger(\"filelock\").setLevel(logging.ERROR)\n", "path": "turbinia/config/logger.py"}]} | 1,341 | 83 |
gh_patches_debug_26789 | rasdani/github-patches | git_diff | cloud-custodian__cloud-custodian-5796 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
aws.elasticsearch Error Scanning More Than 5 domains
**Describe the bug**
When running any elasticsearch policy on an account region with more than 5 elasticsearch domains, the policy now bombs out with the error - ```error:An error occurred (ValidationException) when calling the DescribeElasticsearchDomains operation: Please provide a maximum of 5 Elasticsearch domain names to describe.```
**To Reproduce**
Create 6 ES domains and run an elasticsearch c7n policy; the error will occur
**Expected behavior**
It should chunk the calls into batches of at most 5 domains
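A sketch of the kind of batching that would satisfy the API limit, applied to the `describe_elasticsearch_domains` call in `DescribeDomain.augment` shown below (`chunks` is the batching helper c7n ships in `c7n.utils`; a plain slice loop would work the same way):

```python
from c7n.utils import chunks  # assumed helper that yields fixed-size batches

def _augment(resource_set):
    resources = []
    for batch in chunks(resource_set, 5):  # the API accepts at most 5 names
        resources.extend(self.manager.retry(
            client.describe_elasticsearch_domains,
            DomainNames=batch)['DomainStatusList'])
    return resources
```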
**Background (please complete the following information):**
- OS: Ubuntu v20
- Python Version: 3.8
- Custodian Version: 0.9.2.0
- Tool Version: [if applicable]
- Cloud Provider: aws
- Policy: any policy which queries ES
- Traceback:
```
[ERROR] 2020-05-22T14:51:25.978Z 9ef7929b-b494-434e-9f9f-dfdfdfdfdfdfd Error while executing policy
Traceback (most recent call last):
File "/var/task/c7n/policy.py", line 291, in run
resources = self.policy.resource_manager.resources()
File "/var/task/c7n/query.py", line 466, in resources
resources = self.augment(resources)
File "/var/task/c7n/query.py", line 521, in augment
return self.source.augment(resources)
File "/var/task/c7n/resources/elasticsearch.py", line 48, in augment
return _augment(domains)
File "/var/task/c7n/resources/elasticsearch.py", line 39, in _augment
resources = self.manager.retry(
File "/var/task/c7n/utils.py", line 373, in _retry
return func(*args, **kw)
File "/var/task/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/task/botocore/client.py", line 635, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the DescribeElasticsearchDomains operation: Please provide a maximum of 5 Elasticsearch domain names to describe.
```
- `custodian version --debug` output
**Additional context**
Seems to have been introduced in 0.9.2.0.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `c7n/resources/elasticsearch.py`
Content:
```
1 # Copyright 2016-2017 Capital One Services, LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import jmespath
15
16 from c7n.actions import Action, ModifyVpcSecurityGroupsAction
17 from c7n.filters import MetricsFilter
18 from c7n.filters.vpc import SecurityGroupFilter, SubnetFilter, VpcFilter
19 from c7n.manager import resources
20 from c7n.query import ConfigSource, DescribeSource, QueryResourceManager, TypeInfo
21 from c7n.utils import local_session, type_schema
22 from c7n.tags import Tag, RemoveTag, TagActionFilter, TagDelayedAction
23
24 from .securityhub import PostFinding
25
26
27 class DescribeDomain(DescribeSource):
28
29 def get_resources(self, resource_ids):
30 client = local_session(self.manager.session_factory).client('es')
31 return client.describe_elasticsearch_domains(
32 DomainNames=resource_ids)['DomainStatusList']
33
34 def augment(self, domains):
35 client = local_session(self.manager.session_factory).client('es')
36 model = self.manager.get_model()
37
38 def _augment(resource_set):
39 resources = self.manager.retry(
40 client.describe_elasticsearch_domains,
41 DomainNames=resource_set)['DomainStatusList']
42 for r in resources:
43 rarn = self.manager.generate_arn(r[model.id])
44 r['Tags'] = self.manager.retry(
45 client.list_tags, ARN=rarn).get('TagList', [])
46 return resources
47
48 return _augment(domains)
49
50
51 @resources.register('elasticsearch')
52 class ElasticSearchDomain(QueryResourceManager):
53
54 class resource_type(TypeInfo):
55 service = 'es'
56 arn = 'ARN'
57 arn_type = 'domain'
58 enum_spec = (
59 'list_domain_names', 'DomainNames[].DomainName', None)
60 id = 'DomainName'
61 name = 'Name'
62 dimension = "DomainName"
63 cfn_type = config_type = 'AWS::Elasticsearch::Domain'
64
65 source_mapping = {
66 'describe': DescribeDomain,
67 'config': ConfigSource
68 }
69
70
71 ElasticSearchDomain.filter_registry.register('marked-for-op', TagActionFilter)
72
73
74 @ElasticSearchDomain.filter_registry.register('subnet')
75 class Subnet(SubnetFilter):
76
77 RelatedIdsExpression = "VPCOptions.SubnetIds[]"
78
79
80 @ElasticSearchDomain.filter_registry.register('security-group')
81 class SecurityGroup(SecurityGroupFilter):
82
83 RelatedIdsExpression = "VPCOptions.SecurityGroupIds[]"
84
85
86 @ElasticSearchDomain.filter_registry.register('vpc')
87 class Vpc(VpcFilter):
88
89 RelatedIdsExpression = "VPCOptions.VPCId"
90
91
92 @ElasticSearchDomain.filter_registry.register('metrics')
93 class Metrics(MetricsFilter):
94
95 def get_dimensions(self, resource):
96 return [{'Name': 'ClientId',
97 'Value': self.manager.account_id},
98 {'Name': 'DomainName',
99 'Value': resource['DomainName']}]
100
101
102 @ElasticSearchDomain.action_registry.register('post-finding')
103 class ElasticSearchPostFinding(PostFinding):
104
105 resource_type = 'AwsElasticsearchDomain'
106
107 def format_resource(self, r):
108 envelope, payload = self.format_envelope(r)
109 payload.update(self.filter_empty({
110 'AccessPolicies': r.get('AccessPolicies'),
111 'DomainId': r['DomainId'],
112 'DomainName': r['DomainName'],
113 'Endpoint': r.get('Endpoint'),
114 'Endpoints': r.get('Endpoints'),
115 'DomainEndpointOptions': self.filter_empty({
116 'EnforceHTTPS': jmespath.search(
117 'DomainEndpointOptions.EnforceHTTPS', r),
118 'TLSSecurityPolicy': jmespath.search(
119 'DomainEndpointOptions.TLSSecurityPolicy', r)
120 }),
121 'ElasticsearchVersion': r['ElasticsearchVersion'],
122 'EncryptionAtRestOptions': self.filter_empty({
123 'Enabled': jmespath.search(
124 'EncryptionAtRestOptions.Enabled', r),
125 'KmsKeyId': jmespath.search(
126 'EncryptionAtRestOptions.KmsKeyId', r)
127 }),
128 'NodeToNodeEncryptionOptions': self.filter_empty({
129 'Enabled': jmespath.search(
130 'NodeToNodeEncryptionOptions.Enabled', r)
131 }),
132 'VPCOptions': self.filter_empty({
133 'AvailabilityZones': jmespath.search(
134 'VPCOptions.AvailabilityZones', r),
135 'SecurityGroupIds': jmespath.search(
136 'VPCOptions.SecurityGroupIds', r),
137 'SubnetIds': jmespath.search('VPCOptions.SubnetIds', r),
138 'VPCId': jmespath.search('VPCOptions.VPCId', r)
139 })
140 }))
141 return envelope
142
143
144 @ElasticSearchDomain.action_registry.register('modify-security-groups')
145 class ElasticSearchModifySG(ModifyVpcSecurityGroupsAction):
146 """Modify security groups on an Elasticsearch domain"""
147
148 permissions = ('es:UpdateElasticsearchDomainConfig',)
149
150 def process(self, domains):
151 groups = super(ElasticSearchModifySG, self).get_groups(domains)
152 client = local_session(self.manager.session_factory).client('es')
153
154 for dx, d in enumerate(domains):
155 client.update_elasticsearch_domain_config(
156 DomainName=d['DomainName'],
157 VPCOptions={
158 'SecurityGroupIds': groups[dx]})
159
160
161 @ElasticSearchDomain.action_registry.register('delete')
162 class Delete(Action):
163
164 schema = type_schema('delete')
165 permissions = ('es:DeleteElasticsearchDomain',)
166
167 def process(self, resources):
168 client = local_session(self.manager.session_factory).client('es')
169 for r in resources:
170 client.delete_elasticsearch_domain(DomainName=r['DomainName'])
171
172
173 @ElasticSearchDomain.action_registry.register('tag')
174 class ElasticSearchAddTag(Tag):
175 """Action to create tag(s) on an existing elasticsearch domain
176
177 :example:
178
179 .. code-block:: yaml
180
181 policies:
182 - name: es-add-tag
183 resource: elasticsearch
184 filters:
185 - "tag:DesiredTag": absent
186 actions:
187 - type: tag
188 key: DesiredTag
189 value: DesiredValue
190 """
191 permissions = ('es:AddTags',)
192
193 def process_resource_set(self, client, domains, tags):
194 for d in domains:
195 try:
196 client.add_tags(ARN=d['ARN'], TagList=tags)
197 except client.exceptions.ResourceNotFoundExecption:
198 continue
199
200
201 @ElasticSearchDomain.action_registry.register('remove-tag')
202 class ElasticSearchRemoveTag(RemoveTag):
203 """Removes tag(s) on an existing elasticsearch domain
204
205 :example:
206
207 .. code-block:: yaml
208
209 policies:
210 - name: es-remove-tag
211 resource: elasticsearch
212 filters:
213 - "tag:ExpiredTag": present
214 actions:
215 - type: remove-tag
216 tags: ['ExpiredTag']
217 """
218 permissions = ('es:RemoveTags',)
219
220 def process_resource_set(self, client, domains, tags):
221 for d in domains:
222 try:
223 client.remove_tags(ARN=d['ARN'], TagKeys=tags)
224 except client.exceptions.ResourceNotFoundExecption:
225 continue
226
227
228 @ElasticSearchDomain.action_registry.register('mark-for-op')
229 class ElasticSearchMarkForOp(TagDelayedAction):
230 """Tag an elasticsearch domain for action later
231
232 :example:
233
234 .. code-block:: yaml
235
236 policies:
237 - name: es-delete-missing
238 resource: elasticsearch
239 filters:
240 - "tag:DesiredTag": absent
241 actions:
242 - type: mark-for-op
243 days: 7
244 op: delete
245 tag: c7n_es_delete
246 """
247
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/c7n/resources/elasticsearch.py b/c7n/resources/elasticsearch.py
--- a/c7n/resources/elasticsearch.py
+++ b/c7n/resources/elasticsearch.py
@@ -18,7 +18,7 @@
from c7n.filters.vpc import SecurityGroupFilter, SubnetFilter, VpcFilter
from c7n.manager import resources
from c7n.query import ConfigSource, DescribeSource, QueryResourceManager, TypeInfo
-from c7n.utils import local_session, type_schema
+from c7n.utils import chunks, local_session, type_schema
from c7n.tags import Tag, RemoveTag, TagActionFilter, TagDelayedAction
from .securityhub import PostFinding
@@ -34,6 +34,7 @@
def augment(self, domains):
client = local_session(self.manager.session_factory).client('es')
model = self.manager.get_model()
+ results = []
def _augment(resource_set):
resources = self.manager.retry(
@@ -45,7 +46,10 @@
client.list_tags, ARN=rarn).get('TagList', [])
return resources
- return _augment(domains)
+ for resource_set in chunks(domains, 5):
+ results.extend(_augment(resource_set))
+
+ return results
@resources.register('elasticsearch')
| {"golden_diff": "diff --git a/c7n/resources/elasticsearch.py b/c7n/resources/elasticsearch.py\n--- a/c7n/resources/elasticsearch.py\n+++ b/c7n/resources/elasticsearch.py\n@@ -18,7 +18,7 @@\n from c7n.filters.vpc import SecurityGroupFilter, SubnetFilter, VpcFilter\n from c7n.manager import resources\n from c7n.query import ConfigSource, DescribeSource, QueryResourceManager, TypeInfo\n-from c7n.utils import local_session, type_schema\n+from c7n.utils import chunks, local_session, type_schema\n from c7n.tags import Tag, RemoveTag, TagActionFilter, TagDelayedAction\n \n from .securityhub import PostFinding\n@@ -34,6 +34,7 @@\n def augment(self, domains):\n client = local_session(self.manager.session_factory).client('es')\n model = self.manager.get_model()\n+ results = []\n \n def _augment(resource_set):\n resources = self.manager.retry(\n@@ -45,7 +46,10 @@\n client.list_tags, ARN=rarn).get('TagList', [])\n return resources\n \n- return _augment(domains)\n+ for resource_set in chunks(domains, 5):\n+ results.extend(_augment(resource_set))\n+\n+ return results\n \n \n @resources.register('elasticsearch')\n", "issue": "aws.elasticsearch Error Scanning More Than 5 domains\n**Describe the bug**\r\nWhen running any elasticsearch policy on an account region with more than 5 elasticsearch domains the policy now bombs out with the error - ```error:An error occurred (ValidationException) when calling the DescribeElasticsearchDomains operation: Please provide a maximum of 5 Elasticsearch domain names to describe.```\r\n\r\n**To Reproduce**\r\nCreate 6 es domains and run an elasticsearch c7n policy, error will occur\r\n\r\n**Expected behavior**\r\nIt should chunk the calls into domains of 5 or less\r\n\r\n\r\n**Background (please complete the following information):**\r\n - OS: Ubuntu v20\r\n - Python Version: 3.8\r\n - Custodian Version: 0.9.2.0\r\n - Tool Version: [if applicable]\r\n - Cloud Provider: aws\r\n - Policy: any policy which queries ES\r\n - Traceback: \r\n```\r\n[ERROR]\t2020-05-22T14:51:25.978Z\t9ef7929b-b494-434e-9f9f-dfdfdfdfdfdfd Error while executing policy\r\nTraceback (most recent call last):\r\n File \"/var/task/c7n/policy.py\", line 291, in run\r\n resources = self.policy.resource_manager.resources()\r\n File \"/var/task/c7n/query.py\", line 466, in resources\r\n resources = self.augment(resources)\r\n File \"/var/task/c7n/query.py\", line 521, in augment\r\n return self.source.augment(resources)\r\n File \"/var/task/c7n/resources/elasticsearch.py\", line 48, in augment\r\n return _augment(domains)\r\n File \"/var/task/c7n/resources/elasticsearch.py\", line 39, in _augment\r\n resources = self.manager.retry(\r\n File \"/var/task/c7n/utils.py\", line 373, in _retry\r\n return func(*args, **kw)\r\n File \"/var/task/botocore/client.py\", line 316, in _api_call\r\n return self._make_api_call(operation_name, kwargs)\r\n File \"/var/task/botocore/client.py\", line 635, in _make_api_call\r\n raise error_class(parsed_response, operation_name)\r\nbotocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the DescribeElasticsearchDomains operation: Please provide a maximum of 5 Elasticsearch domain names to describe.\r\n```\r\n - `custodian version --debug` output\r\n\r\n**Additional context**\r\nSeems to be introduced with 0.9.2.0\r\n\n", "before_files": [{"content": "# Copyright 2016-2017 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the 
License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport jmespath\n\nfrom c7n.actions import Action, ModifyVpcSecurityGroupsAction\nfrom c7n.filters import MetricsFilter\nfrom c7n.filters.vpc import SecurityGroupFilter, SubnetFilter, VpcFilter\nfrom c7n.manager import resources\nfrom c7n.query import ConfigSource, DescribeSource, QueryResourceManager, TypeInfo\nfrom c7n.utils import local_session, type_schema\nfrom c7n.tags import Tag, RemoveTag, TagActionFilter, TagDelayedAction\n\nfrom .securityhub import PostFinding\n\n\nclass DescribeDomain(DescribeSource):\n\n def get_resources(self, resource_ids):\n client = local_session(self.manager.session_factory).client('es')\n return client.describe_elasticsearch_domains(\n DomainNames=resource_ids)['DomainStatusList']\n\n def augment(self, domains):\n client = local_session(self.manager.session_factory).client('es')\n model = self.manager.get_model()\n\n def _augment(resource_set):\n resources = self.manager.retry(\n client.describe_elasticsearch_domains,\n DomainNames=resource_set)['DomainStatusList']\n for r in resources:\n rarn = self.manager.generate_arn(r[model.id])\n r['Tags'] = self.manager.retry(\n client.list_tags, ARN=rarn).get('TagList', [])\n return resources\n\n return _augment(domains)\n\n\[email protected]('elasticsearch')\nclass ElasticSearchDomain(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = 'es'\n arn = 'ARN'\n arn_type = 'domain'\n enum_spec = (\n 'list_domain_names', 'DomainNames[].DomainName', None)\n id = 'DomainName'\n name = 'Name'\n dimension = \"DomainName\"\n cfn_type = config_type = 'AWS::Elasticsearch::Domain'\n\n source_mapping = {\n 'describe': DescribeDomain,\n 'config': ConfigSource\n }\n\n\nElasticSearchDomain.filter_registry.register('marked-for-op', TagActionFilter)\n\n\[email protected]_registry.register('subnet')\nclass Subnet(SubnetFilter):\n\n RelatedIdsExpression = \"VPCOptions.SubnetIds[]\"\n\n\[email protected]_registry.register('security-group')\nclass SecurityGroup(SecurityGroupFilter):\n\n RelatedIdsExpression = \"VPCOptions.SecurityGroupIds[]\"\n\n\[email protected]_registry.register('vpc')\nclass Vpc(VpcFilter):\n\n RelatedIdsExpression = \"VPCOptions.VPCId\"\n\n\[email protected]_registry.register('metrics')\nclass Metrics(MetricsFilter):\n\n def get_dimensions(self, resource):\n return [{'Name': 'ClientId',\n 'Value': self.manager.account_id},\n {'Name': 'DomainName',\n 'Value': resource['DomainName']}]\n\n\[email protected]_registry.register('post-finding')\nclass ElasticSearchPostFinding(PostFinding):\n\n resource_type = 'AwsElasticsearchDomain'\n\n def format_resource(self, r):\n envelope, payload = self.format_envelope(r)\n payload.update(self.filter_empty({\n 'AccessPolicies': r.get('AccessPolicies'),\n 'DomainId': r['DomainId'],\n 'DomainName': r['DomainName'],\n 'Endpoint': r.get('Endpoint'),\n 'Endpoints': r.get('Endpoints'),\n 'DomainEndpointOptions': self.filter_empty({\n 'EnforceHTTPS': jmespath.search(\n 'DomainEndpointOptions.EnforceHTTPS', r),\n 'TLSSecurityPolicy': jmespath.search(\n 'DomainEndpointOptions.TLSSecurityPolicy', r)\n }),\n 'ElasticsearchVersion': 
r['ElasticsearchVersion'],\n 'EncryptionAtRestOptions': self.filter_empty({\n 'Enabled': jmespath.search(\n 'EncryptionAtRestOptions.Enabled', r),\n 'KmsKeyId': jmespath.search(\n 'EncryptionAtRestOptions.KmsKeyId', r)\n }),\n 'NodeToNodeEncryptionOptions': self.filter_empty({\n 'Enabled': jmespath.search(\n 'NodeToNodeEncryptionOptions.Enabled', r)\n }),\n 'VPCOptions': self.filter_empty({\n 'AvailabilityZones': jmespath.search(\n 'VPCOptions.AvailabilityZones', r),\n 'SecurityGroupIds': jmespath.search(\n 'VPCOptions.SecurityGroupIds', r),\n 'SubnetIds': jmespath.search('VPCOptions.SubnetIds', r),\n 'VPCId': jmespath.search('VPCOptions.VPCId', r)\n })\n }))\n return envelope\n\n\[email protected]_registry.register('modify-security-groups')\nclass ElasticSearchModifySG(ModifyVpcSecurityGroupsAction):\n \"\"\"Modify security groups on an Elasticsearch domain\"\"\"\n\n permissions = ('es:UpdateElasticsearchDomainConfig',)\n\n def process(self, domains):\n groups = super(ElasticSearchModifySG, self).get_groups(domains)\n client = local_session(self.manager.session_factory).client('es')\n\n for dx, d in enumerate(domains):\n client.update_elasticsearch_domain_config(\n DomainName=d['DomainName'],\n VPCOptions={\n 'SecurityGroupIds': groups[dx]})\n\n\[email protected]_registry.register('delete')\nclass Delete(Action):\n\n schema = type_schema('delete')\n permissions = ('es:DeleteElasticsearchDomain',)\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('es')\n for r in resources:\n client.delete_elasticsearch_domain(DomainName=r['DomainName'])\n\n\[email protected]_registry.register('tag')\nclass ElasticSearchAddTag(Tag):\n \"\"\"Action to create tag(s) on an existing elasticsearch domain\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: es-add-tag\n resource: elasticsearch\n filters:\n - \"tag:DesiredTag\": absent\n actions:\n - type: tag\n key: DesiredTag\n value: DesiredValue\n \"\"\"\n permissions = ('es:AddTags',)\n\n def process_resource_set(self, client, domains, tags):\n for d in domains:\n try:\n client.add_tags(ARN=d['ARN'], TagList=tags)\n except client.exceptions.ResourceNotFoundExecption:\n continue\n\n\[email protected]_registry.register('remove-tag')\nclass ElasticSearchRemoveTag(RemoveTag):\n \"\"\"Removes tag(s) on an existing elasticsearch domain\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: es-remove-tag\n resource: elasticsearch\n filters:\n - \"tag:ExpiredTag\": present\n actions:\n - type: remove-tag\n tags: ['ExpiredTag']\n \"\"\"\n permissions = ('es:RemoveTags',)\n\n def process_resource_set(self, client, domains, tags):\n for d in domains:\n try:\n client.remove_tags(ARN=d['ARN'], TagKeys=tags)\n except client.exceptions.ResourceNotFoundExecption:\n continue\n\n\[email protected]_registry.register('mark-for-op')\nclass ElasticSearchMarkForOp(TagDelayedAction):\n \"\"\"Tag an elasticsearch domain for action later\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: es-delete-missing\n resource: elasticsearch\n filters:\n - \"tag:DesiredTag\": absent\n actions:\n - type: mark-for-op\n days: 7\n op: delete\n tag: c7n_es_delete\n \"\"\"\n", "path": "c7n/resources/elasticsearch.py"}], "after_files": [{"content": "# Copyright 2016-2017 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport jmespath\n\nfrom c7n.actions import Action, ModifyVpcSecurityGroupsAction\nfrom c7n.filters import MetricsFilter\nfrom c7n.filters.vpc import SecurityGroupFilter, SubnetFilter, VpcFilter\nfrom c7n.manager import resources\nfrom c7n.query import ConfigSource, DescribeSource, QueryResourceManager, TypeInfo\nfrom c7n.utils import chunks, local_session, type_schema\nfrom c7n.tags import Tag, RemoveTag, TagActionFilter, TagDelayedAction\n\nfrom .securityhub import PostFinding\n\n\nclass DescribeDomain(DescribeSource):\n\n def get_resources(self, resource_ids):\n client = local_session(self.manager.session_factory).client('es')\n return client.describe_elasticsearch_domains(\n DomainNames=resource_ids)['DomainStatusList']\n\n def augment(self, domains):\n client = local_session(self.manager.session_factory).client('es')\n model = self.manager.get_model()\n results = []\n\n def _augment(resource_set):\n resources = self.manager.retry(\n client.describe_elasticsearch_domains,\n DomainNames=resource_set)['DomainStatusList']\n for r in resources:\n rarn = self.manager.generate_arn(r[model.id])\n r['Tags'] = self.manager.retry(\n client.list_tags, ARN=rarn).get('TagList', [])\n return resources\n\n for resource_set in chunks(domains, 5):\n results.extend(_augment(resource_set))\n\n return results\n\n\[email protected]('elasticsearch')\nclass ElasticSearchDomain(QueryResourceManager):\n\n class resource_type(TypeInfo):\n service = 'es'\n arn = 'ARN'\n arn_type = 'domain'\n enum_spec = (\n 'list_domain_names', 'DomainNames[].DomainName', None)\n id = 'DomainName'\n name = 'Name'\n dimension = \"DomainName\"\n cfn_type = config_type = 'AWS::Elasticsearch::Domain'\n\n source_mapping = {\n 'describe': DescribeDomain,\n 'config': ConfigSource\n }\n\n\nElasticSearchDomain.filter_registry.register('marked-for-op', TagActionFilter)\n\n\[email protected]_registry.register('subnet')\nclass Subnet(SubnetFilter):\n\n RelatedIdsExpression = \"VPCOptions.SubnetIds[]\"\n\n\[email protected]_registry.register('security-group')\nclass SecurityGroup(SecurityGroupFilter):\n\n RelatedIdsExpression = \"VPCOptions.SecurityGroupIds[]\"\n\n\[email protected]_registry.register('vpc')\nclass Vpc(VpcFilter):\n\n RelatedIdsExpression = \"VPCOptions.VPCId\"\n\n\[email protected]_registry.register('metrics')\nclass Metrics(MetricsFilter):\n\n def get_dimensions(self, resource):\n return [{'Name': 'ClientId',\n 'Value': self.manager.account_id},\n {'Name': 'DomainName',\n 'Value': resource['DomainName']}]\n\n\[email protected]_registry.register('post-finding')\nclass ElasticSearchPostFinding(PostFinding):\n\n resource_type = 
'AwsElasticsearchDomain'\n\n def format_resource(self, r):\n envelope, payload = self.format_envelope(r)\n payload.update(self.filter_empty({\n 'AccessPolicies': r.get('AccessPolicies'),\n 'DomainId': r['DomainId'],\n 'DomainName': r['DomainName'],\n 'Endpoint': r.get('Endpoint'),\n 'Endpoints': r.get('Endpoints'),\n 'DomainEndpointOptions': self.filter_empty({\n 'EnforceHTTPS': jmespath.search(\n 'DomainEndpointOptions.EnforceHTTPS', r),\n 'TLSSecurityPolicy': jmespath.search(\n 'DomainEndpointOptions.TLSSecurityPolicy', r)\n }),\n 'ElasticsearchVersion': r['ElasticsearchVersion'],\n 'EncryptionAtRestOptions': self.filter_empty({\n 'Enabled': jmespath.search(\n 'EncryptionAtRestOptions.Enabled', r),\n 'KmsKeyId': jmespath.search(\n 'EncryptionAtRestOptions.KmsKeyId', r)\n }),\n 'NodeToNodeEncryptionOptions': self.filter_empty({\n 'Enabled': jmespath.search(\n 'NodeToNodeEncryptionOptions.Enabled', r)\n }),\n 'VPCOptions': self.filter_empty({\n 'AvailabilityZones': jmespath.search(\n 'VPCOptions.AvailabilityZones', r),\n 'SecurityGroupIds': jmespath.search(\n 'VPCOptions.SecurityGroupIds', r),\n 'SubnetIds': jmespath.search('VPCOptions.SubnetIds', r),\n 'VPCId': jmespath.search('VPCOptions.VPCId', r)\n })\n }))\n return envelope\n\n\[email protected]_registry.register('modify-security-groups')\nclass ElasticSearchModifySG(ModifyVpcSecurityGroupsAction):\n \"\"\"Modify security groups on an Elasticsearch domain\"\"\"\n\n permissions = ('es:UpdateElasticsearchDomainConfig',)\n\n def process(self, domains):\n groups = super(ElasticSearchModifySG, self).get_groups(domains)\n client = local_session(self.manager.session_factory).client('es')\n\n for dx, d in enumerate(domains):\n client.update_elasticsearch_domain_config(\n DomainName=d['DomainName'],\n VPCOptions={\n 'SecurityGroupIds': groups[dx]})\n\n\[email protected]_registry.register('delete')\nclass Delete(Action):\n\n schema = type_schema('delete')\n permissions = ('es:DeleteElasticsearchDomain',)\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('es')\n for r in resources:\n client.delete_elasticsearch_domain(DomainName=r['DomainName'])\n\n\[email protected]_registry.register('tag')\nclass ElasticSearchAddTag(Tag):\n \"\"\"Action to create tag(s) on an existing elasticsearch domain\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: es-add-tag\n resource: elasticsearch\n filters:\n - \"tag:DesiredTag\": absent\n actions:\n - type: tag\n key: DesiredTag\n value: DesiredValue\n \"\"\"\n permissions = ('es:AddTags',)\n\n def process_resource_set(self, client, domains, tags):\n for d in domains:\n try:\n client.add_tags(ARN=d['ARN'], TagList=tags)\n except client.exceptions.ResourceNotFoundExecption:\n continue\n\n\[email protected]_registry.register('remove-tag')\nclass ElasticSearchRemoveTag(RemoveTag):\n \"\"\"Removes tag(s) on an existing elasticsearch domain\n\n :example:\n\n .. code-block:: yaml\n\n policies:\n - name: es-remove-tag\n resource: elasticsearch\n filters:\n - \"tag:ExpiredTag\": present\n actions:\n - type: remove-tag\n tags: ['ExpiredTag']\n \"\"\"\n permissions = ('es:RemoveTags',)\n\n def process_resource_set(self, client, domains, tags):\n for d in domains:\n try:\n client.remove_tags(ARN=d['ARN'], TagKeys=tags)\n except client.exceptions.ResourceNotFoundExecption:\n continue\n\n\[email protected]_registry.register('mark-for-op')\nclass ElasticSearchMarkForOp(TagDelayedAction):\n \"\"\"Tag an elasticsearch domain for action later\n\n :example:\n\n .. 
code-block:: yaml\n\n policies:\n - name: es-delete-missing\n resource: elasticsearch\n filters:\n - \"tag:DesiredTag\": absent\n actions:\n - type: mark-for-op\n days: 7\n op: delete\n tag: c7n_es_delete\n \"\"\"\n", "path": "c7n/resources/elasticsearch.py"}]} | 3,234 | 292 |
gh_patches_debug_29382 | rasdani/github-patches | git_diff | uclapi__uclapi-568 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove opbeat
Opbeat is no longer a thing, so we should remove it and replace it with something else? Maybe double down on Sentry and integrate that more tightly
https://github.com/uclapi/uclapi/blob/81b90305f9316b020664b32c2436e27ab957e8a7/backend/uclapi/requirements.txt#L33
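One possible way to "double down on Sentry" is the newer `sentry-sdk` package; the sketch below is an assumption on my part (the project currently uses `raven`, and none of these names come from the issue itself):

```python
# Hypothetical sketch for settings.py: sentry-sdk with Django and Celery integrations.
import os

import sentry_sdk
from sentry_sdk.integrations.celery import CeleryIntegration
from sentry_sdk.integrations.django import DjangoIntegration

if os.environ.get("SENTRY_DSN"):
    sentry_sdk.init(
        dsn=os.environ["SENTRY_DSN"],
        integrations=[DjangoIntegration(), CeleryIntegration()],
        traces_sample_rate=0.1,  # performance tracing, roughly what opbeat used to provide
    )
```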
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/uclapi/uclapi/celery.py`
Content:
```
1 from __future__ import absolute_import, unicode_literals
2
3 import celery
4 import os
5 import raven
6
7 from django.conf import settings
8 from raven.contrib.celery import register_signal as raven_register_signal, \
9 register_logger_signal as raven_register_logger_signal
10
11 from common.helpers import read_dotenv
12
13 read_dotenv(os.path.join(os.path.dirname(os.path.dirname(__file__)), '.env'))
14
15 os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'uclapi.settings')
16
17
18 class Celery(celery.Celery):
19 def on_configure(self):
20 if os.environ.get("SENTRY_DSN") is not None:
21 client = raven.Client(os.environ.get("SENTRY_DSN"))
22 raven_register_logger_signal(client)
23 raven_register_signal(client)
24
25
26 app = Celery('uclapi')
27
28 app.config_from_object('django.conf.settings', namespace='CELERY')
29
30
31 from opbeat.contrib.django.models import \
32 register_handlers as opbeat_register_handlers, \
33 logger as opbeat_logger # noqa: E402#
34
35 from opbeat.contrib.celery import \
36 register_signal as opbeat_register_signal # noqa: E402
37
38
39 try:
40 opbeat_register_signal(app)
41 except Exception as e:
42 opbeat_logger.exception('Failed installing celery hook: %s' % e)
43
44 if 'opbeat.contrib.django' in settings.INSTALLED_APPS:
45 opbeat_register_handlers()
46
47 app.autodiscover_tasks()
48
49
50 @app.task(bind=True)
51 def task(self):
52 print('Request: {0!r}'.format(self.request))
53
```
Path: `backend/uclapi/uclapi/settings.py`
Content:
```
1 """
2 Django settings for uclapi project.
3
4 Generated by 'django-admin startproject' using Django 1.10.4.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/1.10/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/1.10/ref/settings/
11 """
12
13 import os
14 import requests
15 from distutils.util import strtobool
16
17 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
18 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
19
20
21 # Quick-start development settings - unsuitable for production
22 # See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/
23
24 # SECURITY WARNING: keep the secret key used in production secret!
25 SECRET_KEY = os.environ.get("SECRET_KEY")
26
27 # SECURITY WARNING: don't run with debug turned on in production!
28 # This value should be set by the UCLAPI_PRODUCTION environment
29 # variable anyway. If in production, debug should be false.
30 DEBUG = not strtobool(os.environ.get("UCLAPI_PRODUCTION"))
31
32 ALLOWED_HOSTS = ["localhost"]
33
34 # If a domain is specified then make this an allowed host
35 if os.environ.get("UCLAPI_DOMAIN"):
36 ALLOWED_HOSTS.append(os.environ.get("UCLAPI_DOMAIN"))
37
38 # If we are running under the AWS Elastic Load Balancer then enable internal
39 # requests so that the ELB and Health Checks work
40 if strtobool(os.environ.get("UCLAPI_RUNNING_ON_AWS_ELB")):
41 EC2_PRIVATE_IP = None
42 try:
43 EC2_PRIVATE_IP = requests.get(
44 "http://169.254.169.254/latest/meta-data/local-ipv4",
45 timeout=0.01
46 ).text
47 except requests.exceptions.RequestException:
48 pass
49
50 if EC2_PRIVATE_IP:
51 ALLOWED_HOSTS.append(EC2_PRIVATE_IP)
52
53 # Application definition
54
55 INSTALLED_APPS = [
56 'django.contrib.admin',
57 'django.contrib.auth',
58 'django.contrib.contenttypes',
59 'django.contrib.sessions',
60 'django.contrib.messages',
61 'django.contrib.staticfiles',
62 'rest_framework',
63 'dashboard',
64 'marketplace',
65 'roombookings',
66 'oauth',
67 'timetable',
68 'common',
69 'opbeat.contrib.django',
70 'raven.contrib.django.raven_compat',
71 'corsheaders',
72 'workspaces'
73 ]
74
75 MIDDLEWARE = [
76 'opbeat.contrib.django.middleware.OpbeatAPMMiddleware',
77 'django.middleware.security.SecurityMiddleware',
78 'django.contrib.sessions.middleware.SessionMiddleware',
79 'corsheaders.middleware.CorsMiddleware',
80 'django.middleware.common.CommonMiddleware',
81 'django.middleware.csrf.CsrfViewMiddleware',
82 'django.contrib.auth.middleware.AuthenticationMiddleware',
83 'django.contrib.messages.middleware.MessageMiddleware',
84 'django.middleware.clickjacking.XFrameOptionsMiddleware',
85 ]
86
87 if DEBUG:
88 MIDDLEWARE.append(
89 'dashboard.middleware.fake_shibboleth_middleware'
90 '.FakeShibbolethMiddleWare'
91 )
92
93 ROOT_URLCONF = 'uclapi.urls'
94
95 TEMPLATES = [
96 {
97 'BACKEND': 'django.template.backends.django.DjangoTemplates',
98 'DIRS': [],
99 'APP_DIRS': True,
100 'OPTIONS': {
101 'context_processors': [
102 'django.template.context_processors.debug',
103 'django.template.context_processors.request',
104 'django.contrib.auth.context_processors.auth',
105 'django.contrib.messages.context_processors.messages',
106 ],
107 },
108 },
109 ]
110
111 WSGI_APPLICATION = 'uclapi.wsgi.application'
112
113
114 # Database
115 # https://docs.djangoproject.com/en/1.10/ref/settings/#databases
116
117 DATABASES = {
118 'default': {
119 'ENGINE': 'django.db.backends.postgresql',
120 'NAME': os.environ.get("DB_UCLAPI_NAME"),
121 'USER': os.environ.get("DB_UCLAPI_USERNAME"),
122 'PASSWORD': os.environ.get("DB_UCLAPI_PASSWORD"),
123 'HOST': os.environ.get("DB_UCLAPI_HOST"),
124 'PORT': os.environ.get("DB_UCLAPI_PORT")
125 },
126 'roombookings': {
127 'ENGINE': 'django.db.backends.oracle',
128 'NAME': os.environ.get("DB_ROOMS_NAME"),
129 'USER': os.environ.get("DB_ROOMS_USERNAME"),
130 'PASSWORD': os.environ.get("DB_ROOMS_PASSWORD"),
131 'HOST': '',
132 'PORT': ''
133 },
134 'gencache': {
135 'ENGINE': 'django.db.backends.postgresql',
136 'NAME': os.environ.get("DB_CACHE_NAME"),
137 'USER': os.environ.get("DB_CACHE_USERNAME"),
138 'PASSWORD': os.environ.get("DB_CACHE_PASSWORD"),
139 'HOST': os.environ.get("DB_CACHE_HOST"),
140 'PORT': os.environ.get("DB_CACHE_PORT")
141 }
142 }
143
144 DATABASE_ROUTERS = ['uclapi.dbrouters.ModelRouter']
145
146 # analytics
147 OPBEAT = {
148 'ORGANIZATION_ID': os.environ.get("OPBEAT_ORG_ID"),
149 'APP_ID': os.environ.get("OPBEAT_APP_ID"),
150 'SECRET_TOKEN': os.environ.get("OPBEAT_SECRET_TOKEN")
151 }
152
153 RAVEN_CONFIG = {
154 'dsn': os.environ.get("SENTRY_DSN"),
155 }
156
157
158 # Password validation
159 # https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators
160
161 AUTH_PASSWORD_VALIDATORS = [
162 {
163 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', # noqa
164 },
165 {
166 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', # noqa
167 },
168 {
169 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', # noqa
170 },
171 {
172 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', # noqa
173 },
174 ]
175
176
177 # Internationalization
178 # https://docs.djangoproject.com/en/1.10/topics/i18n/
179
180 LANGUAGE_CODE = 'en-us'
181
182 TIME_ZONE = 'UTC'
183
184 USE_I18N = True
185
186 USE_L10N = True
187
188 USE_TZ = False
189
190 # Static files (CSS, JavaScript, Images)
191 # https://docs.djangoproject.com/en/1.10/howto/static-files/
192
193 STATIC_URL = '/static/'
194 STATIC_ROOT = os.path.join(BASE_DIR, 'static')
195
196 # Cross Origin settings
197 CORS_ORIGIN_ALLOW_ALL = True
198 CORS_URLS_REGEX = r'^/roombookings/.*$'
199
200 # Fair use policy
201 fair_use_policy_path = os.path.join(
202 BASE_DIR,
203 'uclapi/UCLAPIAcceptableUsePolicy.txt'
204 )
205 with open(fair_use_policy_path, 'r', encoding='utf-8') as fp:
206 FAIR_USE_POLICY = list(fp)
207
208 REDIS_UCLAPI_HOST = os.environ["REDIS_UCLAPI_HOST"]
209
210 # Celery Settings
211 CELERY_BROKER_URL = 'redis://' + REDIS_UCLAPI_HOST
212 CELERY_ACCEPT_CONTENT = ['json']
213 CELERY_TASK_SERIALIZER = 'json'
214 CELERY_RESULT_SERIALIZER = 'json'
215
216
217 ROOMBOOKINGS_SETID = 'LIVE-17-18'
218
219 # S3 file storage settings
220 # There are three scenarios to consider:
221 # 1) Local development
222 # In local dev, AWS_S3_STATICS = False
223 # AWS_S3_STATICS_CREDENTIALS_ENABLED = False
224 # These allow you to use local statics using /static/ in the
225 # same way as you would normally.
226 # 2) Production
227 # In prod, AWS_S3_STATICS = True
228 # AWS_S3_STATICS_CREDENTIALS_ENABLED = False
229 # This means that S3 statics will be used, but no creds are
230 # needed on the boxes because web servers should never do
231 # uploads to the remote S3 bucket.
232 # 3) Deployment
233 # In deployment, AWS_S3_STATICS = True
234 # AWS_S3_STATICS_CREDENTIALS_ENABLED = True
235 # This will be done either from CI/CD or from the computer
236 # of a person who has permission to upload new statics to
237 # S3.
238
239 if strtobool(os.environ.get("AWS_S3_STATICS", "False")):
240 DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
241 STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
242 AWS_STORAGE_BUCKET_NAME = os.environ["AWS_S3_BUCKET_NAME"]
243 AWS_LOCATION = os.environ["AWS_S3_BUCKET_PATH"]
244 AWS_S3_REGION_NAME = os.environ["AWS_S3_REGION"]
245
246 # This is a hack to not require AWS Access Credentials
247 # when the system is running in the Cloud. This avoids us from
248 # needing to store AWS credentials.
249 # https://github.com/jschneier/django-storages/issues/254#issuecomment-329813295 # noqa
250 AWS_S3_CUSTOM_DOMAIN = "{}.s3.amazonaws.com".format(
251 AWS_STORAGE_BUCKET_NAME
252 )
253
254 # If credentials are enabled, collectstatic can do uploads
255 if strtobool(os.environ["AWS_S3_STATICS_CREDENTIALS_ENABLED"]):
256 AWS_ACCESS_KEY_ID = os.environ["AWS_ACCESS_KEY_ID"]
257 AWS_SECRET_ACCESS_KEY = os.environ["AWS_ACCESS_SECRET"]
258 AWS_S3_OBJECT_PARAMETERS = {
259 'CacheControl': 'max-age=86400',
260 }
261 AWS_S3_ENCRYPTION = False
262 else:
263 AWS_QUERYSTRING_AUTH = False
264
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/uclapi/uclapi/celery.py b/backend/uclapi/uclapi/celery.py
--- a/backend/uclapi/uclapi/celery.py
+++ b/backend/uclapi/uclapi/celery.py
@@ -27,23 +27,6 @@
app.config_from_object('django.conf.settings', namespace='CELERY')
-
-from opbeat.contrib.django.models import \
- register_handlers as opbeat_register_handlers, \
- logger as opbeat_logger # noqa: E402#
-
-from opbeat.contrib.celery import \
- register_signal as opbeat_register_signal # noqa: E402
-
-
-try:
- opbeat_register_signal(app)
-except Exception as e:
- opbeat_logger.exception('Failed installing celery hook: %s' % e)
-
-if 'opbeat.contrib.django' in settings.INSTALLED_APPS:
- opbeat_register_handlers()
-
app.autodiscover_tasks()
diff --git a/backend/uclapi/uclapi/settings.py b/backend/uclapi/uclapi/settings.py
--- a/backend/uclapi/uclapi/settings.py
+++ b/backend/uclapi/uclapi/settings.py
@@ -66,14 +66,12 @@
'oauth',
'timetable',
'common',
- 'opbeat.contrib.django',
'raven.contrib.django.raven_compat',
'corsheaders',
'workspaces'
]
MIDDLEWARE = [
- 'opbeat.contrib.django.middleware.OpbeatAPMMiddleware',
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'corsheaders.middleware.CorsMiddleware',
@@ -143,13 +141,6 @@
DATABASE_ROUTERS = ['uclapi.dbrouters.ModelRouter']
-# analytics
-OPBEAT = {
- 'ORGANIZATION_ID': os.environ.get("OPBEAT_ORG_ID"),
- 'APP_ID': os.environ.get("OPBEAT_APP_ID"),
- 'SECRET_TOKEN': os.environ.get("OPBEAT_SECRET_TOKEN")
-}
-
RAVEN_CONFIG = {
'dsn': os.environ.get("SENTRY_DSN"),
}
| {"golden_diff": "diff --git a/backend/uclapi/uclapi/celery.py b/backend/uclapi/uclapi/celery.py\n--- a/backend/uclapi/uclapi/celery.py\n+++ b/backend/uclapi/uclapi/celery.py\n@@ -27,23 +27,6 @@\n \n app.config_from_object('django.conf.settings', namespace='CELERY')\n \n-\n-from opbeat.contrib.django.models import \\\n- register_handlers as opbeat_register_handlers, \\\n- logger as opbeat_logger # noqa: E402#\n-\n-from opbeat.contrib.celery import \\\n- register_signal as opbeat_register_signal # noqa: E402\n-\n-\n-try:\n- opbeat_register_signal(app)\n-except Exception as e:\n- opbeat_logger.exception('Failed installing celery hook: %s' % e)\n-\n-if 'opbeat.contrib.django' in settings.INSTALLED_APPS:\n- opbeat_register_handlers()\n-\n app.autodiscover_tasks()\n \n \ndiff --git a/backend/uclapi/uclapi/settings.py b/backend/uclapi/uclapi/settings.py\n--- a/backend/uclapi/uclapi/settings.py\n+++ b/backend/uclapi/uclapi/settings.py\n@@ -66,14 +66,12 @@\n 'oauth',\n 'timetable',\n 'common',\n- 'opbeat.contrib.django',\n 'raven.contrib.django.raven_compat',\n 'corsheaders',\n 'workspaces'\n ]\n \n MIDDLEWARE = [\n- 'opbeat.contrib.django.middleware.OpbeatAPMMiddleware',\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'corsheaders.middleware.CorsMiddleware',\n@@ -143,13 +141,6 @@\n \n DATABASE_ROUTERS = ['uclapi.dbrouters.ModelRouter']\n \n-# analytics\n-OPBEAT = {\n- 'ORGANIZATION_ID': os.environ.get(\"OPBEAT_ORG_ID\"),\n- 'APP_ID': os.environ.get(\"OPBEAT_APP_ID\"),\n- 'SECRET_TOKEN': os.environ.get(\"OPBEAT_SECRET_TOKEN\")\n-}\n-\n RAVEN_CONFIG = {\n 'dsn': os.environ.get(\"SENTRY_DSN\"),\n }\n", "issue": "Remove opbeat\nOpbeat is no longer a thing, so we should remove it and replace it with something else? 
Maybe double down on Sentry and integrate that more tightly\r\n\r\nhttps://github.com/uclapi/uclapi/blob/81b90305f9316b020664b32c2436e27ab957e8a7/backend/uclapi/requirements.txt#L33\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nimport celery\nimport os\nimport raven\n\nfrom django.conf import settings\nfrom raven.contrib.celery import register_signal as raven_register_signal, \\\n register_logger_signal as raven_register_logger_signal\n\nfrom common.helpers import read_dotenv\n\nread_dotenv(os.path.join(os.path.dirname(os.path.dirname(__file__)), '.env'))\n\nos.environ.setdefault('DJANGO_SETTINGS_MODULE', 'uclapi.settings')\n\n\nclass Celery(celery.Celery):\n def on_configure(self):\n if os.environ.get(\"SENTRY_DSN\") is not None:\n client = raven.Client(os.environ.get(\"SENTRY_DSN\"))\n raven_register_logger_signal(client)\n raven_register_signal(client)\n\n\napp = Celery('uclapi')\n\napp.config_from_object('django.conf.settings', namespace='CELERY')\n\n\nfrom opbeat.contrib.django.models import \\\n register_handlers as opbeat_register_handlers, \\\n logger as opbeat_logger # noqa: E402#\n\nfrom opbeat.contrib.celery import \\\n register_signal as opbeat_register_signal # noqa: E402\n\n\ntry:\n opbeat_register_signal(app)\nexcept Exception as e:\n opbeat_logger.exception('Failed installing celery hook: %s' % e)\n\nif 'opbeat.contrib.django' in settings.INSTALLED_APPS:\n opbeat_register_handlers()\n\napp.autodiscover_tasks()\n\n\[email protected](bind=True)\ndef task(self):\n print('Request: {0!r}'.format(self.request))\n", "path": "backend/uclapi/uclapi/celery.py"}, {"content": "\"\"\"\nDjango settings for uclapi project.\n\nGenerated by 'django-admin startproject' using Django 1.10.4.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.10/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.10/ref/settings/\n\"\"\"\n\nimport os\nimport requests\nfrom distutils.util import strtobool\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.environ.get(\"SECRET_KEY\")\n\n# SECURITY WARNING: don't run with debug turned on in production!\n# This value should be set by the UCLAPI_PRODUCTION environment\n# variable anyway. 
If in production, debug should be false.\nDEBUG = not strtobool(os.environ.get(\"UCLAPI_PRODUCTION\"))\n\nALLOWED_HOSTS = [\"localhost\"]\n\n# If a domain is specified then make this an allowed host\nif os.environ.get(\"UCLAPI_DOMAIN\"):\n ALLOWED_HOSTS.append(os.environ.get(\"UCLAPI_DOMAIN\"))\n\n# If we are running under the AWS Elastic Load Balancer then enable internal\n# requests so that the ELB and Health Checks work\nif strtobool(os.environ.get(\"UCLAPI_RUNNING_ON_AWS_ELB\")):\n EC2_PRIVATE_IP = None\n try:\n EC2_PRIVATE_IP = requests.get(\n \"http://169.254.169.254/latest/meta-data/local-ipv4\",\n timeout=0.01\n ).text\n except requests.exceptions.RequestException:\n pass\n\n if EC2_PRIVATE_IP:\n ALLOWED_HOSTS.append(EC2_PRIVATE_IP)\n\n# Application definition\n\nINSTALLED_APPS = [\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'rest_framework',\n 'dashboard',\n 'marketplace',\n 'roombookings',\n 'oauth',\n 'timetable',\n 'common',\n 'opbeat.contrib.django',\n 'raven.contrib.django.raven_compat',\n 'corsheaders',\n 'workspaces'\n]\n\nMIDDLEWARE = [\n 'opbeat.contrib.django.middleware.OpbeatAPMMiddleware',\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'corsheaders.middleware.CorsMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nif DEBUG:\n MIDDLEWARE.append(\n 'dashboard.middleware.fake_shibboleth_middleware'\n '.FakeShibbolethMiddleWare'\n )\n\nROOT_URLCONF = 'uclapi.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'uclapi.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/1.10/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.postgresql',\n 'NAME': os.environ.get(\"DB_UCLAPI_NAME\"),\n 'USER': os.environ.get(\"DB_UCLAPI_USERNAME\"),\n 'PASSWORD': os.environ.get(\"DB_UCLAPI_PASSWORD\"),\n 'HOST': os.environ.get(\"DB_UCLAPI_HOST\"),\n 'PORT': os.environ.get(\"DB_UCLAPI_PORT\")\n },\n 'roombookings': {\n 'ENGINE': 'django.db.backends.oracle',\n 'NAME': os.environ.get(\"DB_ROOMS_NAME\"),\n 'USER': os.environ.get(\"DB_ROOMS_USERNAME\"),\n 'PASSWORD': os.environ.get(\"DB_ROOMS_PASSWORD\"),\n 'HOST': '',\n 'PORT': ''\n },\n 'gencache': {\n 'ENGINE': 'django.db.backends.postgresql',\n 'NAME': os.environ.get(\"DB_CACHE_NAME\"),\n 'USER': os.environ.get(\"DB_CACHE_USERNAME\"),\n 'PASSWORD': os.environ.get(\"DB_CACHE_PASSWORD\"),\n 'HOST': os.environ.get(\"DB_CACHE_HOST\"),\n 'PORT': os.environ.get(\"DB_CACHE_PORT\")\n }\n}\n\nDATABASE_ROUTERS = ['uclapi.dbrouters.ModelRouter']\n\n# analytics\nOPBEAT = {\n 'ORGANIZATION_ID': os.environ.get(\"OPBEAT_ORG_ID\"),\n 'APP_ID': os.environ.get(\"OPBEAT_APP_ID\"),\n 'SECRET_TOKEN': os.environ.get(\"OPBEAT_SECRET_TOKEN\")\n}\n\nRAVEN_CONFIG = {\n 'dsn': os.environ.get(\"SENTRY_DSN\"),\n}\n\n\n# Password validation\n# 
https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', # noqa\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.10/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = False\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.10/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')\n\n# Cross Origin settings\nCORS_ORIGIN_ALLOW_ALL = True\nCORS_URLS_REGEX = r'^/roombookings/.*$'\n\n# Fair use policy\nfair_use_policy_path = os.path.join(\n BASE_DIR,\n 'uclapi/UCLAPIAcceptableUsePolicy.txt'\n)\nwith open(fair_use_policy_path, 'r', encoding='utf-8') as fp:\n FAIR_USE_POLICY = list(fp)\n\nREDIS_UCLAPI_HOST = os.environ[\"REDIS_UCLAPI_HOST\"]\n\n# Celery Settings\nCELERY_BROKER_URL = 'redis://' + REDIS_UCLAPI_HOST\nCELERY_ACCEPT_CONTENT = ['json']\nCELERY_TASK_SERIALIZER = 'json'\nCELERY_RESULT_SERIALIZER = 'json'\n\n\nROOMBOOKINGS_SETID = 'LIVE-17-18'\n\n# S3 file storage settings\n# There are three scenarios to consider:\n# 1) Local development\n# In local dev, AWS_S3_STATICS = False\n# AWS_S3_STATICS_CREDENTIALS_ENABLED = False\n# These allow you to use local statics using /static/ in the\n# same way as you would normally.\n# 2) Production\n# In prod, AWS_S3_STATICS = True\n# AWS_S3_STATICS_CREDENTIALS_ENABLED = False\n# This means that S3 statics will be used, but no creds are\n# needed on the boxes because web servers should never do\n# uploads to the remote S3 bucket.\n# 3) Deployment\n# In deployment, AWS_S3_STATICS = True\n# AWS_S3_STATICS_CREDENTIALS_ENABLED = True\n# This will be done either from CI/CD or from the computer\n# of a person who has permission to upload new statics to\n# S3.\n\nif strtobool(os.environ.get(\"AWS_S3_STATICS\", \"False\")):\n DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'\n STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'\n AWS_STORAGE_BUCKET_NAME = os.environ[\"AWS_S3_BUCKET_NAME\"]\n AWS_LOCATION = os.environ[\"AWS_S3_BUCKET_PATH\"]\n AWS_S3_REGION_NAME = os.environ[\"AWS_S3_REGION\"]\n\n # This is a hack to not require AWS Access Credentials\n # when the system is running in the Cloud. 
This avoids us from\n # needing to store AWS credentials.\n # https://github.com/jschneier/django-storages/issues/254#issuecomment-329813295 # noqa\n AWS_S3_CUSTOM_DOMAIN = \"{}.s3.amazonaws.com\".format(\n AWS_STORAGE_BUCKET_NAME\n )\n\n # If credentials are enabled, collectstatic can do uploads\n if strtobool(os.environ[\"AWS_S3_STATICS_CREDENTIALS_ENABLED\"]):\n AWS_ACCESS_KEY_ID = os.environ[\"AWS_ACCESS_KEY_ID\"]\n AWS_SECRET_ACCESS_KEY = os.environ[\"AWS_ACCESS_SECRET\"]\n AWS_S3_OBJECT_PARAMETERS = {\n 'CacheControl': 'max-age=86400',\n }\n AWS_S3_ENCRYPTION = False\n else:\n AWS_QUERYSTRING_AUTH = False\n", "path": "backend/uclapi/uclapi/settings.py"}], "after_files": [{"content": "from __future__ import absolute_import, unicode_literals\n\nimport celery\nimport os\nimport raven\n\nfrom django.conf import settings\nfrom raven.contrib.celery import register_signal as raven_register_signal, \\\n register_logger_signal as raven_register_logger_signal\n\nfrom common.helpers import read_dotenv\n\nread_dotenv(os.path.join(os.path.dirname(os.path.dirname(__file__)), '.env'))\n\nos.environ.setdefault('DJANGO_SETTINGS_MODULE', 'uclapi.settings')\n\n\nclass Celery(celery.Celery):\n def on_configure(self):\n if os.environ.get(\"SENTRY_DSN\") is not None:\n client = raven.Client(os.environ.get(\"SENTRY_DSN\"))\n raven_register_logger_signal(client)\n raven_register_signal(client)\n\n\napp = Celery('uclapi')\n\napp.config_from_object('django.conf.settings', namespace='CELERY')\n\napp.autodiscover_tasks()\n\n\[email protected](bind=True)\ndef task(self):\n print('Request: {0!r}'.format(self.request))\n", "path": "backend/uclapi/uclapi/celery.py"}, {"content": "\"\"\"\nDjango settings for uclapi project.\n\nGenerated by 'django-admin startproject' using Django 1.10.4.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.10/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.10/ref/settings/\n\"\"\"\n\nimport os\nimport requests\nfrom distutils.util import strtobool\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.10/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = os.environ.get(\"SECRET_KEY\")\n\n# SECURITY WARNING: don't run with debug turned on in production!\n# This value should be set by the UCLAPI_PRODUCTION environment\n# variable anyway. 
If in production, debug should be false.\nDEBUG = not strtobool(os.environ.get(\"UCLAPI_PRODUCTION\"))\n\nALLOWED_HOSTS = [\"localhost\"]\n\n# If a domain is specified then make this an allowed host\nif os.environ.get(\"UCLAPI_DOMAIN\"):\n ALLOWED_HOSTS.append(os.environ.get(\"UCLAPI_DOMAIN\"))\n\n# If we are running under the AWS Elastic Load Balancer then enable internal\n# requests so that the ELB and Health Checks work\nif strtobool(os.environ.get(\"UCLAPI_RUNNING_ON_AWS_ELB\")):\n EC2_PRIVATE_IP = None\n try:\n EC2_PRIVATE_IP = requests.get(\n \"http://169.254.169.254/latest/meta-data/local-ipv4\",\n timeout=0.01\n ).text\n except requests.exceptions.RequestException:\n pass\n\n if EC2_PRIVATE_IP:\n ALLOWED_HOSTS.append(EC2_PRIVATE_IP)\n\n# Application definition\n\nINSTALLED_APPS = [\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'rest_framework',\n 'dashboard',\n 'marketplace',\n 'roombookings',\n 'oauth',\n 'timetable',\n 'common',\n 'raven.contrib.django.raven_compat',\n 'corsheaders',\n 'workspaces'\n]\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'corsheaders.middleware.CorsMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nif DEBUG:\n MIDDLEWARE.append(\n 'dashboard.middleware.fake_shibboleth_middleware'\n '.FakeShibbolethMiddleWare'\n )\n\nROOT_URLCONF = 'uclapi.urls'\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'uclapi.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/1.10/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.postgresql',\n 'NAME': os.environ.get(\"DB_UCLAPI_NAME\"),\n 'USER': os.environ.get(\"DB_UCLAPI_USERNAME\"),\n 'PASSWORD': os.environ.get(\"DB_UCLAPI_PASSWORD\"),\n 'HOST': os.environ.get(\"DB_UCLAPI_HOST\"),\n 'PORT': os.environ.get(\"DB_UCLAPI_PORT\")\n },\n 'roombookings': {\n 'ENGINE': 'django.db.backends.oracle',\n 'NAME': os.environ.get(\"DB_ROOMS_NAME\"),\n 'USER': os.environ.get(\"DB_ROOMS_USERNAME\"),\n 'PASSWORD': os.environ.get(\"DB_ROOMS_PASSWORD\"),\n 'HOST': '',\n 'PORT': ''\n },\n 'gencache': {\n 'ENGINE': 'django.db.backends.postgresql',\n 'NAME': os.environ.get(\"DB_CACHE_NAME\"),\n 'USER': os.environ.get(\"DB_CACHE_USERNAME\"),\n 'PASSWORD': os.environ.get(\"DB_CACHE_PASSWORD\"),\n 'HOST': os.environ.get(\"DB_CACHE_HOST\"),\n 'PORT': os.environ.get(\"DB_CACHE_PORT\")\n }\n}\n\nDATABASE_ROUTERS = ['uclapi.dbrouters.ModelRouter']\n\nRAVEN_CONFIG = {\n 'dsn': os.environ.get(\"SENTRY_DSN\"),\n}\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.10/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', # noqa\n },\n {\n 'NAME': 
'django.contrib.auth.password_validation.MinimumLengthValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', # noqa\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', # noqa\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.10/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = False\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.10/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')\n\n# Cross Origin settings\nCORS_ORIGIN_ALLOW_ALL = True\nCORS_URLS_REGEX = r'^/roombookings/.*$'\n\n# Fair use policy\nfair_use_policy_path = os.path.join(\n BASE_DIR,\n 'uclapi/UCLAPIAcceptableUsePolicy.txt'\n)\nwith open(fair_use_policy_path, 'r', encoding='utf-8') as fp:\n FAIR_USE_POLICY = list(fp)\n\nREDIS_UCLAPI_HOST = os.environ[\"REDIS_UCLAPI_HOST\"]\n\n# Celery Settings\nCELERY_BROKER_URL = 'redis://' + REDIS_UCLAPI_HOST\nCELERY_ACCEPT_CONTENT = ['json']\nCELERY_TASK_SERIALIZER = 'json'\nCELERY_RESULT_SERIALIZER = 'json'\n\n\nROOMBOOKINGS_SETID = 'LIVE-17-18'\n\n# S3 file storage settings\n# There are three scenarios to consider:\n# 1) Local development\n# In local dev, AWS_S3_STATICS = False\n# AWS_S3_STATICS_CREDENTIALS_ENABLED = False\n# These allow you to use local statics using /static/ in the\n# same way as you would normally.\n# 2) Production\n# In prod, AWS_S3_STATICS = True\n# AWS_S3_STATICS_CREDENTIALS_ENABLED = False\n# This means that S3 statics will be used, but no creds are\n# needed on the boxes because web servers should never do\n# uploads to the remote S3 bucket.\n# 3) Deployment\n# In deployment, AWS_S3_STATICS = True\n# AWS_S3_STATICS_CREDENTIALS_ENABLED = True\n# This will be done either from CI/CD or from the computer\n# of a person who has permission to upload new statics to\n# S3.\n\nif strtobool(os.environ.get(\"AWS_S3_STATICS\", \"False\")):\n DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'\n STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'\n AWS_STORAGE_BUCKET_NAME = os.environ[\"AWS_S3_BUCKET_NAME\"]\n AWS_LOCATION = os.environ[\"AWS_S3_BUCKET_PATH\"]\n AWS_S3_REGION_NAME = os.environ[\"AWS_S3_REGION\"]\n\n # This is a hack to not require AWS Access Credentials\n # when the system is running in the Cloud. This avoids us from\n # needing to store AWS credentials.\n # https://github.com/jschneier/django-storages/issues/254#issuecomment-329813295 # noqa\n AWS_S3_CUSTOM_DOMAIN = \"{}.s3.amazonaws.com\".format(\n AWS_STORAGE_BUCKET_NAME\n )\n\n # If credentials are enabled, collectstatic can do uploads\n if strtobool(os.environ[\"AWS_S3_STATICS_CREDENTIALS_ENABLED\"]):\n AWS_ACCESS_KEY_ID = os.environ[\"AWS_ACCESS_KEY_ID\"]\n AWS_SECRET_ACCESS_KEY = os.environ[\"AWS_ACCESS_SECRET\"]\n AWS_S3_OBJECT_PARAMETERS = {\n 'CacheControl': 'max-age=86400',\n }\n AWS_S3_ENCRYPTION = False\n else:\n AWS_QUERYSTRING_AUTH = False\n", "path": "backend/uclapi/uclapi/settings.py"}]} | 3,595 | 487 |
gh_patches_debug_38644 | rasdani/github-patches | git_diff | python-pillow__Pillow-730 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add 16bit encode/decode to JPEG2K (WIP)
Add the ability to write 16-bit JPEG 2000 images (for DICOM medical images).
I tried to add a decoder too, but I couldn't quite figure it out. Here's my quick test case:
``` python
import numpy as np
from PIL import Image
arr = np.zeros((64, 96), dtype=np.uint16)
arr[16:64, 0:32] = 300
img = Image.fromarray(arr, 'I;16')
img.save('grr.j2k') # Works
img2 = Image.open('grr.j2k')
arr2 = np.array(img2)
np.testing.assert_almost_equal(arr, arr2) # Fails
```
Thoughts?
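For what it's worth, the unpatched `_parse_codestream` below maps every single-component codestream to 8-bit `'L'` mode, so the decoded array cannot match the 16-bit source and the assert fails. A quick check on the file from the snippet above (illustrative, behaviour inferred from the plugin source shown below):

```python
img2 = Image.open('grr.j2k')
print(img2.mode)  # 'L' with the unpatched plugin; 'I;16' once the precision check from the patch is applied
```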
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PIL/Jpeg2KImagePlugin.py`
Content:
```
1 #
2 # The Python Imaging Library
3 # $Id$
4 #
5 # JPEG2000 file handling
6 #
7 # History:
8 # 2014-03-12 ajh Created
9 #
10 # Copyright (c) 2014 Coriolis Systems Limited
11 # Copyright (c) 2014 Alastair Houghton
12 #
13 # See the README file for information on usage and redistribution.
14 #
15
16 __version__ = "0.1"
17
18 from PIL import Image, ImageFile
19 import struct
20 import os
21 import io
22
23
24 def _parse_codestream(fp):
25 """Parse the JPEG 2000 codestream to extract the size and component
26 count from the SIZ marker segment, returning a PIL (size, mode) tuple."""
27
28 hdr = fp.read(2)
29 lsiz = struct.unpack('>H', hdr)[0]
30 siz = hdr + fp.read(lsiz - 2)
31 lsiz, rsiz, xsiz, ysiz, xosiz, yosiz, xtsiz, ytsiz, \
32 xtosiz, ytosiz, csiz \
33 = struct.unpack('>HHIIIIIIIIH', siz[:38])
34 ssiz = [None]*csiz
35 xrsiz = [None]*csiz
36 yrsiz = [None]*csiz
37 for i in range(csiz):
38 ssiz[i], xrsiz[i], yrsiz[i] \
39 = struct.unpack('>BBB', siz[36 + 3 * i:39 + 3 * i])
40
41 size = (xsiz - xosiz, ysiz - yosiz)
42 if csiz == 1:
43 mode = 'L'
44 elif csiz == 2:
45 mode = 'LA'
46 elif csiz == 3:
47 mode = 'RGB'
48 elif csiz == 4:
49 mode = 'RGBA'
50 else:
51 mode = None
52
53 return (size, mode)
54
55
56 def _parse_jp2_header(fp):
57 """Parse the JP2 header box to extract size, component count and
58 color space information, returning a PIL (size, mode) tuple."""
59
60 # Find the JP2 header box
61 header = None
62 while True:
63 lbox, tbox = struct.unpack('>I4s', fp.read(8))
64 if lbox == 1:
65 lbox = struct.unpack('>Q', fp.read(8))[0]
66 hlen = 16
67 else:
68 hlen = 8
69
70 if tbox == b'jp2h':
71 header = fp.read(lbox - hlen)
72 break
73 else:
74 fp.seek(lbox - hlen, os.SEEK_CUR)
75
76 if header is None:
77 raise SyntaxError('could not find JP2 header')
78
79 size = None
80 mode = None
81
82 hio = io.BytesIO(header)
83 while True:
84 lbox, tbox = struct.unpack('>I4s', hio.read(8))
85 if lbox == 1:
86 lbox = struct.unpack('>Q', hio.read(8))[0]
87 hlen = 16
88 else:
89 hlen = 8
90
91 content = hio.read(lbox - hlen)
92
93 if tbox == b'ihdr':
94 height, width, nc, bpc, c, unkc, ipr \
95 = struct.unpack('>IIHBBBB', content)
96 size = (width, height)
97 if unkc:
98 if nc == 1:
99 mode = 'L'
100 elif nc == 2:
101 mode = 'LA'
102 elif nc == 3:
103 mode = 'RGB'
104 elif nc == 4:
105 mode = 'RGBA'
106 break
107 elif tbox == b'colr':
108 meth, prec, approx = struct.unpack('>BBB', content[:3])
109 if meth == 1:
110 cs = struct.unpack('>I', content[3:7])[0]
111 if cs == 16: # sRGB
112 if nc == 3:
113 mode = 'RGB'
114 elif nc == 4:
115 mode = 'RGBA'
116 break
117 elif cs == 17: # grayscale
118 if nc == 1:
119 mode = 'L'
120 elif nc == 2:
121 mode = 'LA'
122 break
123 elif cs == 18: # sYCC
124 if nc == 3:
125 mode = 'RGB'
126 elif nc == 4:
127 mode = 'RGBA'
128 break
129
130 return (size, mode)
131
132
133 ##
134 # Image plugin for JPEG2000 images.
135
136 class Jpeg2KImageFile(ImageFile.ImageFile):
137 format = "JPEG2000"
138 format_description = "JPEG 2000 (ISO 15444)"
139
140 def _open(self):
141 sig = self.fp.read(4)
142 if sig == b'\xff\x4f\xff\x51':
143 self.codec = "j2k"
144 self.size, self.mode = _parse_codestream(self.fp)
145 else:
146 sig = sig + self.fp.read(8)
147
148 if sig == b'\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a':
149 self.codec = "jp2"
150 self.size, self.mode = _parse_jp2_header(self.fp)
151 else:
152 raise SyntaxError('not a JPEG 2000 file')
153
154 if self.size is None or self.mode is None:
155 raise SyntaxError('unable to determine size/mode')
156
157 self.reduce = 0
158 self.layers = 0
159
160 fd = -1
161 length = -1
162
163 if hasattr(self.fp, "fileno"):
164 try:
165 fd = self.fp.fileno()
166 length = os.fstat(fd).st_size
167 except:
168 fd = -1
169 elif hasattr(self.fp, "seek"):
170 try:
171 pos = f.tell()
172 f.seek(0, 2)
173 length = f.tell()
174 f.seek(pos, 0)
175 except:
176 length = -1
177
178 self.tile = [('jpeg2k', (0, 0) + self.size, 0,
179 (self.codec, self.reduce, self.layers, fd, length))]
180
181 def load(self):
182 if self.reduce:
183 power = 1 << self.reduce
184 adjust = power >> 1
185 self.size = (int((self.size[0] + adjust) / power),
186 int((self.size[1] + adjust) / power))
187
188 if self.tile:
189 # Update the reduce and layers settings
190 t = self.tile[0]
191 t3 = (t[3][0], self.reduce, self.layers, t[3][3], t[3][4])
192 self.tile = [(t[0], (0, 0) + self.size, t[2], t3)]
193
194 ImageFile.ImageFile.load(self)
195
196
197 def _accept(prefix):
198 return (prefix[:4] == b'\xff\x4f\xff\x51'
199 or prefix[:12] == b'\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a')
200
201
202 # ------------------------------------------------------------
203 # Save support
204
205 def _save(im, fp, filename):
206 if filename.endswith('.j2k'):
207 kind = 'j2k'
208 else:
209 kind = 'jp2'
210
211 # Get the keyword arguments
212 info = im.encoderinfo
213
214 offset = info.get('offset', None)
215 tile_offset = info.get('tile_offset', None)
216 tile_size = info.get('tile_size', None)
217 quality_mode = info.get('quality_mode', 'rates')
218 quality_layers = info.get('quality_layers', None)
219 num_resolutions = info.get('num_resolutions', 0)
220 cblk_size = info.get('codeblock_size', None)
221 precinct_size = info.get('precinct_size', None)
222 irreversible = info.get('irreversible', False)
223 progression = info.get('progression', 'LRCP')
224 cinema_mode = info.get('cinema_mode', 'no')
225 fd = -1
226
227 if hasattr(fp, "fileno"):
228 try:
229 fd = fp.fileno()
230 except:
231 fd = -1
232
233 im.encoderconfig = (
234 offset,
235 tile_offset,
236 tile_size,
237 quality_mode,
238 quality_layers,
239 num_resolutions,
240 cblk_size,
241 precinct_size,
242 irreversible,
243 progression,
244 cinema_mode,
245 fd
246 )
247
248 ImageFile._save(im, fp, [('jpeg2k', (0, 0)+im.size, 0, kind)])
249
250 # ------------------------------------------------------------
251 # Registry stuff
252
253 Image.register_open('JPEG2000', Jpeg2KImageFile, _accept)
254 Image.register_save('JPEG2000', _save)
255
256 Image.register_extension('JPEG2000', '.jp2')
257 Image.register_extension('JPEG2000', '.j2k')
258 Image.register_extension('JPEG2000', '.jpc')
259 Image.register_extension('JPEG2000', '.jpf')
260 Image.register_extension('JPEG2000', '.jpx')
261 Image.register_extension('JPEG2000', '.j2c')
262
263 Image.register_mime('JPEG2000', 'image/jp2')
264 Image.register_mime('JPEG2000', 'image/jpx')
265
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/PIL/Jpeg2KImagePlugin.py b/PIL/Jpeg2KImagePlugin.py
--- a/PIL/Jpeg2KImagePlugin.py
+++ b/PIL/Jpeg2KImagePlugin.py
@@ -40,7 +40,10 @@
size = (xsiz - xosiz, ysiz - yosiz)
if csiz == 1:
- mode = 'L'
+ if (yrsiz[0] & 0x7f) > 8:
+ mode = 'I;16'
+ else:
+ mode = 'L'
elif csiz == 2:
mode = 'LA'
elif csiz == 3:
@@ -78,6 +81,7 @@
size = None
mode = None
+ bpc = None
hio = io.BytesIO(header)
while True:
@@ -95,7 +99,9 @@
= struct.unpack('>IIHBBBB', content)
size = (width, height)
if unkc:
- if nc == 1:
+ if nc == 1 and (bpc & 0x7f) > 8:
+ mode = 'I;16'
+ elif nc == 1:
mode = 'L'
elif nc == 2:
mode = 'LA'
@@ -109,13 +115,19 @@
if meth == 1:
cs = struct.unpack('>I', content[3:7])[0]
if cs == 16: # sRGB
- if nc == 3:
+ if nc == 1 and (bpc & 0x7f) > 8:
+ mode = 'I;16'
+ elif nc == 1:
+ mode = 'L'
+ elif nc == 3:
mode = 'RGB'
elif nc == 4:
mode = 'RGBA'
break
elif cs == 17: # grayscale
- if nc == 1:
+ if nc == 1 and (bpc & 0x7f) > 8:
+ mode = 'I;16'
+ elif nc == 1:
mode = 'L'
elif nc == 2:
mode = 'LA'
@@ -129,10 +141,10 @@
return (size, mode)
-
##
# Image plugin for JPEG2000 images.
+
class Jpeg2KImageFile(ImageFile.ImageFile):
format = "JPEG2000"
format_description = "JPEG 2000 (ISO 15444)"
@@ -174,7 +186,7 @@
f.seek(pos, 0)
except:
length = -1
-
+
self.tile = [('jpeg2k', (0, 0) + self.size, 0,
(self.codec, self.reduce, self.layers, fd, length))]
| {"golden_diff": "diff --git a/PIL/Jpeg2KImagePlugin.py b/PIL/Jpeg2KImagePlugin.py\n--- a/PIL/Jpeg2KImagePlugin.py\n+++ b/PIL/Jpeg2KImagePlugin.py\n@@ -40,7 +40,10 @@\n \n size = (xsiz - xosiz, ysiz - yosiz)\n if csiz == 1:\n- mode = 'L'\n+ if (yrsiz[0] & 0x7f) > 8:\n+ mode = 'I;16'\n+ else:\n+ mode = 'L'\n elif csiz == 2:\n mode = 'LA'\n elif csiz == 3:\n@@ -78,6 +81,7 @@\n \n size = None\n mode = None\n+ bpc = None\n \n hio = io.BytesIO(header)\n while True:\n@@ -95,7 +99,9 @@\n = struct.unpack('>IIHBBBB', content)\n size = (width, height)\n if unkc:\n- if nc == 1:\n+ if nc == 1 and (bpc & 0x7f) > 8:\n+ mode = 'I;16'\n+ elif nc == 1:\n mode = 'L'\n elif nc == 2:\n mode = 'LA'\n@@ -109,13 +115,19 @@\n if meth == 1:\n cs = struct.unpack('>I', content[3:7])[0]\n if cs == 16: # sRGB\n- if nc == 3:\n+ if nc == 1 and (bpc & 0x7f) > 8:\n+ mode = 'I;16'\n+ elif nc == 1:\n+ mode = 'L'\n+ elif nc == 3:\n mode = 'RGB'\n elif nc == 4:\n mode = 'RGBA'\n break\n elif cs == 17: # grayscale\n- if nc == 1:\n+ if nc == 1 and (bpc & 0x7f) > 8:\n+ mode = 'I;16'\n+ elif nc == 1:\n mode = 'L'\n elif nc == 2:\n mode = 'LA'\n@@ -129,10 +141,10 @@\n \n return (size, mode)\n \n-\n ##\n # Image plugin for JPEG2000 images.\n \n+\n class Jpeg2KImageFile(ImageFile.ImageFile):\n format = \"JPEG2000\"\n format_description = \"JPEG 2000 (ISO 15444)\"\n@@ -174,7 +186,7 @@\n f.seek(pos, 0)\n except:\n length = -1\n- \n+\n self.tile = [('jpeg2k', (0, 0) + self.size, 0,\n (self.codec, self.reduce, self.layers, fd, length))]\n", "issue": "Add 16bit encode/decode to JPEG2K (WIP)\nAdd the ability to write 16bit JPEG2k Images (for DICOM medical images).\n\nI'd trying to add a decoder too but I couldn't quite figure it out. Here's my dumb test case:\n\n``` python\nimport numpy as np\nfrom PIL import Image\n\narr = np.zeros((64, 96), dtype=np.uint16)\narr[16:64, 0:32] = 300\n\nimg = Image.fromarray(arr, 'I;16')\nimg.save('grr.j2k') # Works\n\nimg2 = Image.open('grr.j2k')\narr2 = np.array(img2)\n\nnp.testing.assert_almost_equal(arr, arr2) # Fails\n```\n\nThoughts?\n\n", "before_files": [{"content": "#\n# The Python Imaging Library\n# $Id$\n#\n# JPEG2000 file handling\n#\n# History:\n# 2014-03-12 ajh Created\n#\n# Copyright (c) 2014 Coriolis Systems Limited\n# Copyright (c) 2014 Alastair Houghton\n#\n# See the README file for information on usage and redistribution.\n#\n\n__version__ = \"0.1\"\n\nfrom PIL import Image, ImageFile\nimport struct\nimport os\nimport io\n\n\ndef _parse_codestream(fp):\n \"\"\"Parse the JPEG 2000 codestream to extract the size and component\n count from the SIZ marker segment, returning a PIL (size, mode) tuple.\"\"\"\n\n hdr = fp.read(2)\n lsiz = struct.unpack('>H', hdr)[0]\n siz = hdr + fp.read(lsiz - 2)\n lsiz, rsiz, xsiz, ysiz, xosiz, yosiz, xtsiz, ytsiz, \\\n xtosiz, ytosiz, csiz \\\n = struct.unpack('>HHIIIIIIIIH', siz[:38])\n ssiz = [None]*csiz\n xrsiz = [None]*csiz\n yrsiz = [None]*csiz\n for i in range(csiz):\n ssiz[i], xrsiz[i], yrsiz[i] \\\n = struct.unpack('>BBB', siz[36 + 3 * i:39 + 3 * i])\n\n size = (xsiz - xosiz, ysiz - yosiz)\n if csiz == 1:\n mode = 'L'\n elif csiz == 2:\n mode = 'LA'\n elif csiz == 3:\n mode = 'RGB'\n elif csiz == 4:\n mode = 'RGBA'\n else:\n mode = None\n\n return (size, mode)\n\n\ndef _parse_jp2_header(fp):\n \"\"\"Parse the JP2 header box to extract size, component count and\n color space information, returning a PIL (size, mode) tuple.\"\"\"\n\n # Find the JP2 header box\n header = None\n while True:\n lbox, tbox = struct.unpack('>I4s', fp.read(8))\n if lbox == 
1:\n lbox = struct.unpack('>Q', fp.read(8))[0]\n hlen = 16\n else:\n hlen = 8\n\n if tbox == b'jp2h':\n header = fp.read(lbox - hlen)\n break\n else:\n fp.seek(lbox - hlen, os.SEEK_CUR)\n\n if header is None:\n raise SyntaxError('could not find JP2 header')\n\n size = None\n mode = None\n\n hio = io.BytesIO(header)\n while True:\n lbox, tbox = struct.unpack('>I4s', hio.read(8))\n if lbox == 1:\n lbox = struct.unpack('>Q', hio.read(8))[0]\n hlen = 16\n else:\n hlen = 8\n\n content = hio.read(lbox - hlen)\n\n if tbox == b'ihdr':\n height, width, nc, bpc, c, unkc, ipr \\\n = struct.unpack('>IIHBBBB', content)\n size = (width, height)\n if unkc:\n if nc == 1:\n mode = 'L'\n elif nc == 2:\n mode = 'LA'\n elif nc == 3:\n mode = 'RGB'\n elif nc == 4:\n mode = 'RGBA'\n break\n elif tbox == b'colr':\n meth, prec, approx = struct.unpack('>BBB', content[:3])\n if meth == 1:\n cs = struct.unpack('>I', content[3:7])[0]\n if cs == 16: # sRGB\n if nc == 3:\n mode = 'RGB'\n elif nc == 4:\n mode = 'RGBA'\n break\n elif cs == 17: # grayscale\n if nc == 1:\n mode = 'L'\n elif nc == 2:\n mode = 'LA'\n break\n elif cs == 18: # sYCC\n if nc == 3:\n mode = 'RGB'\n elif nc == 4:\n mode = 'RGBA'\n break\n\n return (size, mode)\n\n\n##\n# Image plugin for JPEG2000 images.\n\nclass Jpeg2KImageFile(ImageFile.ImageFile):\n format = \"JPEG2000\"\n format_description = \"JPEG 2000 (ISO 15444)\"\n\n def _open(self):\n sig = self.fp.read(4)\n if sig == b'\\xff\\x4f\\xff\\x51':\n self.codec = \"j2k\"\n self.size, self.mode = _parse_codestream(self.fp)\n else:\n sig = sig + self.fp.read(8)\n\n if sig == b'\\x00\\x00\\x00\\x0cjP \\x0d\\x0a\\x87\\x0a':\n self.codec = \"jp2\"\n self.size, self.mode = _parse_jp2_header(self.fp)\n else:\n raise SyntaxError('not a JPEG 2000 file')\n\n if self.size is None or self.mode is None:\n raise SyntaxError('unable to determine size/mode')\n\n self.reduce = 0\n self.layers = 0\n\n fd = -1\n length = -1\n\n if hasattr(self.fp, \"fileno\"):\n try:\n fd = self.fp.fileno()\n length = os.fstat(fd).st_size\n except:\n fd = -1\n elif hasattr(self.fp, \"seek\"):\n try:\n pos = f.tell()\n f.seek(0, 2)\n length = f.tell()\n f.seek(pos, 0)\n except:\n length = -1\n \n self.tile = [('jpeg2k', (0, 0) + self.size, 0,\n (self.codec, self.reduce, self.layers, fd, length))]\n\n def load(self):\n if self.reduce:\n power = 1 << self.reduce\n adjust = power >> 1\n self.size = (int((self.size[0] + adjust) / power),\n int((self.size[1] + adjust) / power))\n\n if self.tile:\n # Update the reduce and layers settings\n t = self.tile[0]\n t3 = (t[3][0], self.reduce, self.layers, t[3][3], t[3][4])\n self.tile = [(t[0], (0, 0) + self.size, t[2], t3)]\n\n ImageFile.ImageFile.load(self)\n\n\ndef _accept(prefix):\n return (prefix[:4] == b'\\xff\\x4f\\xff\\x51'\n or prefix[:12] == b'\\x00\\x00\\x00\\x0cjP \\x0d\\x0a\\x87\\x0a')\n\n\n# ------------------------------------------------------------\n# Save support\n\ndef _save(im, fp, filename):\n if filename.endswith('.j2k'):\n kind = 'j2k'\n else:\n kind = 'jp2'\n\n # Get the keyword arguments\n info = im.encoderinfo\n\n offset = info.get('offset', None)\n tile_offset = info.get('tile_offset', None)\n tile_size = info.get('tile_size', None)\n quality_mode = info.get('quality_mode', 'rates')\n quality_layers = info.get('quality_layers', None)\n num_resolutions = info.get('num_resolutions', 0)\n cblk_size = info.get('codeblock_size', None)\n precinct_size = info.get('precinct_size', None)\n irreversible = info.get('irreversible', False)\n progression = info.get('progression', 
'LRCP')\n cinema_mode = info.get('cinema_mode', 'no')\n fd = -1\n\n if hasattr(fp, \"fileno\"):\n try:\n fd = fp.fileno()\n except:\n fd = -1\n\n im.encoderconfig = (\n offset,\n tile_offset,\n tile_size,\n quality_mode,\n quality_layers,\n num_resolutions,\n cblk_size,\n precinct_size,\n irreversible,\n progression,\n cinema_mode,\n fd\n )\n\n ImageFile._save(im, fp, [('jpeg2k', (0, 0)+im.size, 0, kind)])\n\n# ------------------------------------------------------------\n# Registry stuff\n\nImage.register_open('JPEG2000', Jpeg2KImageFile, _accept)\nImage.register_save('JPEG2000', _save)\n\nImage.register_extension('JPEG2000', '.jp2')\nImage.register_extension('JPEG2000', '.j2k')\nImage.register_extension('JPEG2000', '.jpc')\nImage.register_extension('JPEG2000', '.jpf')\nImage.register_extension('JPEG2000', '.jpx')\nImage.register_extension('JPEG2000', '.j2c')\n\nImage.register_mime('JPEG2000', 'image/jp2')\nImage.register_mime('JPEG2000', 'image/jpx')\n", "path": "PIL/Jpeg2KImagePlugin.py"}], "after_files": [{"content": "#\n# The Python Imaging Library\n# $Id$\n#\n# JPEG2000 file handling\n#\n# History:\n# 2014-03-12 ajh Created\n#\n# Copyright (c) 2014 Coriolis Systems Limited\n# Copyright (c) 2014 Alastair Houghton\n#\n# See the README file for information on usage and redistribution.\n#\n\n__version__ = \"0.1\"\n\nfrom PIL import Image, ImageFile\nimport struct\nimport os\nimport io\n\n\ndef _parse_codestream(fp):\n \"\"\"Parse the JPEG 2000 codestream to extract the size and component\n count from the SIZ marker segment, returning a PIL (size, mode) tuple.\"\"\"\n\n hdr = fp.read(2)\n lsiz = struct.unpack('>H', hdr)[0]\n siz = hdr + fp.read(lsiz - 2)\n lsiz, rsiz, xsiz, ysiz, xosiz, yosiz, xtsiz, ytsiz, \\\n xtosiz, ytosiz, csiz \\\n = struct.unpack('>HHIIIIIIIIH', siz[:38])\n ssiz = [None]*csiz\n xrsiz = [None]*csiz\n yrsiz = [None]*csiz\n for i in range(csiz):\n ssiz[i], xrsiz[i], yrsiz[i] \\\n = struct.unpack('>BBB', siz[36 + 3 * i:39 + 3 * i])\n\n size = (xsiz - xosiz, ysiz - yosiz)\n if csiz == 1:\n if (yrsiz[0] & 0x7f) > 8:\n mode = 'I;16'\n else:\n mode = 'L'\n elif csiz == 2:\n mode = 'LA'\n elif csiz == 3:\n mode = 'RGB'\n elif csiz == 4:\n mode = 'RGBA'\n else:\n mode = None\n\n return (size, mode)\n\n\ndef _parse_jp2_header(fp):\n \"\"\"Parse the JP2 header box to extract size, component count and\n color space information, returning a PIL (size, mode) tuple.\"\"\"\n\n # Find the JP2 header box\n header = None\n while True:\n lbox, tbox = struct.unpack('>I4s', fp.read(8))\n if lbox == 1:\n lbox = struct.unpack('>Q', fp.read(8))[0]\n hlen = 16\n else:\n hlen = 8\n\n if tbox == b'jp2h':\n header = fp.read(lbox - hlen)\n break\n else:\n fp.seek(lbox - hlen, os.SEEK_CUR)\n\n if header is None:\n raise SyntaxError('could not find JP2 header')\n\n size = None\n mode = None\n bpc = None\n\n hio = io.BytesIO(header)\n while True:\n lbox, tbox = struct.unpack('>I4s', hio.read(8))\n if lbox == 1:\n lbox = struct.unpack('>Q', hio.read(8))[0]\n hlen = 16\n else:\n hlen = 8\n\n content = hio.read(lbox - hlen)\n\n if tbox == b'ihdr':\n height, width, nc, bpc, c, unkc, ipr \\\n = struct.unpack('>IIHBBBB', content)\n size = (width, height)\n if unkc:\n if nc == 1 and (bpc & 0x7f) > 8:\n mode = 'I;16'\n elif nc == 1:\n mode = 'L'\n elif nc == 2:\n mode = 'LA'\n elif nc == 3:\n mode = 'RGB'\n elif nc == 4:\n mode = 'RGBA'\n break\n elif tbox == b'colr':\n meth, prec, approx = struct.unpack('>BBB', content[:3])\n if meth == 1:\n cs = struct.unpack('>I', content[3:7])[0]\n if cs == 16: # 
sRGB\n if nc == 1 and (bpc & 0x7f) > 8:\n mode = 'I;16'\n elif nc == 1:\n mode = 'L'\n elif nc == 3:\n mode = 'RGB'\n elif nc == 4:\n mode = 'RGBA'\n break\n elif cs == 17: # grayscale\n if nc == 1 and (bpc & 0x7f) > 8:\n mode = 'I;16'\n elif nc == 1:\n mode = 'L'\n elif nc == 2:\n mode = 'LA'\n break\n elif cs == 18: # sYCC\n if nc == 3:\n mode = 'RGB'\n elif nc == 4:\n mode = 'RGBA'\n break\n\n return (size, mode)\n\n##\n# Image plugin for JPEG2000 images.\n\n\nclass Jpeg2KImageFile(ImageFile.ImageFile):\n format = \"JPEG2000\"\n format_description = \"JPEG 2000 (ISO 15444)\"\n\n def _open(self):\n sig = self.fp.read(4)\n if sig == b'\\xff\\x4f\\xff\\x51':\n self.codec = \"j2k\"\n self.size, self.mode = _parse_codestream(self.fp)\n else:\n sig = sig + self.fp.read(8)\n\n if sig == b'\\x00\\x00\\x00\\x0cjP \\x0d\\x0a\\x87\\x0a':\n self.codec = \"jp2\"\n self.size, self.mode = _parse_jp2_header(self.fp)\n else:\n raise SyntaxError('not a JPEG 2000 file')\n\n if self.size is None or self.mode is None:\n raise SyntaxError('unable to determine size/mode')\n\n self.reduce = 0\n self.layers = 0\n\n fd = -1\n length = -1\n\n if hasattr(self.fp, \"fileno\"):\n try:\n fd = self.fp.fileno()\n length = os.fstat(fd).st_size\n except:\n fd = -1\n elif hasattr(self.fp, \"seek\"):\n try:\n pos = f.tell()\n f.seek(0, 2)\n length = f.tell()\n f.seek(pos, 0)\n except:\n length = -1\n\n self.tile = [('jpeg2k', (0, 0) + self.size, 0,\n (self.codec, self.reduce, self.layers, fd, length))]\n\n def load(self):\n if self.reduce:\n power = 1 << self.reduce\n adjust = power >> 1\n self.size = (int((self.size[0] + adjust) / power),\n int((self.size[1] + adjust) / power))\n\n if self.tile:\n # Update the reduce and layers settings\n t = self.tile[0]\n t3 = (t[3][0], self.reduce, self.layers, t[3][3], t[3][4])\n self.tile = [(t[0], (0, 0) + self.size, t[2], t3)]\n\n ImageFile.ImageFile.load(self)\n\n\ndef _accept(prefix):\n return (prefix[:4] == b'\\xff\\x4f\\xff\\x51'\n or prefix[:12] == b'\\x00\\x00\\x00\\x0cjP \\x0d\\x0a\\x87\\x0a')\n\n\n# ------------------------------------------------------------\n# Save support\n\ndef _save(im, fp, filename):\n if filename.endswith('.j2k'):\n kind = 'j2k'\n else:\n kind = 'jp2'\n\n # Get the keyword arguments\n info = im.encoderinfo\n\n offset = info.get('offset', None)\n tile_offset = info.get('tile_offset', None)\n tile_size = info.get('tile_size', None)\n quality_mode = info.get('quality_mode', 'rates')\n quality_layers = info.get('quality_layers', None)\n num_resolutions = info.get('num_resolutions', 0)\n cblk_size = info.get('codeblock_size', None)\n precinct_size = info.get('precinct_size', None)\n irreversible = info.get('irreversible', False)\n progression = info.get('progression', 'LRCP')\n cinema_mode = info.get('cinema_mode', 'no')\n fd = -1\n\n if hasattr(fp, \"fileno\"):\n try:\n fd = fp.fileno()\n except:\n fd = -1\n\n im.encoderconfig = (\n offset,\n tile_offset,\n tile_size,\n quality_mode,\n quality_layers,\n num_resolutions,\n cblk_size,\n precinct_size,\n irreversible,\n progression,\n cinema_mode,\n fd\n )\n\n ImageFile._save(im, fp, [('jpeg2k', (0, 0)+im.size, 0, kind)])\n\n# ------------------------------------------------------------\n# Registry stuff\n\nImage.register_open('JPEG2000', Jpeg2KImageFile, _accept)\nImage.register_save('JPEG2000', _save)\n\nImage.register_extension('JPEG2000', '.jp2')\nImage.register_extension('JPEG2000', '.j2k')\nImage.register_extension('JPEG2000', '.jpc')\nImage.register_extension('JPEG2000', 
'.jpf')\nImage.register_extension('JPEG2000', '.jpx')\nImage.register_extension('JPEG2000', '.j2c')\n\nImage.register_mime('JPEG2000', 'image/jp2')\nImage.register_mime('JPEG2000', 'image/jpx')\n", "path": "PIL/Jpeg2KImagePlugin.py"}]} | 3,216 | 674 |
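The core of the Pillow patch above is a precision check: the per-component precision byte from the JPEG 2000 headers is masked with `0x7f` (the high bit marks signed samples, as far as I can tell) and anything reporting more than 8 bits is mapped to Pillow's 16-bit greyscale mode instead of `'L'`. A standalone sketch of that rule (illustrative, not Pillow code):

```python
def greyscale_mode(precision_byte: int) -> str:
    """Mirror of the single-component mode selection added by the patch above."""
    depth_bits = precision_byte & 0x7f        # drop the sign flag, keep the precision field
    return 'I;16' if depth_bits > 8 else 'L'  # more than 8 significant bits -> 16-bit greyscale

assert greyscale_mode(15) == 'I;16'   # what a 16-bit unsigned component typically reports here
assert greyscale_mode(7) == 'L'       # ordinary 8-bit greyscale
```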
gh_patches_debug_23568 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2921 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Clean up dependencies
Spring cleaning! We currently declare some dependencies which are either unused or can easily be substituted:
- h11 - not used at all?
- requests - tests + examples only.
We should IMHO also eventually consider removing the following dependencies, although that involves a bit of work and shouldn't be in scope for this issue:
- pyasn1 - replace with asn1crypto, which is used by cryptography/pyOpenSSL
- ldap3 - only used for ldap proxy auth, which should probably live outside of the core once we have a healthy addon system.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2 from codecs import open
3
4 import re
5 from setuptools import setup, find_packages
6
7 # Based on https://github.com/pypa/sampleproject/blob/master/setup.py
8 # and https://python-packaging-user-guide.readthedocs.org/
9
10 here = os.path.abspath(os.path.dirname(__file__))
11
12 with open(os.path.join(here, 'README.rst'), encoding='utf-8') as f:
13 long_description = f.read()
14
15 with open(os.path.join(here, "mitmproxy", "version.py")) as f:
16 VERSION = re.search(r'VERSION = "(.+?)(?:-0x|")', f.read()).group(1)
17
18 setup(
19 name="mitmproxy",
20 version=VERSION,
21 description="An interactive, SSL-capable, man-in-the-middle HTTP proxy for penetration testers and software developers.",
22 long_description=long_description,
23 url="http://mitmproxy.org",
24 author="Aldo Cortesi",
25 author_email="[email protected]",
26 license="MIT",
27 classifiers=[
28 "License :: OSI Approved :: MIT License",
29 "Development Status :: 5 - Production/Stable",
30 "Environment :: Console",
31 "Environment :: Console :: Curses",
32 "Operating System :: MacOS :: MacOS X",
33 "Operating System :: POSIX",
34 "Operating System :: Microsoft :: Windows",
35 "Programming Language :: Python",
36 "Programming Language :: Python :: 3",
37 "Programming Language :: Python :: 3 :: Only",
38 "Programming Language :: Python :: 3.5",
39 "Programming Language :: Python :: 3.6",
40 "Programming Language :: Python :: Implementation :: CPython",
41 "Topic :: Security",
42 "Topic :: Internet",
43 "Topic :: Internet :: WWW/HTTP",
44 "Topic :: Internet :: Proxy Servers",
45 "Topic :: Software Development :: Testing"
46 ],
47 packages=find_packages(include=[
48 "mitmproxy", "mitmproxy.*",
49 "pathod", "pathod.*",
50 ]),
51 include_package_data=True,
52 entry_points={
53 'console_scripts': [
54 "mitmproxy = mitmproxy.tools.main:mitmproxy",
55 "mitmdump = mitmproxy.tools.main:mitmdump",
56 "mitmweb = mitmproxy.tools.main:mitmweb",
57 "pathod = pathod.pathod_cmdline:go_pathod",
58 "pathoc = pathod.pathoc_cmdline:go_pathoc"
59 ]
60 },
61 # https://packaging.python.org/en/latest/requirements/#install-requires
62 # It is not considered best practice to use install_requires to pin dependencies to specific versions.
63 install_requires=[
64 "blinker>=1.4, <1.5",
65 "brotlipy>=0.7.0,<0.8",
66 "certifi>=2015.11.20.1", # no semver here - this should always be on the last release!
67 "click>=6.2, <7",
68 "cryptography>=2.1.4,<2.2",
69 'h11>=0.7.0,<0.8',
70 "h2>=3.0.1,<4",
71 "hyperframe>=5.1.0,<6",
72 "kaitaistruct>=0.7,<0.9",
73 "ldap3>=2.4,<2.5",
74 "passlib>=1.6.5, <1.8",
75 "pyasn1>=0.3.1,<0.5",
76 "pyOpenSSL>=17.5,<17.6",
77 "pyparsing>=2.1.3, <2.3",
78 "pyperclip>=1.6.0, <1.7",
79 "requests>=2.9.1, <3",
80 "ruamel.yaml>=0.13.2, <0.16",
81 "sortedcontainers>=1.5.4, <1.6",
82 "tornado>=4.3, <4.6",
83 "urwid>=2.0.1,<2.1",
84 "wsproto>=0.11.0,<0.12.0",
85 ],
86 extras_require={
87 ':sys_platform == "win32"': [
88 "pydivert>=2.0.3,<2.2",
89 ],
90 'dev': [
91 "flake8>=3.5, <3.6",
92 "Flask>=0.10.1, <0.13",
93 "mypy>=0.560,<0.561",
94 "pytest-cov>=2.5.1,<3",
95 "pytest-faulthandler>=1.3.1,<2",
96 "pytest-timeout>=1.2.1,<2",
97 "pytest-xdist>=1.22,<2",
98 "pytest>=3.3,<4",
99 "tox>=2.3, <3",
100 "rstcheck>=2.2, <4.0",
101 ],
102 'examples': [
103 "beautifulsoup4>=4.4.1, <4.7",
104 "Pillow>=4.3,<5.1",
105 ]
106 }
107 )
108
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -66,7 +66,6 @@
"certifi>=2015.11.20.1", # no semver here - this should always be on the last release!
"click>=6.2, <7",
"cryptography>=2.1.4,<2.2",
- 'h11>=0.7.0,<0.8',
"h2>=3.0.1,<4",
"hyperframe>=5.1.0,<6",
"kaitaistruct>=0.7,<0.9",
@@ -76,7 +75,6 @@
"pyOpenSSL>=17.5,<17.6",
"pyparsing>=2.1.3, <2.3",
"pyperclip>=1.6.0, <1.7",
- "requests>=2.9.1, <3",
"ruamel.yaml>=0.13.2, <0.16",
"sortedcontainers>=1.5.4, <1.6",
"tornado>=4.3, <4.6",
@@ -96,6 +94,7 @@
"pytest-timeout>=1.2.1,<2",
"pytest-xdist>=1.22,<2",
"pytest>=3.3,<4",
+ "requests>=2.9.1, <3",
"tox>=2.3, <3",
"rstcheck>=2.2, <4.0",
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -66,7 +66,6 @@\n \"certifi>=2015.11.20.1\", # no semver here - this should always be on the last release!\n \"click>=6.2, <7\",\n \"cryptography>=2.1.4,<2.2\",\n- 'h11>=0.7.0,<0.8',\n \"h2>=3.0.1,<4\",\n \"hyperframe>=5.1.0,<6\",\n \"kaitaistruct>=0.7,<0.9\",\n@@ -76,7 +75,6 @@\n \"pyOpenSSL>=17.5,<17.6\",\n \"pyparsing>=2.1.3, <2.3\",\n \"pyperclip>=1.6.0, <1.7\",\n- \"requests>=2.9.1, <3\",\n \"ruamel.yaml>=0.13.2, <0.16\",\n \"sortedcontainers>=1.5.4, <1.6\",\n \"tornado>=4.3, <4.6\",\n@@ -96,6 +94,7 @@\n \"pytest-timeout>=1.2.1,<2\",\n \"pytest-xdist>=1.22,<2\",\n \"pytest>=3.3,<4\",\n+ \"requests>=2.9.1, <3\",\n \"tox>=2.3, <3\",\n \"rstcheck>=2.2, <4.0\",\n ],\n", "issue": "Clean up dependencies\nSpring cleaning! We currently declare some dependencies which are either unused or can easily be substituted:\r\n\r\n - h11 - not used at all?\r\n - requests - tests + examples only.\r\n\r\nWe should IMHO also eventually consider removing the following dependencies, although that involves a bit of work and shouldn't be in scope for this issue:\r\n\r\n - pyasn1 - replace with asn1crypto, which is used by cryptography/pyOpenSSL\r\n - ldap3 - only used for ldap proxy auth, which should probably live outside of the core once we have a healthy addon system.\n", "before_files": [{"content": "import os\nfrom codecs import open\n\nimport re\nfrom setuptools import setup, find_packages\n\n# Based on https://github.com/pypa/sampleproject/blob/master/setup.py\n# and https://python-packaging-user-guide.readthedocs.org/\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(here, 'README.rst'), encoding='utf-8') as f:\n long_description = f.read()\n\nwith open(os.path.join(here, \"mitmproxy\", \"version.py\")) as f:\n VERSION = re.search(r'VERSION = \"(.+?)(?:-0x|\")', f.read()).group(1)\n\nsetup(\n name=\"mitmproxy\",\n version=VERSION,\n description=\"An interactive, SSL-capable, man-in-the-middle HTTP proxy for penetration testers and software developers.\",\n long_description=long_description,\n url=\"http://mitmproxy.org\",\n author=\"Aldo Cortesi\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Environment :: Console :: Curses\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Security\",\n \"Topic :: Internet\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: Proxy Servers\",\n \"Topic :: Software Development :: Testing\"\n ],\n packages=find_packages(include=[\n \"mitmproxy\", \"mitmproxy.*\",\n \"pathod\", \"pathod.*\",\n ]),\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n \"mitmproxy = mitmproxy.tools.main:mitmproxy\",\n \"mitmdump = mitmproxy.tools.main:mitmdump\",\n \"mitmweb = mitmproxy.tools.main:mitmweb\",\n \"pathod = pathod.pathod_cmdline:go_pathod\",\n \"pathoc = pathod.pathoc_cmdline:go_pathoc\"\n ]\n },\n # https://packaging.python.org/en/latest/requirements/#install-requires\n # It is not considered best 
practice to use install_requires to pin dependencies to specific versions.\n install_requires=[\n \"blinker>=1.4, <1.5\",\n \"brotlipy>=0.7.0,<0.8\",\n \"certifi>=2015.11.20.1\", # no semver here - this should always be on the last release!\n \"click>=6.2, <7\",\n \"cryptography>=2.1.4,<2.2\",\n 'h11>=0.7.0,<0.8',\n \"h2>=3.0.1,<4\",\n \"hyperframe>=5.1.0,<6\",\n \"kaitaistruct>=0.7,<0.9\",\n \"ldap3>=2.4,<2.5\",\n \"passlib>=1.6.5, <1.8\",\n \"pyasn1>=0.3.1,<0.5\",\n \"pyOpenSSL>=17.5,<17.6\",\n \"pyparsing>=2.1.3, <2.3\",\n \"pyperclip>=1.6.0, <1.7\",\n \"requests>=2.9.1, <3\",\n \"ruamel.yaml>=0.13.2, <0.16\",\n \"sortedcontainers>=1.5.4, <1.6\",\n \"tornado>=4.3, <4.6\",\n \"urwid>=2.0.1,<2.1\",\n \"wsproto>=0.11.0,<0.12.0\",\n ],\n extras_require={\n ':sys_platform == \"win32\"': [\n \"pydivert>=2.0.3,<2.2\",\n ],\n 'dev': [\n \"flake8>=3.5, <3.6\",\n \"Flask>=0.10.1, <0.13\",\n \"mypy>=0.560,<0.561\",\n \"pytest-cov>=2.5.1,<3\",\n \"pytest-faulthandler>=1.3.1,<2\",\n \"pytest-timeout>=1.2.1,<2\",\n \"pytest-xdist>=1.22,<2\",\n \"pytest>=3.3,<4\",\n \"tox>=2.3, <3\",\n \"rstcheck>=2.2, <4.0\",\n ],\n 'examples': [\n \"beautifulsoup4>=4.4.1, <4.7\",\n \"Pillow>=4.3,<5.1\",\n ]\n }\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\nfrom codecs import open\n\nimport re\nfrom setuptools import setup, find_packages\n\n# Based on https://github.com/pypa/sampleproject/blob/master/setup.py\n# and https://python-packaging-user-guide.readthedocs.org/\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(here, 'README.rst'), encoding='utf-8') as f:\n long_description = f.read()\n\nwith open(os.path.join(here, \"mitmproxy\", \"version.py\")) as f:\n VERSION = re.search(r'VERSION = \"(.+?)(?:-0x|\")', f.read()).group(1)\n\nsetup(\n name=\"mitmproxy\",\n version=VERSION,\n description=\"An interactive, SSL-capable, man-in-the-middle HTTP proxy for penetration testers and software developers.\",\n long_description=long_description,\n url=\"http://mitmproxy.org\",\n author=\"Aldo Cortesi\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Environment :: Console :: Curses\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Security\",\n \"Topic :: Internet\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: Proxy Servers\",\n \"Topic :: Software Development :: Testing\"\n ],\n packages=find_packages(include=[\n \"mitmproxy\", \"mitmproxy.*\",\n \"pathod\", \"pathod.*\",\n ]),\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n \"mitmproxy = mitmproxy.tools.main:mitmproxy\",\n \"mitmdump = mitmproxy.tools.main:mitmdump\",\n \"mitmweb = mitmproxy.tools.main:mitmweb\",\n \"pathod = pathod.pathod_cmdline:go_pathod\",\n \"pathoc = pathod.pathoc_cmdline:go_pathoc\"\n ]\n },\n # https://packaging.python.org/en/latest/requirements/#install-requires\n # It is not considered best practice to use install_requires to pin dependencies to specific versions.\n install_requires=[\n \"blinker>=1.4, <1.5\",\n 
\"brotlipy>=0.7.0,<0.8\",\n \"certifi>=2015.11.20.1\", # no semver here - this should always be on the last release!\n \"click>=6.2, <7\",\n \"cryptography>=2.1.4,<2.2\",\n \"h2>=3.0.1,<4\",\n \"hyperframe>=5.1.0,<6\",\n \"kaitaistruct>=0.7,<0.9\",\n \"ldap3>=2.4,<2.5\",\n \"passlib>=1.6.5, <1.8\",\n \"pyasn1>=0.3.1,<0.5\",\n \"pyOpenSSL>=17.5,<17.6\",\n \"pyparsing>=2.1.3, <2.3\",\n \"pyperclip>=1.6.0, <1.7\",\n \"ruamel.yaml>=0.13.2, <0.16\",\n \"sortedcontainers>=1.5.4, <1.6\",\n \"tornado>=4.3, <4.6\",\n \"urwid>=2.0.1,<2.1\",\n \"wsproto>=0.11.0,<0.12.0\",\n ],\n extras_require={\n ':sys_platform == \"win32\"': [\n \"pydivert>=2.0.3,<2.2\",\n ],\n 'dev': [\n \"flake8>=3.5, <3.6\",\n \"Flask>=0.10.1, <0.13\",\n \"mypy>=0.560,<0.561\",\n \"pytest-cov>=2.5.1,<3\",\n \"pytest-faulthandler>=1.3.1,<2\",\n \"pytest-timeout>=1.2.1,<2\",\n \"pytest-xdist>=1.22,<2\",\n \"pytest>=3.3,<4\",\n \"requests>=2.9.1, <3\",\n \"tox>=2.3, <3\",\n \"rstcheck>=2.2, <4.0\",\n ],\n 'examples': [\n \"beautifulsoup4>=4.4.1, <4.7\",\n \"Pillow>=4.3,<5.1\",\n ]\n }\n)\n", "path": "setup.py"}]} | 1,718 | 367 |
gh_patches_debug_33273 | rasdani/github-patches | git_diff | GeotrekCE__Geotrek-admin-1377 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The edit button should not be disabled when the user has the bypass-structure permission
Workaround: write url by hand (eg. "/trek/edit/1/").
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `geotrek/authent/models.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """
4 Models to manage users and profiles
5 """
6 from django.db import models
7 from django.contrib.auth.models import User
8 from django.conf import settings
9 from django.utils.translation import ugettext_lazy as _
10 from django.dispatch import receiver
11 from django.contrib.auth.signals import user_logged_in
12
13 from geotrek.common.utils import reify
14
15
16 class Structure(models.Model):
17 """
18 Represents an organisational structure, to which users are related.
19 """
20 name = models.CharField(max_length=256, verbose_name=_(u"Nom"))
21
22 def __unicode__(self):
23 return self.name
24
25 class Meta:
26 verbose_name = _(u"Structure")
27 verbose_name_plural = _(u"Structures")
28 ordering = ['name']
29 permissions = (("can_bypass_structure", _("Can by structure")),)
30
31
32 def default_structure():
33 """ Create default structure if necessary """
34 return Structure.objects.get_or_create(name=settings.DEFAULT_STRUCTURE_NAME)[0]
35
36
37 class StructureRelatedQuerySet(models.query.QuerySet):
38 def for_user(self, user):
39 return StructureRelatedQuerySet.queryset_for_user(self, user)
40
41 @staticmethod
42 def queryset_for_user(queryset, user):
43 return queryset.filter(structure=user.profile.structure)
44
45
46 class StructureRelatedManager(models.Manager):
47 """ A simple manager to manage structure related objects"""
48 def get_queryset(self):
49 return StructureRelatedQuerySet(self.model, using=self._db)
50
51 def for_user(self, user):
52 """ Filter by user's structure """
53 return self.get_queryset().for_user(user)
54
55
56 class StructureRelated(models.Model):
57 """
58 A mixin used for any entities that belong to a structure
59 """
60 structure = models.ForeignKey(Structure, default=default_structure,
61 verbose_name=_(u"Related structure"), db_column='structure')
62
63 objects = models.Manager()
64 in_structure = StructureRelatedManager()
65
66 @classmethod
67 def for_user(cls, user):
68 """ Shortcut to manager's filter by user """
69 return cls.in_structure.for_user(user)
70
71 def same_structure(self, user):
72 """ Returns True if the user is in the same structure, False otherwise. """
73 return user.profile.structure == self.structure
74
75 class Meta:
76 abstract = True
77 verbose_name = _(u"Related structures")
78 verbose_name_plural = _(u"Related structure")
79
80
81 class UserProfile(StructureRelated):
82 """
83 A custom user profile
84 """
85 user = models.OneToOneField(User, unique=True)
86
87 language = models.CharField(_(u"Language"), max_length=10,
88 choices=settings.LANGUAGES,
89 default=settings.LANGUAGE_CODE)
90
91 class Meta:
92 verbose_name = _(u"User's profile")
93 verbose_name_plural = _(u"User's profiles")
94
95 def __unicode__(self):
96 return _("Profile for %s") % self.user
97
98 User.profile = reify(lambda u: UserProfile.objects.get_or_create(user=u)[0])
99
100
101 @receiver(user_logged_in)
102 def lang(sender, **kwargs):
103 """ Set user's language in session when he logs in. """
104 lang_code = kwargs['user'].profile.language
105 kwargs['request'].session['django_language'] = lang_code
106
```
Path: `geotrek/authent/decorators.py`
Content:
```
1 from functools import wraps
2
3 from django.shortcuts import redirect
4 from django.contrib import messages
5 from django.http import HttpResponseRedirect
6 from django.utils.translation import ugettext_lazy as _
7 from django.utils.decorators import available_attrs
8
9
10 def same_structure_required(redirect_to):
11 """
12 A decorator for class-based views. It relies on ``self.get_object()``
13 method object, and assumes decorated views to handle ``StructureRelated``
14 objects.
15 """
16 def decorator(view_func):
17 @wraps(view_func, assigned=available_attrs(view_func))
18 def _wrapped_view(self, request, *args, **kwargs):
19 result = view_func(self, request, *args, **kwargs)
20
21 # Superuser is always allowed
22 if request.user.is_superuser:
23 return result
24
25 if isinstance(result, HttpResponseRedirect):
26 return result
27
28 can_bypass_structure = request.user.has_perm('can_bypass_structure')
29 obj = hasattr(self, 'get_object') and self.get_object() or getattr(self, 'object', None)
30 if can_bypass_structure or (obj and obj.same_structure(request.user)):
31 return result
32 messages.warning(request, _(u'Access to the requested resource is restricted by structure. You have been redirected.'))
33
34 return redirect(redirect_to, *args, **kwargs)
35 return _wrapped_view
36 return decorator
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/geotrek/authent/decorators.py b/geotrek/authent/decorators.py
--- a/geotrek/authent/decorators.py
+++ b/geotrek/authent/decorators.py
@@ -18,16 +18,11 @@
def _wrapped_view(self, request, *args, **kwargs):
result = view_func(self, request, *args, **kwargs)
- # Superuser is always allowed
- if request.user.is_superuser:
- return result
-
if isinstance(result, HttpResponseRedirect):
return result
- can_bypass_structure = request.user.has_perm('can_bypass_structure')
obj = hasattr(self, 'get_object') and self.get_object() or getattr(self, 'object', None)
- if can_bypass_structure or (obj and obj.same_structure(request.user)):
+ if obj.same_structure(request.user):
return result
messages.warning(request, _(u'Access to the requested resource is restricted by structure. You have been redirected.'))
diff --git a/geotrek/authent/models.py b/geotrek/authent/models.py
--- a/geotrek/authent/models.py
+++ b/geotrek/authent/models.py
@@ -26,7 +26,7 @@
verbose_name = _(u"Structure")
verbose_name_plural = _(u"Structures")
ordering = ['name']
- permissions = (("can_bypass_structure", _("Can by structure")),)
+ permissions = (("can_bypass_structure", _("Can bypass structure")),)
def default_structure():
@@ -69,8 +69,11 @@
return cls.in_structure.for_user(user)
def same_structure(self, user):
- """ Returns True if the user is in the same structure, False otherwise. """
- return user.profile.structure == self.structure
+ """ Returns True if the user is in the same structure or has
+ bypass_structure permission, False otherwise. """
+ return (user.profile.structure == self.structure or
+ user.is_superuser or
+ user.has_perm('authent.can_bypass_structure'))
class Meta:
abstract = True
| {"golden_diff": "diff --git a/geotrek/authent/decorators.py b/geotrek/authent/decorators.py\n--- a/geotrek/authent/decorators.py\n+++ b/geotrek/authent/decorators.py\n@@ -18,16 +18,11 @@\n def _wrapped_view(self, request, *args, **kwargs):\n result = view_func(self, request, *args, **kwargs)\n \n- # Superuser is always allowed\n- if request.user.is_superuser:\n- return result\n-\n if isinstance(result, HttpResponseRedirect):\n return result\n \n- can_bypass_structure = request.user.has_perm('can_bypass_structure')\n obj = hasattr(self, 'get_object') and self.get_object() or getattr(self, 'object', None)\n- if can_bypass_structure or (obj and obj.same_structure(request.user)):\n+ if obj.same_structure(request.user):\n return result\n messages.warning(request, _(u'Access to the requested resource is restricted by structure. You have been redirected.'))\n \ndiff --git a/geotrek/authent/models.py b/geotrek/authent/models.py\n--- a/geotrek/authent/models.py\n+++ b/geotrek/authent/models.py\n@@ -26,7 +26,7 @@\n verbose_name = _(u\"Structure\")\n verbose_name_plural = _(u\"Structures\")\n ordering = ['name']\n- permissions = ((\"can_bypass_structure\", _(\"Can by structure\")),)\n+ permissions = ((\"can_bypass_structure\", _(\"Can bypass structure\")),)\n \n \n def default_structure():\n@@ -69,8 +69,11 @@\n return cls.in_structure.for_user(user)\n \n def same_structure(self, user):\n- \"\"\" Returns True if the user is in the same structure, False otherwise. \"\"\"\n- return user.profile.structure == self.structure\n+ \"\"\" Returns True if the user is in the same structure or has\n+ bypass_structure permission, False otherwise. \"\"\"\n+ return (user.profile.structure == self.structure or\n+ user.is_superuser or\n+ user.has_perm('authent.can_bypass_structure'))\n \n class Meta:\n abstract = True\n", "issue": "Should not disable edit button if having bypass structure permission\nWorkaround: write url by hand (eg. 
\"/trek/edit/1/\").\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\n Models to manage users and profiles\n\"\"\"\nfrom django.db import models\nfrom django.contrib.auth.models import User\nfrom django.conf import settings\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.dispatch import receiver\nfrom django.contrib.auth.signals import user_logged_in\n\nfrom geotrek.common.utils import reify\n\n\nclass Structure(models.Model):\n \"\"\"\n Represents an organisational structure, to which users are related.\n \"\"\"\n name = models.CharField(max_length=256, verbose_name=_(u\"Nom\"))\n\n def __unicode__(self):\n return self.name\n\n class Meta:\n verbose_name = _(u\"Structure\")\n verbose_name_plural = _(u\"Structures\")\n ordering = ['name']\n permissions = ((\"can_bypass_structure\", _(\"Can by structure\")),)\n\n\ndef default_structure():\n \"\"\" Create default structure if necessary \"\"\"\n return Structure.objects.get_or_create(name=settings.DEFAULT_STRUCTURE_NAME)[0]\n\n\nclass StructureRelatedQuerySet(models.query.QuerySet):\n def for_user(self, user):\n return StructureRelatedQuerySet.queryset_for_user(self, user)\n\n @staticmethod\n def queryset_for_user(queryset, user):\n return queryset.filter(structure=user.profile.structure)\n\n\nclass StructureRelatedManager(models.Manager):\n \"\"\" A simple manager to manage structure related objects\"\"\"\n def get_queryset(self):\n return StructureRelatedQuerySet(self.model, using=self._db)\n\n def for_user(self, user):\n \"\"\" Filter by user's structure \"\"\"\n return self.get_queryset().for_user(user)\n\n\nclass StructureRelated(models.Model):\n \"\"\"\n A mixin used for any entities that belong to a structure\n \"\"\"\n structure = models.ForeignKey(Structure, default=default_structure,\n verbose_name=_(u\"Related structure\"), db_column='structure')\n\n objects = models.Manager()\n in_structure = StructureRelatedManager()\n\n @classmethod\n def for_user(cls, user):\n \"\"\" Shortcut to manager's filter by user \"\"\"\n return cls.in_structure.for_user(user)\n\n def same_structure(self, user):\n \"\"\" Returns True if the user is in the same structure, False otherwise. \"\"\"\n return user.profile.structure == self.structure\n\n class Meta:\n abstract = True\n verbose_name = _(u\"Related structures\")\n verbose_name_plural = _(u\"Related structure\")\n\n\nclass UserProfile(StructureRelated):\n \"\"\"\n A custom user profile\n \"\"\"\n user = models.OneToOneField(User, unique=True)\n\n language = models.CharField(_(u\"Language\"), max_length=10,\n choices=settings.LANGUAGES,\n default=settings.LANGUAGE_CODE)\n\n class Meta:\n verbose_name = _(u\"User's profile\")\n verbose_name_plural = _(u\"User's profiles\")\n\n def __unicode__(self):\n return _(\"Profile for %s\") % self.user\n\nUser.profile = reify(lambda u: UserProfile.objects.get_or_create(user=u)[0])\n\n\n@receiver(user_logged_in)\ndef lang(sender, **kwargs):\n \"\"\" Set user's language in session when he logs in. \"\"\"\n lang_code = kwargs['user'].profile.language\n kwargs['request'].session['django_language'] = lang_code\n", "path": "geotrek/authent/models.py"}, {"content": "from functools import wraps\n\nfrom django.shortcuts import redirect\nfrom django.contrib import messages\nfrom django.http import HttpResponseRedirect\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.utils.decorators import available_attrs\n\n\ndef same_structure_required(redirect_to):\n \"\"\"\n A decorator for class-based views. 
It relies on ``self.get_object()``\n method object, and assumes decorated views to handle ``StructureRelated``\n objects.\n \"\"\"\n def decorator(view_func):\n @wraps(view_func, assigned=available_attrs(view_func))\n def _wrapped_view(self, request, *args, **kwargs):\n result = view_func(self, request, *args, **kwargs)\n\n # Superuser is always allowed\n if request.user.is_superuser:\n return result\n\n if isinstance(result, HttpResponseRedirect):\n return result\n\n can_bypass_structure = request.user.has_perm('can_bypass_structure')\n obj = hasattr(self, 'get_object') and self.get_object() or getattr(self, 'object', None)\n if can_bypass_structure or (obj and obj.same_structure(request.user)):\n return result\n messages.warning(request, _(u'Access to the requested resource is restricted by structure. You have been redirected.'))\n\n return redirect(redirect_to, *args, **kwargs)\n return _wrapped_view\n return decorator\n", "path": "geotrek/authent/decorators.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\n Models to manage users and profiles\n\"\"\"\nfrom django.db import models\nfrom django.contrib.auth.models import User\nfrom django.conf import settings\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.dispatch import receiver\nfrom django.contrib.auth.signals import user_logged_in\n\nfrom geotrek.common.utils import reify\n\n\nclass Structure(models.Model):\n \"\"\"\n Represents an organisational structure, to which users are related.\n \"\"\"\n name = models.CharField(max_length=256, verbose_name=_(u\"Nom\"))\n\n def __unicode__(self):\n return self.name\n\n class Meta:\n verbose_name = _(u\"Structure\")\n verbose_name_plural = _(u\"Structures\")\n ordering = ['name']\n permissions = ((\"can_bypass_structure\", _(\"Can bypass structure\")),)\n\n\ndef default_structure():\n \"\"\" Create default structure if necessary \"\"\"\n return Structure.objects.get_or_create(name=settings.DEFAULT_STRUCTURE_NAME)[0]\n\n\nclass StructureRelatedQuerySet(models.query.QuerySet):\n def for_user(self, user):\n return StructureRelatedQuerySet.queryset_for_user(self, user)\n\n @staticmethod\n def queryset_for_user(queryset, user):\n return queryset.filter(structure=user.profile.structure)\n\n\nclass StructureRelatedManager(models.Manager):\n \"\"\" A simple manager to manage structure related objects\"\"\"\n def get_queryset(self):\n return StructureRelatedQuerySet(self.model, using=self._db)\n\n def for_user(self, user):\n \"\"\" Filter by user's structure \"\"\"\n return self.get_queryset().for_user(user)\n\n\nclass StructureRelated(models.Model):\n \"\"\"\n A mixin used for any entities that belong to a structure\n \"\"\"\n structure = models.ForeignKey(Structure, default=default_structure,\n verbose_name=_(u\"Related structure\"), db_column='structure')\n\n objects = models.Manager()\n in_structure = StructureRelatedManager()\n\n @classmethod\n def for_user(cls, user):\n \"\"\" Shortcut to manager's filter by user \"\"\"\n return cls.in_structure.for_user(user)\n\n def same_structure(self, user):\n \"\"\" Returns True if the user is in the same structure or has\n bypass_structure permission, False otherwise. 
\"\"\"\n return (user.profile.structure == self.structure or\n user.is_superuser or\n user.has_perm('authent.can_bypass_structure'))\n\n class Meta:\n abstract = True\n verbose_name = _(u\"Related structures\")\n verbose_name_plural = _(u\"Related structure\")\n\n\nclass UserProfile(StructureRelated):\n \"\"\"\n A custom user profile\n \"\"\"\n user = models.OneToOneField(User, unique=True)\n\n language = models.CharField(_(u\"Language\"), max_length=10,\n choices=settings.LANGUAGES,\n default=settings.LANGUAGE_CODE)\n\n class Meta:\n verbose_name = _(u\"User's profile\")\n verbose_name_plural = _(u\"User's profiles\")\n\n def __unicode__(self):\n return _(\"Profile for %s\") % self.user\n\nUser.profile = reify(lambda u: UserProfile.objects.get_or_create(user=u)[0])\n\n\n@receiver(user_logged_in)\ndef lang(sender, **kwargs):\n \"\"\" Set user's language in session when he logs in. \"\"\"\n lang_code = kwargs['user'].profile.language\n kwargs['request'].session['django_language'] = lang_code\n", "path": "geotrek/authent/models.py"}, {"content": "from functools import wraps\n\nfrom django.shortcuts import redirect\nfrom django.contrib import messages\nfrom django.http import HttpResponseRedirect\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.utils.decorators import available_attrs\n\n\ndef same_structure_required(redirect_to):\n \"\"\"\n A decorator for class-based views. It relies on ``self.get_object()``\n method object, and assumes decorated views to handle ``StructureRelated``\n objects.\n \"\"\"\n def decorator(view_func):\n @wraps(view_func, assigned=available_attrs(view_func))\n def _wrapped_view(self, request, *args, **kwargs):\n result = view_func(self, request, *args, **kwargs)\n\n if isinstance(result, HttpResponseRedirect):\n return result\n\n obj = hasattr(self, 'get_object') and self.get_object() or getattr(self, 'object', None)\n if obj.same_structure(request.user):\n return result\n messages.warning(request, _(u'Access to the requested resource is restricted by structure. You have been redirected.'))\n\n return redirect(redirect_to, *args, **kwargs)\n return _wrapped_view\n return decorator\n", "path": "geotrek/authent/decorators.py"}]} | 1,548 | 465 |
gh_patches_debug_19323 | rasdani/github-patches | git_diff | PokemonGoF__PokemonGo-Bot-5036 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Crash on Level Up
I'm going to guess this is an issue with:
https://github.com/PokemonGoF/PokemonGo-Bot/pull/5016
which is also the version I'm on.
```
Traceback (most recent call last):
File "pokecli.py", line 781, in <module>
main()
File "pokecli.py", line 139, in main
bot.tick()
File "C:\Users\Steve\Downloads\PokemonGo-Bot\pokemongo_bot\__init__.py", line 658, in tick
if worker.work() == WorkerResult.RUNNING:
File "C:\Users\Steve\Downloads\PokemonGo-Bot\pokemongo_bot\cell_workers\collect_level_up_reward.py", line 37, in work
self._collect_level_reward()
File "C:\Users\Steve\Downloads\PokemonGo-Bot\pokemongo_bot\cell_workers\collect_level_up_reward.py", line 70, in _collect_level_reward
'items': ', '.join(["{}x {}".format(data[x], x) for x in data])
TypeError: list indices must be integers, not dict
```
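For clarity, a minimal sketch (sample data is made up, not taken from the game API) of why this comprehension raises: `items_awarded` comes back as a list of dicts, so `data[x]` tries to index the list with a dict element.

```python
# Sketch only: reproduces the TypeError from the traceback above.
data = [{'item_id': 3, 'name': 'Ultraball', 'item_count': 10}]

try:
    ', '.join(["{}x {}".format(data[x], x) for x in data])
except TypeError as exc:
    print(exc)  # wording varies by Python version; the bot's Python 2 log
                # shows "list indices must be integers, not dict"

# Iterating the dicts directly sidesteps the bad list index:
print(', '.join("{}x {}".format(x['item_count'], x['name']) for x in data))
```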
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pokemongo_bot/cell_workers/collect_level_up_reward.py`
Content:
```
1 import sys
2
3 from pokemongo_bot.base_task import BaseTask
4 from pokemongo_bot import inventory
5
6
7 class CollectLevelUpReward(BaseTask):
8 SUPPORTED_TASK_API_VERSION = 1
9
10 current_level = 0
11 previous_level = 0
12
13 def initialize(self):
14 self._process_config()
15 self.current_level = inventory.player().level
16 self.previous_level = 0
17
18 def work(self):
19 if self._should_run():
20 self.current_level = inventory.player().level
21
22 if self.collect_reward:
23 # let's check level reward on bot initialization
24 # to be able get rewards for old bots
25 if self.previous_level == 0:
26 self._collect_level_reward()
27 # level up situation
28 elif self.current_level > self.previous_level:
29 self.emit_event(
30 'level_up',
31 formatted='Level up from {previous_level} to {current_level}',
32 data={
33 'previous_level': self.previous_level,
34 'current_level': self.current_level
35 }
36 )
37 self._collect_level_reward()
38
39 if self.level_limit != -1 and self.current_level >= self.level_limit:
40 sys.exit("You have reached your target level! Exiting now.")
41
42 self.previous_level = self.current_level
43
44 def _process_config(self):
45 self.level_limit = self.config.get('level_limit', -1)
46 self.collect_reward = self.config.get('collect_reward', True)
47
48 def _should_run(self):
49 return self.level_limit != -1 or self.collect_reward
50
51 def _collect_level_reward(self):
52 response_dict = self.bot.api.level_up_rewards(level=self.current_level)
53 if 'status_code' in response_dict and response_dict['status_code'] == 1:
54 data = (response_dict
55 .get('responses', {})
56 .get('LEVEL_UP_REWARDS', {})
57 .get('items_awarded', []))
58
59 for item in data:
60 if 'item_id' in item and str(item['item_id']) in self.bot.item_list:
61 got_item = self.bot.item_list[str(item['item_id'])]
62 item['name'] = got_item
63 count = 'item_count' in item and item['item_count'] or 0
64 inventory.items().get(item['item_id']).add(count)
65 try:
66 self.emit_event(
67 'level_up_reward',
68 formatted='Received level up reward: {items}',
69 data={
70 'items': ', '.join(["{}x {}".format(data[x], x) for x in data])
71 }
72 )
73 except TypeError:
74 pass
75
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pokemongo_bot/cell_workers/collect_level_up_reward.py b/pokemongo_bot/cell_workers/collect_level_up_reward.py
--- a/pokemongo_bot/cell_workers/collect_level_up_reward.py
+++ b/pokemongo_bot/cell_workers/collect_level_up_reward.py
@@ -62,13 +62,11 @@
item['name'] = got_item
count = 'item_count' in item and item['item_count'] or 0
inventory.items().get(item['item_id']).add(count)
- try:
- self.emit_event(
- 'level_up_reward',
- formatted='Received level up reward: {items}',
- data={
- 'items': ', '.join(["{}x {}".format(data[x], x) for x in data])
- }
- )
- except TypeError:
- pass
+ self.emit_event(
+ 'level_up_reward',
+ formatted='Received level up reward: {items}',
+ data={
+ # [{'item_id': 3, 'name': u'Ultraball', 'item_count': 10}, {'item_id': 103, 'name': u'Hyper Potion', 'item_count': 10}]
+ 'items': ', '.join(["{}x {}".format(x['item_count'], x['name']) for x in data])
+ }
+ )
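As a quick sanity check of the corrected expression, a small sketch using the sample data quoted in the patch comment above:

```python
# Sketch only: sample data mirrors the comment added in the patch.
data = [
    {'item_id': 3, 'name': 'Ultraball', 'item_count': 10},
    {'item_id': 103, 'name': 'Hyper Potion', 'item_count': 10},
]
items = ', '.join("{}x {}".format(x['item_count'], x['name']) for x in data)
print(items)  # -> 10x Ultraball, 10x Hyper Potion
```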
| {"golden_diff": "diff --git a/pokemongo_bot/cell_workers/collect_level_up_reward.py b/pokemongo_bot/cell_workers/collect_level_up_reward.py\n--- a/pokemongo_bot/cell_workers/collect_level_up_reward.py\n+++ b/pokemongo_bot/cell_workers/collect_level_up_reward.py\n@@ -62,13 +62,11 @@\n item['name'] = got_item\n count = 'item_count' in item and item['item_count'] or 0\n inventory.items().get(item['item_id']).add(count)\n- try:\n- self.emit_event(\n- 'level_up_reward',\n- formatted='Received level up reward: {items}',\n- data={\n- 'items': ', '.join([\"{}x {}\".format(data[x], x) for x in data])\n- }\n- )\n- except TypeError:\n- pass\n+ self.emit_event(\n+ 'level_up_reward',\n+ formatted='Received level up reward: {items}',\n+ data={\n+ # [{'item_id': 3, 'name': u'Ultraball', 'item_count': 10}, {'item_id': 103, 'name': u'Hyper Potion', 'item_count': 10}]\n+ 'items': ', '.join([\"{}x {}\".format(x['item_count'], x['name']) for x in data])\n+ }\n+ )\n", "issue": "Crash on Level Up\nI'm gonna guess an issue with:\nhttps://github.com/PokemonGoF/PokemonGo-Bot/pull/5016\n\nwhich is also the version im on\n\n```\nTraceback (most recent call last):\n File \"pokecli.py\", line 781, in <module>\n main()\n File \"pokecli.py\", line 139, in main\n bot.tick()\n File \"C:\\Users\\Steve\\Downloads\\PokemonGo-Bot\\pokemongo_bot\\__init__.py\", line 658, in tick\n if worker.work() == WorkerResult.RUNNING:\n File \"C:\\Users\\Steve\\Downloads\\PokemonGo-Bot\\pokemongo_bot\\cell_workers\\collect_level_up_reward.py\", line 37, in work\n self._collect_level_reward()\n File \"C:\\Users\\Steve\\Downloads\\PokemonGo-Bot\\pokemongo_bot\\cell_workers\\collect_level_up_reward.py\", line 70, in _collect_level_reward\n 'items': ', '.join([\"{}x {}\".format(data[x], x) for x in data])\nTypeError: list indices must be integers, not dict\n```\n\n", "before_files": [{"content": "import sys\n\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot import inventory\n\n\nclass CollectLevelUpReward(BaseTask):\n SUPPORTED_TASK_API_VERSION = 1\n\n current_level = 0\n previous_level = 0\n\n def initialize(self):\n self._process_config()\n self.current_level = inventory.player().level\n self.previous_level = 0\n\n def work(self):\n if self._should_run():\n self.current_level = inventory.player().level\n\n if self.collect_reward:\n # let's check level reward on bot initialization\n # to be able get rewards for old bots\n if self.previous_level == 0:\n self._collect_level_reward()\n # level up situation\n elif self.current_level > self.previous_level:\n self.emit_event(\n 'level_up',\n formatted='Level up from {previous_level} to {current_level}',\n data={\n 'previous_level': self.previous_level,\n 'current_level': self.current_level\n }\n )\n self._collect_level_reward()\n\n if self.level_limit != -1 and self.current_level >= self.level_limit:\n sys.exit(\"You have reached your target level! 
Exiting now.\")\n\n self.previous_level = self.current_level\n\n def _process_config(self):\n self.level_limit = self.config.get('level_limit', -1)\n self.collect_reward = self.config.get('collect_reward', True)\n\n def _should_run(self):\n return self.level_limit != -1 or self.collect_reward\n\n def _collect_level_reward(self):\n response_dict = self.bot.api.level_up_rewards(level=self.current_level)\n if 'status_code' in response_dict and response_dict['status_code'] == 1:\n data = (response_dict\n .get('responses', {})\n .get('LEVEL_UP_REWARDS', {})\n .get('items_awarded', []))\n\n for item in data:\n if 'item_id' in item and str(item['item_id']) in self.bot.item_list:\n got_item = self.bot.item_list[str(item['item_id'])]\n item['name'] = got_item\n count = 'item_count' in item and item['item_count'] or 0\n inventory.items().get(item['item_id']).add(count)\n try:\n self.emit_event(\n 'level_up_reward',\n formatted='Received level up reward: {items}',\n data={\n 'items': ', '.join([\"{}x {}\".format(data[x], x) for x in data])\n }\n )\n except TypeError:\n pass\n", "path": "pokemongo_bot/cell_workers/collect_level_up_reward.py"}], "after_files": [{"content": "import sys\n\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot import inventory\n\n\nclass CollectLevelUpReward(BaseTask):\n SUPPORTED_TASK_API_VERSION = 1\n\n current_level = 0\n previous_level = 0\n\n def initialize(self):\n self._process_config()\n self.current_level = inventory.player().level\n self.previous_level = 0\n\n def work(self):\n if self._should_run():\n self.current_level = inventory.player().level\n\n if self.collect_reward:\n # let's check level reward on bot initialization\n # to be able get rewards for old bots\n if self.previous_level == 0:\n self._collect_level_reward()\n # level up situation\n elif self.current_level > self.previous_level:\n self.emit_event(\n 'level_up',\n formatted='Level up from {previous_level} to {current_level}',\n data={\n 'previous_level': self.previous_level,\n 'current_level': self.current_level\n }\n )\n self._collect_level_reward()\n\n if self.level_limit != -1 and self.current_level >= self.level_limit:\n sys.exit(\"You have reached your target level! Exiting now.\")\n\n self.previous_level = self.current_level\n\n def _process_config(self):\n self.level_limit = self.config.get('level_limit', -1)\n self.collect_reward = self.config.get('collect_reward', True)\n\n def _should_run(self):\n return self.level_limit != -1 or self.collect_reward\n\n def _collect_level_reward(self):\n response_dict = self.bot.api.level_up_rewards(level=self.current_level)\n if 'status_code' in response_dict and response_dict['status_code'] == 1:\n data = (response_dict\n .get('responses', {})\n .get('LEVEL_UP_REWARDS', {})\n .get('items_awarded', []))\n\n for item in data:\n if 'item_id' in item and str(item['item_id']) in self.bot.item_list:\n got_item = self.bot.item_list[str(item['item_id'])]\n item['name'] = got_item\n count = 'item_count' in item and item['item_count'] or 0\n inventory.items().get(item['item_id']).add(count)\n self.emit_event(\n 'level_up_reward',\n formatted='Received level up reward: {items}',\n data={\n # [{'item_id': 3, 'name': u'Ultraball', 'item_count': 10}, {'item_id': 103, 'name': u'Hyper Potion', 'item_count': 10}]\n 'items': ', '.join([\"{}x {}\".format(x['item_count'], x['name']) for x in data])\n }\n )\n", "path": "pokemongo_bot/cell_workers/collect_level_up_reward.py"}]} | 1,224 | 309 |
gh_patches_debug_10136 | rasdani/github-patches | git_diff | pytorch__vision-3591 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
This may have an extra "as"
https://github.com/pytorch/vision/blob/631ff9123edeedf044c4355ef99dc42a396feb5b/torchvision/models/detection/transform.py#L87
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/models/detection/transform.py`
Content:
```
1 import math
2 import torch
3 from torch import nn, Tensor
4 from torch.nn import functional as F
5 import torchvision
6 from typing import List, Tuple, Dict, Optional
7
8 from .image_list import ImageList
9 from .roi_heads import paste_masks_in_image
10
11
12 @torch.jit.unused
13 def _resize_image_and_masks_onnx(image, self_min_size, self_max_size, target):
14 # type: (Tensor, float, float, Optional[Dict[str, Tensor]]) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]
15 from torch.onnx import operators
16 im_shape = operators.shape_as_tensor(image)[-2:]
17 min_size = torch.min(im_shape).to(dtype=torch.float32)
18 max_size = torch.max(im_shape).to(dtype=torch.float32)
19 scale_factor = torch.min(self_min_size / min_size, self_max_size / max_size)
20
21 image = torch.nn.functional.interpolate(
22 image[None], scale_factor=scale_factor, mode='bilinear', recompute_scale_factor=True,
23 align_corners=False)[0]
24
25 if target is None:
26 return image, target
27
28 if "masks" in target:
29 mask = target["masks"]
30 mask = F.interpolate(mask[:, None].float(), scale_factor=scale_factor, recompute_scale_factor=True)[:, 0].byte()
31 target["masks"] = mask
32 return image, target
33
34
35 def _resize_image_and_masks(image, self_min_size, self_max_size, target):
36 # type: (Tensor, float, float, Optional[Dict[str, Tensor]]) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]
37 im_shape = torch.tensor(image.shape[-2:])
38 min_size = float(torch.min(im_shape))
39 max_size = float(torch.max(im_shape))
40 scale_factor = self_min_size / min_size
41 if max_size * scale_factor > self_max_size:
42 scale_factor = self_max_size / max_size
43 image = torch.nn.functional.interpolate(
44 image[None], scale_factor=scale_factor, mode='bilinear', recompute_scale_factor=True,
45 align_corners=False)[0]
46
47 if target is None:
48 return image, target
49
50 if "masks" in target:
51 mask = target["masks"]
52 mask = F.interpolate(mask[:, None].float(), scale_factor=scale_factor, recompute_scale_factor=True)[:, 0].byte()
53 target["masks"] = mask
54 return image, target
55
56
57 class GeneralizedRCNNTransform(nn.Module):
58 """
59 Performs input / target transformation before feeding the data to a GeneralizedRCNN
60 model.
61
62 The transformations it perform are:
63 - input normalization (mean subtraction and std division)
64 - input / target resizing to match min_size / max_size
65
66 It returns a ImageList for the inputs, and a List[Dict[Tensor]] for the targets
67 """
68
69 def __init__(self, min_size, max_size, image_mean, image_std):
70 super(GeneralizedRCNNTransform, self).__init__()
71 if not isinstance(min_size, (list, tuple)):
72 min_size = (min_size,)
73 self.min_size = min_size
74 self.max_size = max_size
75 self.image_mean = image_mean
76 self.image_std = image_std
77
78 def forward(self,
79 images, # type: List[Tensor]
80 targets=None # type: Optional[List[Dict[str, Tensor]]]
81 ):
82 # type: (...) -> Tuple[ImageList, Optional[List[Dict[str, Tensor]]]]
83 images = [img for img in images]
84 if targets is not None:
85 # make a copy of targets to avoid modifying it in-place
86 # once torchscript supports dict comprehension
87 # this can be simplified as as follows
88 # targets = [{k: v for k,v in t.items()} for t in targets]
89 targets_copy: List[Dict[str, Tensor]] = []
90 for t in targets:
91 data: Dict[str, Tensor] = {}
92 for k, v in t.items():
93 data[k] = v
94 targets_copy.append(data)
95 targets = targets_copy
96 for i in range(len(images)):
97 image = images[i]
98 target_index = targets[i] if targets is not None else None
99
100 if image.dim() != 3:
101 raise ValueError("images is expected to be a list of 3d tensors "
102 "of shape [C, H, W], got {}".format(image.shape))
103 image = self.normalize(image)
104 image, target_index = self.resize(image, target_index)
105 images[i] = image
106 if targets is not None and target_index is not None:
107 targets[i] = target_index
108
109 image_sizes = [img.shape[-2:] for img in images]
110 images = self.batch_images(images)
111 image_sizes_list: List[Tuple[int, int]] = []
112 for image_size in image_sizes:
113 assert len(image_size) == 2
114 image_sizes_list.append((image_size[0], image_size[1]))
115
116 image_list = ImageList(images, image_sizes_list)
117 return image_list, targets
118
119 def normalize(self, image):
120 if not image.is_floating_point():
121 raise TypeError(
122 f"Expected input images to be of floating type (in range [0, 1]), "
123 f"but found type {image.dtype} instead"
124 )
125 dtype, device = image.dtype, image.device
126 mean = torch.as_tensor(self.image_mean, dtype=dtype, device=device)
127 std = torch.as_tensor(self.image_std, dtype=dtype, device=device)
128 return (image - mean[:, None, None]) / std[:, None, None]
129
130 def torch_choice(self, k):
131 # type: (List[int]) -> int
132 """
133 Implements `random.choice` via torch ops so it can be compiled with
134 TorchScript. Remove if https://github.com/pytorch/pytorch/issues/25803
135 is fixed.
136 """
137 index = int(torch.empty(1).uniform_(0., float(len(k))).item())
138 return k[index]
139
140 def resize(self, image, target):
141 # type: (Tensor, Optional[Dict[str, Tensor]]) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]
142 h, w = image.shape[-2:]
143 if self.training:
144 size = float(self.torch_choice(self.min_size))
145 else:
146 # FIXME assume for now that testing uses the largest scale
147 size = float(self.min_size[-1])
148 if torchvision._is_tracing():
149 image, target = _resize_image_and_masks_onnx(image, size, float(self.max_size), target)
150 else:
151 image, target = _resize_image_and_masks(image, size, float(self.max_size), target)
152
153 if target is None:
154 return image, target
155
156 bbox = target["boxes"]
157 bbox = resize_boxes(bbox, (h, w), image.shape[-2:])
158 target["boxes"] = bbox
159
160 if "keypoints" in target:
161 keypoints = target["keypoints"]
162 keypoints = resize_keypoints(keypoints, (h, w), image.shape[-2:])
163 target["keypoints"] = keypoints
164 return image, target
165
166 # _onnx_batch_images() is an implementation of
167 # batch_images() that is supported by ONNX tracing.
168 @torch.jit.unused
169 def _onnx_batch_images(self, images, size_divisible=32):
170 # type: (List[Tensor], int) -> Tensor
171 max_size = []
172 for i in range(images[0].dim()):
173 max_size_i = torch.max(torch.stack([img.shape[i] for img in images]).to(torch.float32)).to(torch.int64)
174 max_size.append(max_size_i)
175 stride = size_divisible
176 max_size[1] = (torch.ceil((max_size[1].to(torch.float32)) / stride) * stride).to(torch.int64)
177 max_size[2] = (torch.ceil((max_size[2].to(torch.float32)) / stride) * stride).to(torch.int64)
178 max_size = tuple(max_size)
179
180 # work around for
181 # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
182 # which is not yet supported in onnx
183 padded_imgs = []
184 for img in images:
185 padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))]
186 padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0]))
187 padded_imgs.append(padded_img)
188
189 return torch.stack(padded_imgs)
190
191 def max_by_axis(self, the_list):
192 # type: (List[List[int]]) -> List[int]
193 maxes = the_list[0]
194 for sublist in the_list[1:]:
195 for index, item in enumerate(sublist):
196 maxes[index] = max(maxes[index], item)
197 return maxes
198
199 def batch_images(self, images, size_divisible=32):
200 # type: (List[Tensor], int) -> Tensor
201 if torchvision._is_tracing():
202 # batch_images() does not export well to ONNX
203 # call _onnx_batch_images() instead
204 return self._onnx_batch_images(images, size_divisible)
205
206 max_size = self.max_by_axis([list(img.shape) for img in images])
207 stride = float(size_divisible)
208 max_size = list(max_size)
209 max_size[1] = int(math.ceil(float(max_size[1]) / stride) * stride)
210 max_size[2] = int(math.ceil(float(max_size[2]) / stride) * stride)
211
212 batch_shape = [len(images)] + max_size
213 batched_imgs = images[0].new_full(batch_shape, 0)
214 for img, pad_img in zip(images, batched_imgs):
215 pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
216
217 return batched_imgs
218
219 def postprocess(self,
220 result, # type: List[Dict[str, Tensor]]
221 image_shapes, # type: List[Tuple[int, int]]
222 original_image_sizes # type: List[Tuple[int, int]]
223 ):
224 # type: (...) -> List[Dict[str, Tensor]]
225 if self.training:
226 return result
227 for i, (pred, im_s, o_im_s) in enumerate(zip(result, image_shapes, original_image_sizes)):
228 boxes = pred["boxes"]
229 boxes = resize_boxes(boxes, im_s, o_im_s)
230 result[i]["boxes"] = boxes
231 if "masks" in pred:
232 masks = pred["masks"]
233 masks = paste_masks_in_image(masks, boxes, o_im_s)
234 result[i]["masks"] = masks
235 if "keypoints" in pred:
236 keypoints = pred["keypoints"]
237 keypoints = resize_keypoints(keypoints, im_s, o_im_s)
238 result[i]["keypoints"] = keypoints
239 return result
240
241 def __repr__(self):
242 format_string = self.__class__.__name__ + '('
243 _indent = '\n '
244 format_string += "{0}Normalize(mean={1}, std={2})".format(_indent, self.image_mean, self.image_std)
245 format_string += "{0}Resize(min_size={1}, max_size={2}, mode='bilinear')".format(_indent, self.min_size,
246 self.max_size)
247 format_string += '\n)'
248 return format_string
249
250
251 def resize_keypoints(keypoints, original_size, new_size):
252 # type: (Tensor, List[int], List[int]) -> Tensor
253 ratios = [
254 torch.tensor(s, dtype=torch.float32, device=keypoints.device) /
255 torch.tensor(s_orig, dtype=torch.float32, device=keypoints.device)
256 for s, s_orig in zip(new_size, original_size)
257 ]
258 ratio_h, ratio_w = ratios
259 resized_data = keypoints.clone()
260 if torch._C._get_tracing_state():
261 resized_data_0 = resized_data[:, :, 0] * ratio_w
262 resized_data_1 = resized_data[:, :, 1] * ratio_h
263 resized_data = torch.stack((resized_data_0, resized_data_1, resized_data[:, :, 2]), dim=2)
264 else:
265 resized_data[..., 0] *= ratio_w
266 resized_data[..., 1] *= ratio_h
267 return resized_data
268
269
270 def resize_boxes(boxes, original_size, new_size):
271 # type: (Tensor, List[int], List[int]) -> Tensor
272 ratios = [
273 torch.tensor(s, dtype=torch.float32, device=boxes.device) /
274 torch.tensor(s_orig, dtype=torch.float32, device=boxes.device)
275 for s, s_orig in zip(new_size, original_size)
276 ]
277 ratio_height, ratio_width = ratios
278 xmin, ymin, xmax, ymax = boxes.unbind(1)
279
280 xmin = xmin * ratio_width
281 xmax = xmax * ratio_width
282 ymin = ymin * ratio_height
283 ymax = ymax * ratio_height
284 return torch.stack((xmin, ymin, xmax, ymax), dim=1)
285
```
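As an aside on the file above, the class docstring says inputs are resized to match `min_size`/`max_size`; a hedged numeric sketch (the sizes are made up) of how `_resize_image_and_masks` picks the scale factor:

```python
# Illustrative only: mirrors the scale-factor logic in _resize_image_and_masks.
h, w = 600, 1000                               # assumed input image size
self_min_size, self_max_size = 800.0, 1333.0   # assumed detection defaults

min_size, max_size = float(min(h, w)), float(max(h, w))
scale_factor = self_min_size / min_size        # 800 / 600 ~= 1.333
if max_size * scale_factor > self_max_size:    # 1000 * 1.333 > 1333, so clamp
    scale_factor = self_max_size / max_size    # 1333 / 1000 = 1.333
print(round(scale_factor, 3), int(h * scale_factor), int(w * scale_factor))
# -> 1.333 799 1333
```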
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchvision/models/detection/transform.py b/torchvision/models/detection/transform.py
--- a/torchvision/models/detection/transform.py
+++ b/torchvision/models/detection/transform.py
@@ -84,7 +84,7 @@
if targets is not None:
# make a copy of targets to avoid modifying it in-place
# once torchscript supports dict comprehension
- # this can be simplified as as follows
+ # this can be simplified as follows
# targets = [{k: v for k,v in t.items()} for t in targets]
targets_copy: List[Dict[str, Tensor]] = []
for t in targets:
| {"golden_diff": "diff --git a/torchvision/models/detection/transform.py b/torchvision/models/detection/transform.py\n--- a/torchvision/models/detection/transform.py\n+++ b/torchvision/models/detection/transform.py\n@@ -84,7 +84,7 @@\n if targets is not None:\n # make a copy of targets to avoid modifying it in-place\n # once torchscript supports dict comprehension\n- # this can be simplified as as follows\n+ # this can be simplified as follows\n # targets = [{k: v for k,v in t.items()} for t in targets]\n targets_copy: List[Dict[str, Tensor]] = []\n for t in targets:\n", "issue": "This may be has extra \"as\"\nhttps://github.com/pytorch/vision/blob/631ff9123edeedf044c4355ef99dc42a396feb5b/torchvision/models/detection/transform.py#L87\n", "before_files": [{"content": "import math\nimport torch\nfrom torch import nn, Tensor\nfrom torch.nn import functional as F\nimport torchvision\nfrom typing import List, Tuple, Dict, Optional\n\nfrom .image_list import ImageList\nfrom .roi_heads import paste_masks_in_image\n\n\[email protected]\ndef _resize_image_and_masks_onnx(image, self_min_size, self_max_size, target):\n # type: (Tensor, float, float, Optional[Dict[str, Tensor]]) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]\n from torch.onnx import operators\n im_shape = operators.shape_as_tensor(image)[-2:]\n min_size = torch.min(im_shape).to(dtype=torch.float32)\n max_size = torch.max(im_shape).to(dtype=torch.float32)\n scale_factor = torch.min(self_min_size / min_size, self_max_size / max_size)\n\n image = torch.nn.functional.interpolate(\n image[None], scale_factor=scale_factor, mode='bilinear', recompute_scale_factor=True,\n align_corners=False)[0]\n\n if target is None:\n return image, target\n\n if \"masks\" in target:\n mask = target[\"masks\"]\n mask = F.interpolate(mask[:, None].float(), scale_factor=scale_factor, recompute_scale_factor=True)[:, 0].byte()\n target[\"masks\"] = mask\n return image, target\n\n\ndef _resize_image_and_masks(image, self_min_size, self_max_size, target):\n # type: (Tensor, float, float, Optional[Dict[str, Tensor]]) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]\n im_shape = torch.tensor(image.shape[-2:])\n min_size = float(torch.min(im_shape))\n max_size = float(torch.max(im_shape))\n scale_factor = self_min_size / min_size\n if max_size * scale_factor > self_max_size:\n scale_factor = self_max_size / max_size\n image = torch.nn.functional.interpolate(\n image[None], scale_factor=scale_factor, mode='bilinear', recompute_scale_factor=True,\n align_corners=False)[0]\n\n if target is None:\n return image, target\n\n if \"masks\" in target:\n mask = target[\"masks\"]\n mask = F.interpolate(mask[:, None].float(), scale_factor=scale_factor, recompute_scale_factor=True)[:, 0].byte()\n target[\"masks\"] = mask\n return image, target\n\n\nclass GeneralizedRCNNTransform(nn.Module):\n \"\"\"\n Performs input / target transformation before feeding the data to a GeneralizedRCNN\n model.\n\n The transformations it perform are:\n - input normalization (mean subtraction and std division)\n - input / target resizing to match min_size / max_size\n\n It returns a ImageList for the inputs, and a List[Dict[Tensor]] for the targets\n \"\"\"\n\n def __init__(self, min_size, max_size, image_mean, image_std):\n super(GeneralizedRCNNTransform, self).__init__()\n if not isinstance(min_size, (list, tuple)):\n min_size = (min_size,)\n self.min_size = min_size\n self.max_size = max_size\n self.image_mean = image_mean\n self.image_std = image_std\n\n def forward(self,\n images, # type: 
List[Tensor]\n targets=None # type: Optional[List[Dict[str, Tensor]]]\n ):\n # type: (...) -> Tuple[ImageList, Optional[List[Dict[str, Tensor]]]]\n images = [img for img in images]\n if targets is not None:\n # make a copy of targets to avoid modifying it in-place\n # once torchscript supports dict comprehension\n # this can be simplified as as follows\n # targets = [{k: v for k,v in t.items()} for t in targets]\n targets_copy: List[Dict[str, Tensor]] = []\n for t in targets:\n data: Dict[str, Tensor] = {}\n for k, v in t.items():\n data[k] = v\n targets_copy.append(data)\n targets = targets_copy\n for i in range(len(images)):\n image = images[i]\n target_index = targets[i] if targets is not None else None\n\n if image.dim() != 3:\n raise ValueError(\"images is expected to be a list of 3d tensors \"\n \"of shape [C, H, W], got {}\".format(image.shape))\n image = self.normalize(image)\n image, target_index = self.resize(image, target_index)\n images[i] = image\n if targets is not None and target_index is not None:\n targets[i] = target_index\n\n image_sizes = [img.shape[-2:] for img in images]\n images = self.batch_images(images)\n image_sizes_list: List[Tuple[int, int]] = []\n for image_size in image_sizes:\n assert len(image_size) == 2\n image_sizes_list.append((image_size[0], image_size[1]))\n\n image_list = ImageList(images, image_sizes_list)\n return image_list, targets\n\n def normalize(self, image):\n if not image.is_floating_point():\n raise TypeError(\n f\"Expected input images to be of floating type (in range [0, 1]), \"\n f\"but found type {image.dtype} instead\"\n )\n dtype, device = image.dtype, image.device\n mean = torch.as_tensor(self.image_mean, dtype=dtype, device=device)\n std = torch.as_tensor(self.image_std, dtype=dtype, device=device)\n return (image - mean[:, None, None]) / std[:, None, None]\n\n def torch_choice(self, k):\n # type: (List[int]) -> int\n \"\"\"\n Implements `random.choice` via torch ops so it can be compiled with\n TorchScript. 
Remove if https://github.com/pytorch/pytorch/issues/25803\n is fixed.\n \"\"\"\n index = int(torch.empty(1).uniform_(0., float(len(k))).item())\n return k[index]\n\n def resize(self, image, target):\n # type: (Tensor, Optional[Dict[str, Tensor]]) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]\n h, w = image.shape[-2:]\n if self.training:\n size = float(self.torch_choice(self.min_size))\n else:\n # FIXME assume for now that testing uses the largest scale\n size = float(self.min_size[-1])\n if torchvision._is_tracing():\n image, target = _resize_image_and_masks_onnx(image, size, float(self.max_size), target)\n else:\n image, target = _resize_image_and_masks(image, size, float(self.max_size), target)\n\n if target is None:\n return image, target\n\n bbox = target[\"boxes\"]\n bbox = resize_boxes(bbox, (h, w), image.shape[-2:])\n target[\"boxes\"] = bbox\n\n if \"keypoints\" in target:\n keypoints = target[\"keypoints\"]\n keypoints = resize_keypoints(keypoints, (h, w), image.shape[-2:])\n target[\"keypoints\"] = keypoints\n return image, target\n\n # _onnx_batch_images() is an implementation of\n # batch_images() that is supported by ONNX tracing.\n @torch.jit.unused\n def _onnx_batch_images(self, images, size_divisible=32):\n # type: (List[Tensor], int) -> Tensor\n max_size = []\n for i in range(images[0].dim()):\n max_size_i = torch.max(torch.stack([img.shape[i] for img in images]).to(torch.float32)).to(torch.int64)\n max_size.append(max_size_i)\n stride = size_divisible\n max_size[1] = (torch.ceil((max_size[1].to(torch.float32)) / stride) * stride).to(torch.int64)\n max_size[2] = (torch.ceil((max_size[2].to(torch.float32)) / stride) * stride).to(torch.int64)\n max_size = tuple(max_size)\n\n # work around for\n # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)\n # which is not yet supported in onnx\n padded_imgs = []\n for img in images:\n padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))]\n padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0]))\n padded_imgs.append(padded_img)\n\n return torch.stack(padded_imgs)\n\n def max_by_axis(self, the_list):\n # type: (List[List[int]]) -> List[int]\n maxes = the_list[0]\n for sublist in the_list[1:]:\n for index, item in enumerate(sublist):\n maxes[index] = max(maxes[index], item)\n return maxes\n\n def batch_images(self, images, size_divisible=32):\n # type: (List[Tensor], int) -> Tensor\n if torchvision._is_tracing():\n # batch_images() does not export well to ONNX\n # call _onnx_batch_images() instead\n return self._onnx_batch_images(images, size_divisible)\n\n max_size = self.max_by_axis([list(img.shape) for img in images])\n stride = float(size_divisible)\n max_size = list(max_size)\n max_size[1] = int(math.ceil(float(max_size[1]) / stride) * stride)\n max_size[2] = int(math.ceil(float(max_size[2]) / stride) * stride)\n\n batch_shape = [len(images)] + max_size\n batched_imgs = images[0].new_full(batch_shape, 0)\n for img, pad_img in zip(images, batched_imgs):\n pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)\n\n return batched_imgs\n\n def postprocess(self,\n result, # type: List[Dict[str, Tensor]]\n image_shapes, # type: List[Tuple[int, int]]\n original_image_sizes # type: List[Tuple[int, int]]\n ):\n # type: (...) 
-> List[Dict[str, Tensor]]\n if self.training:\n return result\n for i, (pred, im_s, o_im_s) in enumerate(zip(result, image_shapes, original_image_sizes)):\n boxes = pred[\"boxes\"]\n boxes = resize_boxes(boxes, im_s, o_im_s)\n result[i][\"boxes\"] = boxes\n if \"masks\" in pred:\n masks = pred[\"masks\"]\n masks = paste_masks_in_image(masks, boxes, o_im_s)\n result[i][\"masks\"] = masks\n if \"keypoints\" in pred:\n keypoints = pred[\"keypoints\"]\n keypoints = resize_keypoints(keypoints, im_s, o_im_s)\n result[i][\"keypoints\"] = keypoints\n return result\n\n def __repr__(self):\n format_string = self.__class__.__name__ + '('\n _indent = '\\n '\n format_string += \"{0}Normalize(mean={1}, std={2})\".format(_indent, self.image_mean, self.image_std)\n format_string += \"{0}Resize(min_size={1}, max_size={2}, mode='bilinear')\".format(_indent, self.min_size,\n self.max_size)\n format_string += '\\n)'\n return format_string\n\n\ndef resize_keypoints(keypoints, original_size, new_size):\n # type: (Tensor, List[int], List[int]) -> Tensor\n ratios = [\n torch.tensor(s, dtype=torch.float32, device=keypoints.device) /\n torch.tensor(s_orig, dtype=torch.float32, device=keypoints.device)\n for s, s_orig in zip(new_size, original_size)\n ]\n ratio_h, ratio_w = ratios\n resized_data = keypoints.clone()\n if torch._C._get_tracing_state():\n resized_data_0 = resized_data[:, :, 0] * ratio_w\n resized_data_1 = resized_data[:, :, 1] * ratio_h\n resized_data = torch.stack((resized_data_0, resized_data_1, resized_data[:, :, 2]), dim=2)\n else:\n resized_data[..., 0] *= ratio_w\n resized_data[..., 1] *= ratio_h\n return resized_data\n\n\ndef resize_boxes(boxes, original_size, new_size):\n # type: (Tensor, List[int], List[int]) -> Tensor\n ratios = [\n torch.tensor(s, dtype=torch.float32, device=boxes.device) /\n torch.tensor(s_orig, dtype=torch.float32, device=boxes.device)\n for s, s_orig in zip(new_size, original_size)\n ]\n ratio_height, ratio_width = ratios\n xmin, ymin, xmax, ymax = boxes.unbind(1)\n\n xmin = xmin * ratio_width\n xmax = xmax * ratio_width\n ymin = ymin * ratio_height\n ymax = ymax * ratio_height\n return torch.stack((xmin, ymin, xmax, ymax), dim=1)\n", "path": "torchvision/models/detection/transform.py"}], "after_files": [{"content": "import math\nimport torch\nfrom torch import nn, Tensor\nfrom torch.nn import functional as F\nimport torchvision\nfrom typing import List, Tuple, Dict, Optional\n\nfrom .image_list import ImageList\nfrom .roi_heads import paste_masks_in_image\n\n\[email protected]\ndef _resize_image_and_masks_onnx(image, self_min_size, self_max_size, target):\n # type: (Tensor, float, float, Optional[Dict[str, Tensor]]) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]\n from torch.onnx import operators\n im_shape = operators.shape_as_tensor(image)[-2:]\n min_size = torch.min(im_shape).to(dtype=torch.float32)\n max_size = torch.max(im_shape).to(dtype=torch.float32)\n scale_factor = torch.min(self_min_size / min_size, self_max_size / max_size)\n\n image = torch.nn.functional.interpolate(\n image[None], scale_factor=scale_factor, mode='bilinear', recompute_scale_factor=True,\n align_corners=False)[0]\n\n if target is None:\n return image, target\n\n if \"masks\" in target:\n mask = target[\"masks\"]\n mask = F.interpolate(mask[:, None].float(), scale_factor=scale_factor, recompute_scale_factor=True)[:, 0].byte()\n target[\"masks\"] = mask\n return image, target\n\n\ndef _resize_image_and_masks(image, self_min_size, self_max_size, target):\n # type: (Tensor, float, float, 
Optional[Dict[str, Tensor]]) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]\n im_shape = torch.tensor(image.shape[-2:])\n min_size = float(torch.min(im_shape))\n max_size = float(torch.max(im_shape))\n scale_factor = self_min_size / min_size\n if max_size * scale_factor > self_max_size:\n scale_factor = self_max_size / max_size\n image = torch.nn.functional.interpolate(\n image[None], scale_factor=scale_factor, mode='bilinear', recompute_scale_factor=True,\n align_corners=False)[0]\n\n if target is None:\n return image, target\n\n if \"masks\" in target:\n mask = target[\"masks\"]\n mask = F.interpolate(mask[:, None].float(), scale_factor=scale_factor, recompute_scale_factor=True)[:, 0].byte()\n target[\"masks\"] = mask\n return image, target\n\n\nclass GeneralizedRCNNTransform(nn.Module):\n \"\"\"\n Performs input / target transformation before feeding the data to a GeneralizedRCNN\n model.\n\n The transformations it perform are:\n - input normalization (mean subtraction and std division)\n - input / target resizing to match min_size / max_size\n\n It returns a ImageList for the inputs, and a List[Dict[Tensor]] for the targets\n \"\"\"\n\n def __init__(self, min_size, max_size, image_mean, image_std):\n super(GeneralizedRCNNTransform, self).__init__()\n if not isinstance(min_size, (list, tuple)):\n min_size = (min_size,)\n self.min_size = min_size\n self.max_size = max_size\n self.image_mean = image_mean\n self.image_std = image_std\n\n def forward(self,\n images, # type: List[Tensor]\n targets=None # type: Optional[List[Dict[str, Tensor]]]\n ):\n # type: (...) -> Tuple[ImageList, Optional[List[Dict[str, Tensor]]]]\n images = [img for img in images]\n if targets is not None:\n # make a copy of targets to avoid modifying it in-place\n # once torchscript supports dict comprehension\n # this can be simplified as follows\n # targets = [{k: v for k,v in t.items()} for t in targets]\n targets_copy: List[Dict[str, Tensor]] = []\n for t in targets:\n data: Dict[str, Tensor] = {}\n for k, v in t.items():\n data[k] = v\n targets_copy.append(data)\n targets = targets_copy\n for i in range(len(images)):\n image = images[i]\n target_index = targets[i] if targets is not None else None\n\n if image.dim() != 3:\n raise ValueError(\"images is expected to be a list of 3d tensors \"\n \"of shape [C, H, W], got {}\".format(image.shape))\n image = self.normalize(image)\n image, target_index = self.resize(image, target_index)\n images[i] = image\n if targets is not None and target_index is not None:\n targets[i] = target_index\n\n image_sizes = [img.shape[-2:] for img in images]\n images = self.batch_images(images)\n image_sizes_list: List[Tuple[int, int]] = []\n for image_size in image_sizes:\n assert len(image_size) == 2\n image_sizes_list.append((image_size[0], image_size[1]))\n\n image_list = ImageList(images, image_sizes_list)\n return image_list, targets\n\n def normalize(self, image):\n if not image.is_floating_point():\n raise TypeError(\n f\"Expected input images to be of floating type (in range [0, 1]), \"\n f\"but found type {image.dtype} instead\"\n )\n dtype, device = image.dtype, image.device\n mean = torch.as_tensor(self.image_mean, dtype=dtype, device=device)\n std = torch.as_tensor(self.image_std, dtype=dtype, device=device)\n return (image - mean[:, None, None]) / std[:, None, None]\n\n def torch_choice(self, k):\n # type: (List[int]) -> int\n \"\"\"\n Implements `random.choice` via torch ops so it can be compiled with\n TorchScript. 
Remove if https://github.com/pytorch/pytorch/issues/25803\n is fixed.\n \"\"\"\n index = int(torch.empty(1).uniform_(0., float(len(k))).item())\n return k[index]\n\n def resize(self, image, target):\n # type: (Tensor, Optional[Dict[str, Tensor]]) -> Tuple[Tensor, Optional[Dict[str, Tensor]]]\n h, w = image.shape[-2:]\n if self.training:\n size = float(self.torch_choice(self.min_size))\n else:\n # FIXME assume for now that testing uses the largest scale\n size = float(self.min_size[-1])\n if torchvision._is_tracing():\n image, target = _resize_image_and_masks_onnx(image, size, float(self.max_size), target)\n else:\n image, target = _resize_image_and_masks(image, size, float(self.max_size), target)\n\n if target is None:\n return image, target\n\n bbox = target[\"boxes\"]\n bbox = resize_boxes(bbox, (h, w), image.shape[-2:])\n target[\"boxes\"] = bbox\n\n if \"keypoints\" in target:\n keypoints = target[\"keypoints\"]\n keypoints = resize_keypoints(keypoints, (h, w), image.shape[-2:])\n target[\"keypoints\"] = keypoints\n return image, target\n\n # _onnx_batch_images() is an implementation of\n # batch_images() that is supported by ONNX tracing.\n @torch.jit.unused\n def _onnx_batch_images(self, images, size_divisible=32):\n # type: (List[Tensor], int) -> Tensor\n max_size = []\n for i in range(images[0].dim()):\n max_size_i = torch.max(torch.stack([img.shape[i] for img in images]).to(torch.float32)).to(torch.int64)\n max_size.append(max_size_i)\n stride = size_divisible\n max_size[1] = (torch.ceil((max_size[1].to(torch.float32)) / stride) * stride).to(torch.int64)\n max_size[2] = (torch.ceil((max_size[2].to(torch.float32)) / stride) * stride).to(torch.int64)\n max_size = tuple(max_size)\n\n # work around for\n # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)\n # which is not yet supported in onnx\n padded_imgs = []\n for img in images:\n padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))]\n padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0]))\n padded_imgs.append(padded_img)\n\n return torch.stack(padded_imgs)\n\n def max_by_axis(self, the_list):\n # type: (List[List[int]]) -> List[int]\n maxes = the_list[0]\n for sublist in the_list[1:]:\n for index, item in enumerate(sublist):\n maxes[index] = max(maxes[index], item)\n return maxes\n\n def batch_images(self, images, size_divisible=32):\n # type: (List[Tensor], int) -> Tensor\n if torchvision._is_tracing():\n # batch_images() does not export well to ONNX\n # call _onnx_batch_images() instead\n return self._onnx_batch_images(images, size_divisible)\n\n max_size = self.max_by_axis([list(img.shape) for img in images])\n stride = float(size_divisible)\n max_size = list(max_size)\n max_size[1] = int(math.ceil(float(max_size[1]) / stride) * stride)\n max_size[2] = int(math.ceil(float(max_size[2]) / stride) * stride)\n\n batch_shape = [len(images)] + max_size\n batched_imgs = images[0].new_full(batch_shape, 0)\n for img, pad_img in zip(images, batched_imgs):\n pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)\n\n return batched_imgs\n\n def postprocess(self,\n result, # type: List[Dict[str, Tensor]]\n image_shapes, # type: List[Tuple[int, int]]\n original_image_sizes # type: List[Tuple[int, int]]\n ):\n # type: (...) 
-> List[Dict[str, Tensor]]\n if self.training:\n return result\n for i, (pred, im_s, o_im_s) in enumerate(zip(result, image_shapes, original_image_sizes)):\n boxes = pred[\"boxes\"]\n boxes = resize_boxes(boxes, im_s, o_im_s)\n result[i][\"boxes\"] = boxes\n if \"masks\" in pred:\n masks = pred[\"masks\"]\n masks = paste_masks_in_image(masks, boxes, o_im_s)\n result[i][\"masks\"] = masks\n if \"keypoints\" in pred:\n keypoints = pred[\"keypoints\"]\n keypoints = resize_keypoints(keypoints, im_s, o_im_s)\n result[i][\"keypoints\"] = keypoints\n return result\n\n def __repr__(self):\n format_string = self.__class__.__name__ + '('\n _indent = '\\n '\n format_string += \"{0}Normalize(mean={1}, std={2})\".format(_indent, self.image_mean, self.image_std)\n format_string += \"{0}Resize(min_size={1}, max_size={2}, mode='bilinear')\".format(_indent, self.min_size,\n self.max_size)\n format_string += '\\n)'\n return format_string\n\n\ndef resize_keypoints(keypoints, original_size, new_size):\n # type: (Tensor, List[int], List[int]) -> Tensor\n ratios = [\n torch.tensor(s, dtype=torch.float32, device=keypoints.device) /\n torch.tensor(s_orig, dtype=torch.float32, device=keypoints.device)\n for s, s_orig in zip(new_size, original_size)\n ]\n ratio_h, ratio_w = ratios\n resized_data = keypoints.clone()\n if torch._C._get_tracing_state():\n resized_data_0 = resized_data[:, :, 0] * ratio_w\n resized_data_1 = resized_data[:, :, 1] * ratio_h\n resized_data = torch.stack((resized_data_0, resized_data_1, resized_data[:, :, 2]), dim=2)\n else:\n resized_data[..., 0] *= ratio_w\n resized_data[..., 1] *= ratio_h\n return resized_data\n\n\ndef resize_boxes(boxes, original_size, new_size):\n # type: (Tensor, List[int], List[int]) -> Tensor\n ratios = [\n torch.tensor(s, dtype=torch.float32, device=boxes.device) /\n torch.tensor(s_orig, dtype=torch.float32, device=boxes.device)\n for s, s_orig in zip(new_size, original_size)\n ]\n ratio_height, ratio_width = ratios\n xmin, ymin, xmax, ymax = boxes.unbind(1)\n\n xmin = xmin * ratio_width\n xmax = xmax * ratio_width\n ymin = ymin * ratio_height\n ymax = ymax * ratio_height\n return torch.stack((xmin, ymin, xmax, ymax), dim=1)\n", "path": "torchvision/models/detection/transform.py"}]} | 3,950 | 150 |
gh_patches_debug_43038 | rasdani/github-patches | git_diff | pantsbuild__pants-16977 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support `parametrize(...)` assignments in the `__defaults__` macro
**Is your feature request related to a problem? Please describe.**
The `__defaults__` BUILD file construct helps eliminate the need for boilerplate target field assignments across an entire
directory tree. However, it does not support assigning the result of the `parametrize` macro to a field.
```
# BUILD
__defaults__({
    python_sources: dict(resolve=parametrize(my_resolves())),  # InvalidFieldTypeException: expects string, not Parametrize
})
```
**Describe the solution you'd like**
Support `parametrize` in this context in the natural way, i.e. interpret applicable targets as having the field assigned to
`parametrize(...)`.
**Describe alternatives you've considered**
Modifying `tailor` to populate this field on a per-directory basis might work.
**Additional context**
This is very useful when a single Python subproject is intended to run under several lockfiles.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/engine/internals/defaults.py`
Content:
```
1 # Copyright 2022 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3 """The `BuildFileDefaultsParserState.set_defaults` is used by the pants.engine.internals.Parser,
4 exposed as the `__defaults__` BUILD file symbol.
5
6 When parsing a BUILD (from the rule `pants.engine.internals.build_files.parse_address_family`) the
7 defaults from the closest parent BUILD file is passed as input to the parser, and the new defaults
8 resulting after the BUILD file have been parsed is returned in the `AddressFamily`.
9
10 These defaults are then applied when creating the `TargetAdaptor` targets by the `Registrar` in the
11 parser.
12 """
13 from __future__ import annotations
14
15 from dataclasses import dataclass
16 from typing import Any, Iterable, Mapping, Tuple, Union
17
18 from pants.engine.addresses import Address
19 from pants.engine.target import (
20 Field,
21 ImmutableValue,
22 InvalidFieldException,
23 RegisteredTargetTypes,
24 Target,
25 TargetGenerator,
26 )
27 from pants.engine.unions import UnionMembership
28 from pants.util.frozendict import FrozenDict
29
30 SetDefaultsValueT = Mapping[str, Any]
31 SetDefaultsKeyT = Union[str, Tuple[str, ...]]
32 SetDefaultsT = Mapping[SetDefaultsKeyT, SetDefaultsValueT]
33
34
35 class BuildFileDefaults(FrozenDict[str, FrozenDict[str, ImmutableValue]]):
36 """Map target types to default field values."""
37
38
39 @dataclass
40 class BuildFileDefaultsParserState:
41 address: Address
42 defaults: dict[str, Mapping[str, Any]]
43 registered_target_types: RegisteredTargetTypes
44 union_membership: UnionMembership
45
46 @classmethod
47 def create(
48 cls,
49 path: str,
50 defaults: BuildFileDefaults,
51 registered_target_types: RegisteredTargetTypes,
52 union_membership: UnionMembership,
53 ) -> BuildFileDefaultsParserState:
54 return cls(
55 address=Address(path, generated_name="__defaults__"),
56 defaults=dict(defaults),
57 registered_target_types=registered_target_types,
58 union_membership=union_membership,
59 )
60
61 def get_frozen_defaults(self) -> BuildFileDefaults:
62 types = self.registered_target_types.aliases_to_types
63 return BuildFileDefaults(
64 {
65 target_alias: FrozenDict(
66 {
67 field_type.alias: field_type.compute_value(
68 raw_value=default, address=self.address
69 )
70 for field_alias, default in fields.items()
71 for field_type in self._target_type_field_types(types[target_alias])
72 if field_alias in (field_type.alias, field_type.deprecated_alias)
73 }
74 )
75 for target_alias, fields in self.defaults.items()
76 }
77 )
78
79 def get(self, target_alias: str) -> Mapping[str, Any]:
80 # Used by `pants.engine.internals.parser.Parser._generate_symbols.Registrar.__call__`
81 return self.defaults.get(target_alias, {})
82
83 def set_defaults(
84 self,
85 *args: SetDefaultsT,
86 all: SetDefaultsValueT | None = None,
87 extend: bool = False,
88 **kwargs,
89 ) -> None:
90 defaults: dict[str, dict[str, Any]] = (
91 {} if not extend else {k: dict(v) for k, v in self.defaults.items()}
92 )
93
94 if all is not None:
95 self._process_defaults(
96 defaults,
97 {tuple(self.registered_target_types.aliases): all},
98 ignore_unknown_fields=True,
99 )
100
101 for arg in args:
102 self._process_defaults(defaults, arg)
103
104 # Update with new defaults, dropping targets without any default values.
105 for tgt, default in defaults.items():
106 if not default:
107 self.defaults.pop(tgt, None)
108 else:
109 self.defaults[tgt] = default
110
111 def _target_type_field_types(self, target_type: type[Target]) -> tuple[type[Field], ...]:
112 return (
113 *target_type.class_field_types(self.union_membership),
114 *(target_type.moved_fields if issubclass(target_type, TargetGenerator) else ()),
115 )
116
117 def _process_defaults(
118 self,
119 defaults: dict[str, dict[str, Any]],
120 targets_defaults: SetDefaultsT,
121 ignore_unknown_fields: bool = False,
122 ):
123 if not isinstance(targets_defaults, dict):
124 raise ValueError(
125 f"Expected dictionary mapping targets to default field values for {self.address} "
126 f"but got: {type(targets_defaults).__name__}."
127 )
128
129 types = self.registered_target_types.aliases_to_types
130 for target, default in targets_defaults.items():
131 if not isinstance(default, dict):
132 raise ValueError(
133 f"Invalid default field values in {self.address} for target type {target}, "
134 f"must be an `dict` but was {default!r} with type `{type(default).__name__}`."
135 )
136
137 targets: Iterable[str]
138 targets = target if isinstance(target, tuple) else (target,)
139 for target_alias in map(str, targets):
140 if target_alias in types:
141 target_type = types[target_alias]
142 else:
143 raise ValueError(f"Unrecognized target type {target_alias} in {self.address}.")
144
145 # Copy default dict if we may mutate it.
146 raw_values = dict(default) if ignore_unknown_fields else default
147
148 # Validate that field exists on target
149 valid_field_aliases = set(
150 target_type._get_field_aliases_to_field_types(
151 self._target_type_field_types(target_type)
152 ).keys()
153 )
154
155 for field_alias in default.keys():
156 if field_alias not in valid_field_aliases:
157 if ignore_unknown_fields:
158 del raw_values[field_alias]
159 else:
160 raise InvalidFieldException(
161 f"Unrecognized field `{field_alias}` for target {target_type.alias}. "
162 f"Valid fields are: {', '.join(sorted(valid_field_aliases))}.",
163 )
164
165 # TODO: support parametrization ? --needs special care due to Parametrize object not
166 # being hashable, and thus not acceptable in a FrozenDict instance.
167
168 # Merge all provided defaults for this call.
169 defaults.setdefault(target_type.alias, {}).update(raw_values)
170
```
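For context on the docstring above, a hedged sketch of the BUILD-file usage the issue is asking for (the resolve names are placeholders, and this relies on the requested `parametrize` support rather than current behaviour):

```python
# BUILD (sketch only; assumes the feature requested in the issue)
__defaults__({
    python_sources: dict(resolve=parametrize("py39", "py310")),
})
```

With that in place, every `python_sources` target under this directory would behave as if `resolve=parametrize("py39", "py310")` had been written on it directly.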
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/python/pants/engine/internals/defaults.py b/src/python/pants/engine/internals/defaults.py
--- a/src/python/pants/engine/internals/defaults.py
+++ b/src/python/pants/engine/internals/defaults.py
@@ -13,9 +13,10 @@
from __future__ import annotations
from dataclasses import dataclass
-from typing import Any, Iterable, Mapping, Tuple, Union
+from typing import Any, Callable, Iterable, Mapping, Tuple, Union
from pants.engine.addresses import Address
+from pants.engine.internals.parametrize import Parametrize
from pants.engine.target import (
Field,
ImmutableValue,
@@ -26,6 +27,7 @@
)
from pants.engine.unions import UnionMembership
from pants.util.frozendict import FrozenDict
+from pants.util.meta import frozen_after_init
SetDefaultsValueT = Mapping[str, Any]
SetDefaultsKeyT = Union[str, Tuple[str, ...]]
@@ -36,6 +38,36 @@
"""Map target types to default field values."""
+@frozen_after_init
+@dataclass(unsafe_hash=True)
+class ParametrizeDefault(Parametrize):
+ """A frozen version of `Parametrize` for defaults.
+
+ This is needed since all defaults must be hashable, which the `Parametrize` class is not nor can
+ it be as it may get unhashable data as input and is unaware of the field type it is being
+ applied to.
+ """
+
+ args: tuple[str, ...]
+ kwargs: FrozenDict[str, ImmutableValue] # type: ignore[assignment]
+
+ def __init__(self, *args: str, **kwargs: ImmutableValue) -> None:
+ self.args = args
+ self.kwargs = FrozenDict(kwargs)
+
+ @classmethod
+ def create(
+ cls, freeze: Callable[[Any], ImmutableValue], parametrize: Parametrize
+ ) -> ParametrizeDefault:
+ return cls(
+ *map(freeze, parametrize.args),
+ **{kw: freeze(arg) for kw, arg in parametrize.kwargs.items()},
+ )
+
+ def __repr__(self) -> str:
+ return super().__repr__()
+
+
@dataclass
class BuildFileDefaultsParserState:
address: Address
@@ -58,15 +90,25 @@
union_membership=union_membership,
)
+ def _freeze_field_value(self, field_type: type[Field], value: Any) -> ImmutableValue:
+ if isinstance(value, ParametrizeDefault):
+ return value
+ elif isinstance(value, Parametrize):
+
+ def freeze(v: Any) -> ImmutableValue:
+ return self._freeze_field_value(field_type, v)
+
+ return ParametrizeDefault.create(freeze, value)
+ else:
+ return field_type.compute_value(raw_value=value, address=self.address)
+
def get_frozen_defaults(self) -> BuildFileDefaults:
types = self.registered_target_types.aliases_to_types
return BuildFileDefaults(
{
target_alias: FrozenDict(
{
- field_type.alias: field_type.compute_value(
- raw_value=default, address=self.address
- )
+ field_type.alias: self._freeze_field_value(field_type, default)
for field_alias, default in fields.items()
for field_type in self._target_type_field_types(types[target_alias])
if field_alias in (field_type.alias, field_type.deprecated_alias)
@@ -162,8 +204,5 @@
f"Valid fields are: {', '.join(sorted(valid_field_aliases))}.",
)
- # TODO: support parametrization ? --needs special care due to Parametrize object not
- # being hashable, and thus not acceptable in a FrozenDict instance.
-
# Merge all provided defaults for this call.
defaults.setdefault(target_type.alias, {}).update(raw_values)
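Conceptually, the new `ParametrizeDefault.create` re-wraps a `Parametrize` with frozen (hashable) values; a rough sketch with an illustrative freeze callback (the real one is the parser state's field-freezing logic):

```python
# Sketch only: approximates what ParametrizeDefault.create does in the patch.
def freeze(value):
    return tuple(value) if isinstance(value, list) else value

parametrize_args = ("py39", "py310")        # Parametrize(*args)
parametrize_kwargs = {"extra": ["a", "b"]}  # Parametrize(**kwargs)

frozen_args = tuple(freeze(a) for a in parametrize_args)
frozen_kwargs = {k: freeze(v) for k, v in parametrize_kwargs.items()}
print(frozen_args, frozen_kwargs)  # ('py39', 'py310') {'extra': ('a', 'b')}
```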
| {"golden_diff": "diff --git a/src/python/pants/engine/internals/defaults.py b/src/python/pants/engine/internals/defaults.py\n--- a/src/python/pants/engine/internals/defaults.py\n+++ b/src/python/pants/engine/internals/defaults.py\n@@ -13,9 +13,10 @@\n from __future__ import annotations\n \n from dataclasses import dataclass\n-from typing import Any, Iterable, Mapping, Tuple, Union\n+from typing import Any, Callable, Iterable, Mapping, Tuple, Union\n \n from pants.engine.addresses import Address\n+from pants.engine.internals.parametrize import Parametrize\n from pants.engine.target import (\n Field,\n ImmutableValue,\n@@ -26,6 +27,7 @@\n )\n from pants.engine.unions import UnionMembership\n from pants.util.frozendict import FrozenDict\n+from pants.util.meta import frozen_after_init\n \n SetDefaultsValueT = Mapping[str, Any]\n SetDefaultsKeyT = Union[str, Tuple[str, ...]]\n@@ -36,6 +38,36 @@\n \"\"\"Map target types to default field values.\"\"\"\n \n \n+@frozen_after_init\n+@dataclass(unsafe_hash=True)\n+class ParametrizeDefault(Parametrize):\n+ \"\"\"A frozen version of `Parametrize` for defaults.\n+\n+ This is needed since all defaults must be hashable, which the `Parametrize` class is not nor can\n+ it be as it may get unhashable data as input and is unaware of the field type it is being\n+ applied to.\n+ \"\"\"\n+\n+ args: tuple[str, ...]\n+ kwargs: FrozenDict[str, ImmutableValue] # type: ignore[assignment]\n+\n+ def __init__(self, *args: str, **kwargs: ImmutableValue) -> None:\n+ self.args = args\n+ self.kwargs = FrozenDict(kwargs)\n+\n+ @classmethod\n+ def create(\n+ cls, freeze: Callable[[Any], ImmutableValue], parametrize: Parametrize\n+ ) -> ParametrizeDefault:\n+ return cls(\n+ *map(freeze, parametrize.args),\n+ **{kw: freeze(arg) for kw, arg in parametrize.kwargs.items()},\n+ )\n+\n+ def __repr__(self) -> str:\n+ return super().__repr__()\n+\n+\n @dataclass\n class BuildFileDefaultsParserState:\n address: Address\n@@ -58,15 +90,25 @@\n union_membership=union_membership,\n )\n \n+ def _freeze_field_value(self, field_type: type[Field], value: Any) -> ImmutableValue:\n+ if isinstance(value, ParametrizeDefault):\n+ return value\n+ elif isinstance(value, Parametrize):\n+\n+ def freeze(v: Any) -> ImmutableValue:\n+ return self._freeze_field_value(field_type, v)\n+\n+ return ParametrizeDefault.create(freeze, value)\n+ else:\n+ return field_type.compute_value(raw_value=value, address=self.address)\n+\n def get_frozen_defaults(self) -> BuildFileDefaults:\n types = self.registered_target_types.aliases_to_types\n return BuildFileDefaults(\n {\n target_alias: FrozenDict(\n {\n- field_type.alias: field_type.compute_value(\n- raw_value=default, address=self.address\n- )\n+ field_type.alias: self._freeze_field_value(field_type, default)\n for field_alias, default in fields.items()\n for field_type in self._target_type_field_types(types[target_alias])\n if field_alias in (field_type.alias, field_type.deprecated_alias)\n@@ -162,8 +204,5 @@\n f\"Valid fields are: {', '.join(sorted(valid_field_aliases))}.\",\n )\n \n- # TODO: support parametrization ? --needs special care due to Parametrize object not\n- # being hashable, and thus not acceptable in a FrozenDict instance.\n-\n # Merge all provided defaults for this call.\n defaults.setdefault(target_type.alias, {}).update(raw_values)\n", "issue": "Support `parametrize(...)` assignments in the `__defaults__` macro\n**Is your feature request related to a problem? 
Please describe.**\r\n\r\nThe `__defaults__` BUILD file construct helps eliminate the need for boilerplate target field assignments across an entire\r\ndirectory tree. However, it does not support assigning the result of the `parametrize` macro to a field.\r\n\r\n```\r\n# BUILD\r\n\r\n__defaults__({\r\n python_sources: dict(resolve=parametrize(my_resolves())), # InvalidFieldTypeException: expects string, not Parametrize\r\n})\r\n```\r\n\r\n**Describe the solution you'd like**\r\n\r\nSupport `parametrize` in this context in the natural way, i.e. interpret applicable targets as having the field assigned to\r\n`parametrize(...)`.\r\n\r\n**Describe alternatives you've considered**\r\n\r\nModifying `tailor` to populate this field on a per-directory basis might work.\r\n\r\n**Additional context**\r\n\r\nThis is very useful when a single Python subproject is intended to run under several lockfiles.\n", "before_files": [{"content": "# Copyright 2022 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\"\"\"The `BuildFileDefaultsParserState.set_defaults` is used by the pants.engine.internals.Parser,\nexposed as the `__defaults__` BUILD file symbol.\n\nWhen parsing a BUILD (from the rule `pants.engine.internals.build_files.parse_address_family`) the\ndefaults from the closest parent BUILD file is passed as input to the parser, and the new defaults\nresulting after the BUILD file have been parsed is returned in the `AddressFamily`.\n\nThese defaults are then applied when creating the `TargetAdaptor` targets by the `Registrar` in the\nparser.\n\"\"\"\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import Any, Iterable, Mapping, Tuple, Union\n\nfrom pants.engine.addresses import Address\nfrom pants.engine.target import (\n Field,\n ImmutableValue,\n InvalidFieldException,\n RegisteredTargetTypes,\n Target,\n TargetGenerator,\n)\nfrom pants.engine.unions import UnionMembership\nfrom pants.util.frozendict import FrozenDict\n\nSetDefaultsValueT = Mapping[str, Any]\nSetDefaultsKeyT = Union[str, Tuple[str, ...]]\nSetDefaultsT = Mapping[SetDefaultsKeyT, SetDefaultsValueT]\n\n\nclass BuildFileDefaults(FrozenDict[str, FrozenDict[str, ImmutableValue]]):\n \"\"\"Map target types to default field values.\"\"\"\n\n\n@dataclass\nclass BuildFileDefaultsParserState:\n address: Address\n defaults: dict[str, Mapping[str, Any]]\n registered_target_types: RegisteredTargetTypes\n union_membership: UnionMembership\n\n @classmethod\n def create(\n cls,\n path: str,\n defaults: BuildFileDefaults,\n registered_target_types: RegisteredTargetTypes,\n union_membership: UnionMembership,\n ) -> BuildFileDefaultsParserState:\n return cls(\n address=Address(path, generated_name=\"__defaults__\"),\n defaults=dict(defaults),\n registered_target_types=registered_target_types,\n union_membership=union_membership,\n )\n\n def get_frozen_defaults(self) -> BuildFileDefaults:\n types = self.registered_target_types.aliases_to_types\n return BuildFileDefaults(\n {\n target_alias: FrozenDict(\n {\n field_type.alias: field_type.compute_value(\n raw_value=default, address=self.address\n )\n for field_alias, default in fields.items()\n for field_type in self._target_type_field_types(types[target_alias])\n if field_alias in (field_type.alias, field_type.deprecated_alias)\n }\n )\n for target_alias, fields in self.defaults.items()\n }\n )\n\n def get(self, target_alias: str) -> Mapping[str, Any]:\n # Used by 
`pants.engine.internals.parser.Parser._generate_symbols.Registrar.__call__`\n return self.defaults.get(target_alias, {})\n\n def set_defaults(\n self,\n *args: SetDefaultsT,\n all: SetDefaultsValueT | None = None,\n extend: bool = False,\n **kwargs,\n ) -> None:\n defaults: dict[str, dict[str, Any]] = (\n {} if not extend else {k: dict(v) for k, v in self.defaults.items()}\n )\n\n if all is not None:\n self._process_defaults(\n defaults,\n {tuple(self.registered_target_types.aliases): all},\n ignore_unknown_fields=True,\n )\n\n for arg in args:\n self._process_defaults(defaults, arg)\n\n # Update with new defaults, dropping targets without any default values.\n for tgt, default in defaults.items():\n if not default:\n self.defaults.pop(tgt, None)\n else:\n self.defaults[tgt] = default\n\n def _target_type_field_types(self, target_type: type[Target]) -> tuple[type[Field], ...]:\n return (\n *target_type.class_field_types(self.union_membership),\n *(target_type.moved_fields if issubclass(target_type, TargetGenerator) else ()),\n )\n\n def _process_defaults(\n self,\n defaults: dict[str, dict[str, Any]],\n targets_defaults: SetDefaultsT,\n ignore_unknown_fields: bool = False,\n ):\n if not isinstance(targets_defaults, dict):\n raise ValueError(\n f\"Expected dictionary mapping targets to default field values for {self.address} \"\n f\"but got: {type(targets_defaults).__name__}.\"\n )\n\n types = self.registered_target_types.aliases_to_types\n for target, default in targets_defaults.items():\n if not isinstance(default, dict):\n raise ValueError(\n f\"Invalid default field values in {self.address} for target type {target}, \"\n f\"must be an `dict` but was {default!r} with type `{type(default).__name__}`.\"\n )\n\n targets: Iterable[str]\n targets = target if isinstance(target, tuple) else (target,)\n for target_alias in map(str, targets):\n if target_alias in types:\n target_type = types[target_alias]\n else:\n raise ValueError(f\"Unrecognized target type {target_alias} in {self.address}.\")\n\n # Copy default dict if we may mutate it.\n raw_values = dict(default) if ignore_unknown_fields else default\n\n # Validate that field exists on target\n valid_field_aliases = set(\n target_type._get_field_aliases_to_field_types(\n self._target_type_field_types(target_type)\n ).keys()\n )\n\n for field_alias in default.keys():\n if field_alias not in valid_field_aliases:\n if ignore_unknown_fields:\n del raw_values[field_alias]\n else:\n raise InvalidFieldException(\n f\"Unrecognized field `{field_alias}` for target {target_type.alias}. \"\n f\"Valid fields are: {', '.join(sorted(valid_field_aliases))}.\",\n )\n\n # TODO: support parametrization ? 
--needs special care due to Parametrize object not\n # being hashable, and thus not acceptable in a FrozenDict instance.\n\n # Merge all provided defaults for this call.\n defaults.setdefault(target_type.alias, {}).update(raw_values)\n", "path": "src/python/pants/engine/internals/defaults.py"}], "after_files": [{"content": "# Copyright 2022 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\"\"\"The `BuildFileDefaultsParserState.set_defaults` is used by the pants.engine.internals.Parser,\nexposed as the `__defaults__` BUILD file symbol.\n\nWhen parsing a BUILD (from the rule `pants.engine.internals.build_files.parse_address_family`) the\ndefaults from the closest parent BUILD file is passed as input to the parser, and the new defaults\nresulting after the BUILD file have been parsed is returned in the `AddressFamily`.\n\nThese defaults are then applied when creating the `TargetAdaptor` targets by the `Registrar` in the\nparser.\n\"\"\"\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import Any, Callable, Iterable, Mapping, Tuple, Union\n\nfrom pants.engine.addresses import Address\nfrom pants.engine.internals.parametrize import Parametrize\nfrom pants.engine.target import (\n Field,\n ImmutableValue,\n InvalidFieldException,\n RegisteredTargetTypes,\n Target,\n TargetGenerator,\n)\nfrom pants.engine.unions import UnionMembership\nfrom pants.util.frozendict import FrozenDict\nfrom pants.util.meta import frozen_after_init\n\nSetDefaultsValueT = Mapping[str, Any]\nSetDefaultsKeyT = Union[str, Tuple[str, ...]]\nSetDefaultsT = Mapping[SetDefaultsKeyT, SetDefaultsValueT]\n\n\nclass BuildFileDefaults(FrozenDict[str, FrozenDict[str, ImmutableValue]]):\n \"\"\"Map target types to default field values.\"\"\"\n\n\n@frozen_after_init\n@dataclass(unsafe_hash=True)\nclass ParametrizeDefault(Parametrize):\n \"\"\"A frozen version of `Parametrize` for defaults.\n\n This is needed since all defaults must be hashable, which the `Parametrize` class is not nor can\n it be as it may get unhashable data as input and is unaware of the field type it is being\n applied to.\n \"\"\"\n\n args: tuple[str, ...]\n kwargs: FrozenDict[str, ImmutableValue] # type: ignore[assignment]\n\n def __init__(self, *args: str, **kwargs: ImmutableValue) -> None:\n self.args = args\n self.kwargs = FrozenDict(kwargs)\n\n @classmethod\n def create(\n cls, freeze: Callable[[Any], ImmutableValue], parametrize: Parametrize\n ) -> ParametrizeDefault:\n return cls(\n *map(freeze, parametrize.args),\n **{kw: freeze(arg) for kw, arg in parametrize.kwargs.items()},\n )\n\n def __repr__(self) -> str:\n return super().__repr__()\n\n\n@dataclass\nclass BuildFileDefaultsParserState:\n address: Address\n defaults: dict[str, Mapping[str, Any]]\n registered_target_types: RegisteredTargetTypes\n union_membership: UnionMembership\n\n @classmethod\n def create(\n cls,\n path: str,\n defaults: BuildFileDefaults,\n registered_target_types: RegisteredTargetTypes,\n union_membership: UnionMembership,\n ) -> BuildFileDefaultsParserState:\n return cls(\n address=Address(path, generated_name=\"__defaults__\"),\n defaults=dict(defaults),\n registered_target_types=registered_target_types,\n union_membership=union_membership,\n )\n\n def _freeze_field_value(self, field_type: type[Field], value: Any) -> ImmutableValue:\n if isinstance(value, ParametrizeDefault):\n return value\n elif isinstance(value, Parametrize):\n\n def freeze(v: Any) -> ImmutableValue:\n 
return self._freeze_field_value(field_type, v)\n\n return ParametrizeDefault.create(freeze, value)\n else:\n return field_type.compute_value(raw_value=value, address=self.address)\n\n def get_frozen_defaults(self) -> BuildFileDefaults:\n types = self.registered_target_types.aliases_to_types\n return BuildFileDefaults(\n {\n target_alias: FrozenDict(\n {\n field_type.alias: self._freeze_field_value(field_type, default)\n for field_alias, default in fields.items()\n for field_type in self._target_type_field_types(types[target_alias])\n if field_alias in (field_type.alias, field_type.deprecated_alias)\n }\n )\n for target_alias, fields in self.defaults.items()\n }\n )\n\n def get(self, target_alias: str) -> Mapping[str, Any]:\n # Used by `pants.engine.internals.parser.Parser._generate_symbols.Registrar.__call__`\n return self.defaults.get(target_alias, {})\n\n def set_defaults(\n self,\n *args: SetDefaultsT,\n all: SetDefaultsValueT | None = None,\n extend: bool = False,\n **kwargs,\n ) -> None:\n defaults: dict[str, dict[str, Any]] = (\n {} if not extend else {k: dict(v) for k, v in self.defaults.items()}\n )\n\n if all is not None:\n self._process_defaults(\n defaults,\n {tuple(self.registered_target_types.aliases): all},\n ignore_unknown_fields=True,\n )\n\n for arg in args:\n self._process_defaults(defaults, arg)\n\n # Update with new defaults, dropping targets without any default values.\n for tgt, default in defaults.items():\n if not default:\n self.defaults.pop(tgt, None)\n else:\n self.defaults[tgt] = default\n\n def _target_type_field_types(self, target_type: type[Target]) -> tuple[type[Field], ...]:\n return (\n *target_type.class_field_types(self.union_membership),\n *(target_type.moved_fields if issubclass(target_type, TargetGenerator) else ()),\n )\n\n def _process_defaults(\n self,\n defaults: dict[str, dict[str, Any]],\n targets_defaults: SetDefaultsT,\n ignore_unknown_fields: bool = False,\n ):\n if not isinstance(targets_defaults, dict):\n raise ValueError(\n f\"Expected dictionary mapping targets to default field values for {self.address} \"\n f\"but got: {type(targets_defaults).__name__}.\"\n )\n\n types = self.registered_target_types.aliases_to_types\n for target, default in targets_defaults.items():\n if not isinstance(default, dict):\n raise ValueError(\n f\"Invalid default field values in {self.address} for target type {target}, \"\n f\"must be an `dict` but was {default!r} with type `{type(default).__name__}`.\"\n )\n\n targets: Iterable[str]\n targets = target if isinstance(target, tuple) else (target,)\n for target_alias in map(str, targets):\n if target_alias in types:\n target_type = types[target_alias]\n else:\n raise ValueError(f\"Unrecognized target type {target_alias} in {self.address}.\")\n\n # Copy default dict if we may mutate it.\n raw_values = dict(default) if ignore_unknown_fields else default\n\n # Validate that field exists on target\n valid_field_aliases = set(\n target_type._get_field_aliases_to_field_types(\n self._target_type_field_types(target_type)\n ).keys()\n )\n\n for field_alias in default.keys():\n if field_alias not in valid_field_aliases:\n if ignore_unknown_fields:\n del raw_values[field_alias]\n else:\n raise InvalidFieldException(\n f\"Unrecognized field `{field_alias}` for target {target_type.alias}. 
\"\n f\"Valid fields are: {', '.join(sorted(valid_field_aliases))}.\",\n )\n\n # Merge all provided defaults for this call.\n defaults.setdefault(target_type.alias, {}).update(raw_values)\n", "path": "src/python/pants/engine/internals/defaults.py"}]} | 2,174 | 860 |
gh_patches_debug_40980 | rasdani/github-patches | git_diff | openfun__marsha-2169 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Create a classroom invite link for an instructor
## Feature Request
**Is your feature request related to a problem or unsupported use case? Please describe.**
We already have an invite link for a classroom. This invite link is dedicated to people with access to neither the standalone site nor the LTI application. When they use it, they act as a student and can only join the meeting when it is running.
We would like the same feature for an instructor: when they use an invite link dedicated to them, they can use the classroom dashboard.
**Describe the solution you'd like**
When an instructor invite link is used, the user can access the classroom dashboard.
--- END ISSUE ---
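To make the feature request above concrete, here is a minimal, self-contained sketch of the claims an instructor invite token could carry, contrasted with the existing student invite. It is illustrative only: the claim names (`roles`, `permissions`, `can_update`, `can_access_dashboard`) mirror the patch shown later in this record, the helper `build_invite_payload` is hypothetical, and the snippet deliberately avoids importing any Marsha code.
```
from datetime import datetime, timedelta, timezone


def build_invite_payload(classroom_id: str, *, instructor: bool = False) -> dict:
    """Return the claims an invite JWT for a classroom could carry (sketch only)."""
    now = datetime.now(timezone.utc)
    return {
        "resource_id": classroom_id,
        "session_id": f"{classroom_id}-invite",
        "iat": int(now.timestamp()),
        "exp": int((now + timedelta(days=30)).timestamp()),
        # A student invite carries no role and no extra permissions: the holder
        # can only join a running meeting.
        # An instructor invite adds a role plus dashboard/update permissions.
        "roles": ["instructor"] if instructor else [],
        "permissions": {"can_update": True, "can_access_dashboard": True}
        if instructor
        else {},
    }


if __name__ == "__main__":
    print(build_invite_payload("example-classroom-id", instructor=True))
```
The design point is that the two links differ only in the claims baked into the token, so the backend can keep a single invite mechanism and branch on role and permissions when deciding whether to serve the classroom dashboard.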
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/backend/marsha/bbb/serializers.py`
Content:
```
1 """Structure of BBB related models API responses with Django Rest Framework serializers."""
2 from datetime import datetime
3 import mimetypes
4 from os.path import splitext
5 from urllib.parse import quote_plus
6
7 from django.conf import settings
8 from django.core.exceptions import ValidationError
9 from django.db import transaction
10 from django.urls import reverse
11 from django.utils import timezone
12
13 from rest_framework import serializers
14
15 from marsha.bbb.models import Classroom, ClassroomDocument, ClassroomRecording
16 from marsha.bbb.utils.bbb_utils import (
17 ApiMeetingException,
18 get_meeting_infos,
19 get_url as get_document_url,
20 )
21 from marsha.bbb.utils.tokens import create_classroom_stable_invite_jwt
22 from marsha.core.serializers import (
23 BaseInitiateUploadSerializer,
24 UploadableFileWithExtensionSerializerMixin,
25 )
26 from marsha.core.serializers.base import ReadOnlyModelSerializer
27 from marsha.core.serializers.playlist import PlaylistLiteSerializer
28
29
30 class ClassroomRecordingSerializer(ReadOnlyModelSerializer):
31 """A serializer to display a ClassroomRecording resource."""
32
33 class Meta: # noqa
34 model = ClassroomRecording
35 fields = (
36 "id",
37 "classroom",
38 "record_id",
39 "started_at",
40 "video_file_url",
41 )
42 read_only_fields = (
43 "id",
44 "classroom",
45 "record_id",
46 "started_at",
47 "video_file_url",
48 )
49
50 # Make sure classroom UUID is converted to a string during serialization
51 classroom = serializers.PrimaryKeyRelatedField(
52 read_only=True, pk_field=serializers.CharField()
53 )
54
55
56 class ClassroomSerializer(serializers.ModelSerializer):
57 """A serializer to display a Classroom resource."""
58
59 class Meta: # noqa
60 model = Classroom
61 fields = (
62 "id",
63 "lti_id",
64 "title",
65 "description",
66 "playlist",
67 "meeting_id",
68 "welcome_text",
69 "started",
70 "ended",
71 "starting_at",
72 "estimated_duration",
73 "recordings",
74 # specific generated fields
75 "infos",
76 "invite_token",
77 )
78 read_only_fields = (
79 "id",
80 "lti_id",
81 "playlist",
82 "meeting_id",
83 "started",
84 "ended",
85 "recordings",
86 "infos",
87 )
88
89 playlist = PlaylistLiteSerializer(read_only=True)
90 recordings = ClassroomRecordingSerializer(many=True, read_only=True)
91 infos = serializers.SerializerMethodField()
92 invite_token = serializers.SerializerMethodField()
93 recordings = serializers.SerializerMethodField()
94
95 def get_infos(self, obj):
96 """Meeting infos from BBB server."""
97 try:
98 return get_meeting_infos(classroom=obj)
99 except ApiMeetingException:
100 return None
101
102 def get_invite_token(self, obj):
103 """Get the invite token for the classroom."""
104 if self.context.get("is_admin", False):
105 return str(create_classroom_stable_invite_jwt(obj))
106 return None
107
108 def get_recordings(self, obj):
109 """Get the recordings for the classroom.
110
111 Only available for admins.
112 """
113 if self.context.get("is_admin", True):
114 return ClassroomRecordingSerializer(
115 obj.recordings.all(), many=True, context=self.context
116 ).data
117 return []
118
119 def update(self, instance, validated_data):
120 if any(
121 attribute in validated_data
122 for attribute in ["starting_at", "estimated_duration"]
123 ):
124 validated_data["ended"] = False
125 return super().update(instance, validated_data)
126
127 def validate_starting_at(self, value):
128 """Add extra controls for starting_at field."""
129 # Field starting_at has a new value
130 if value != self.instance.starting_at:
131 # New value is past, it can't be updated
132 if value is not None and value < timezone.now():
133 sent_date_with_timezone = datetime.fromisoformat(
134 self.context.get("request").data.get("starting_at")
135 )
136 raise serializers.ValidationError(
137 f"{sent_date_with_timezone} is not a valid date, date should be planned after!"
138 )
139
140 return value
141
142
143 class ClassroomLiteSerializer(ReadOnlyModelSerializer):
144 """Classroom lite serializer without playlist and infos fetch from the BBB api."""
145
146 class Meta: # noqa
147 model = Classroom
148 fields = (
149 "id",
150 "lti_id",
151 "title",
152 "description",
153 "meeting_id",
154 "welcome_text",
155 "started",
156 "ended",
157 "starting_at",
158 "estimated_duration",
159 )
160
161
162 class ClassroomSelectLTISerializer(ClassroomSerializer):
163 """A serializer to display a Classroom resource for LTI select content request."""
164
165 class Meta: # noqa
166 model = Classroom
167 fields = (
168 "id",
169 "lti_id",
170 "lti_url",
171 "title",
172 "description",
173 "playlist",
174 "meeting_id",
175 )
176
177 lti_url = serializers.SerializerMethodField()
178
179 def get_lti_url(self, obj):
180 """LTI Url of the Classroom.
181
182 Parameters
183 ----------
184 obj : Type[models.Document]
185 The document that we want to serialize
186
187 Returns
188 -------
189 String
190 the LTI url to be used by LTI consumers
191
192 """
193 return self.context["request"].build_absolute_uri(
194 reverse("classroom:classroom_lti_view", args=[obj.id]),
195 )
196
197
198 class ClassroomDocumentSerializer(
199 UploadableFileWithExtensionSerializerMixin, serializers.ModelSerializer
200 ):
201 """A serializer to display a ClassroomDocument resource."""
202
203 class Meta: # noqa
204 model = ClassroomDocument
205 fields = (
206 "classroom",
207 "filename",
208 "id",
209 "is_default",
210 "upload_state",
211 "uploaded_on",
212 "url",
213 )
214 read_only_fields = (
215 "classroom",
216 "id",
217 "upload_state",
218 "uploaded_on",
219 "url",
220 )
221
222 url = serializers.SerializerMethodField()
223 # Make sure classroom UUID is converted to a string during serialization
224 classroom = serializers.PrimaryKeyRelatedField(
225 read_only=True, pk_field=serializers.CharField()
226 )
227
228 def to_internal_value(self, data):
229 """Validate if the size is coherent with django settings."""
230 max_file_size = settings.CLASSROOM_DOCUMENT_SOURCE_MAX_SIZE
231 if "filename" in data and "size" not in data:
232 raise ValidationError({"size": ["File size is required"]})
233 if "size" in data and data.pop("size") > max_file_size:
234 raise ValidationError(
235 {"size": [f"File too large, max size allowed is {max_file_size} Bytes"]}
236 )
237 return super().to_internal_value(data)
238
239 def create(self, validated_data):
240 """Force the classroom field to the classroom of the JWT Token if any.
241
242 Parameters
243 ----------
244 validated_data : dictionary
245 Dictionary of the deserialized values of each field after validation.
246
247 Returns
248 -------
249 dictionary
250 The "validated_data" dictionary is returned after modification.
251
252 """
253 resource = self.context["request"].resource
254 classroom_id = self.context["request"].data.get("classroom")
255 if not validated_data.get("classroom_id"):
256 if resource:
257 validated_data["classroom_id"] = resource.id
258 elif classroom_id:
259 validated_data["classroom_id"] = classroom_id
260
261 if not ClassroomDocument.objects.filter(
262 classroom_id=validated_data["classroom_id"]
263 ).exists():
264 validated_data["is_default"] = True
265
266 return super().create(validated_data)
267
268 def update(self, instance, validated_data):
269 """Set the default attribute of the others classroom documents to false if
270 set to true for this one.
271
272 Parameters
273 ----------
274 instance : Type[models.ClassroomDocument]
275 The document that we want to update
276
277 validated_data : dictionary
278 Dictionary of the deserialized values of each field after validation.
279
280 Returns
281 -------
282 Type[models.ClassroomDocument]
283 The updated document
284
285 """
286 with transaction.atomic():
287 instance = super().update(instance, validated_data)
288 if validated_data.get("is_default"):
289 ClassroomDocument.objects.exclude(id=instance.id).filter(
290 classroom=instance.classroom, is_default=True
291 ).update(is_default=False)
292 return instance
293
294 def get_url(self, obj):
295 """Url of the ClassroomDocument.
296
297 Parameters
298 ----------
299 obj : Type[models.DepositedFile]
300 The classroom document that we want to serialize
301
302 Returns
303 -------
304 String or None
305 the url to fetch the classroom document on CloudFront
306 None if the classroom document is still not uploaded to S3 with success
307
308 """
309 if url := get_document_url(obj):
310 return (
311 f"{url}?response-content-disposition="
312 f"{quote_plus('attachment; filename=' + obj.filename)}"
313 )
314 return None
315
316
317 class ClassroomDocumentInitiateUploadSerializer(BaseInitiateUploadSerializer):
318 """An initiate-upload serializer dedicated to classroom document."""
319
320 @property
321 def max_upload_file_size(self):
322 """return the class room document max file size define in the settings.
323
324 The @property decorator is used to ease the use of @override_settings
325 in tests. Otherwise the setting is not changed and we can't easily test
326 an upload with a size higher than the one defined in the settings
327 """
328 return settings.CLASSROOM_DOCUMENT_SOURCE_MAX_SIZE
329
330 def validate(self, attrs):
331 """Validate if the mimetype is allowed or not."""
332 # mimetype is provided, we directly check it
333 if attrs["mimetype"] != "":
334 if attrs["mimetype"] not in settings.ALLOWED_CLASSROOM_DOCUMENT_MIME_TYPES:
335 raise serializers.ValidationError(
336 {"mimetype": f"{attrs['mimetype']} is not a supported mimetype"}
337 )
338 attrs["extension"] = mimetypes.guess_extension(attrs["mimetype"])
339
340 # mimetype is not provided, we have to guess it from the extension
341 else:
342 mimetypes.init()
343 extension = splitext(attrs["filename"])[1]
344 mimetype = mimetypes.types_map.get(extension)
345 if mimetype not in settings.ALLOWED_CLASSROOM_DOCUMENT_MIME_TYPES:
346 raise serializers.ValidationError(
347 {"mimetype": "mimetype not guessable"}
348 )
349 # extension is added to the data in order to be used later
350 attrs["extension"] = extension
351 attrs["mimetype"] = mimetype
352
353 return attrs
354
```
Path: `src/backend/marsha/bbb/utils/tokens.py`
Content:
```
1 """Specific classroom related simple JWT helpers."""
2 from datetime import timedelta
3
4 from django.conf import settings
5 from django.utils import timezone
6
7 from marsha.core.simple_jwt.tokens import ResourceAccessToken
8
9
10 def create_classroom_stable_invite_jwt(classroom):
11 """Create a resource JWT to be used in classroom invite links.
12
13 Parameters
14 ----------
15 classroom : Type[models.Classroom]
16 The classroom for which we want to create a JWT.
17
18 Returns
19 -------
20 ResourceAccessToken
21 The JWT.
22
23 """
24 resource_jwt = ResourceAccessToken.for_resource_id(
25 resource_id=str(classroom.id),
26 session_id=f"{classroom.id}-invite",
27 )
28
29 # Set a fixed JWT ID
30 resource_jwt.set_jti(
31 f"classroom-invite-{classroom.id}-{classroom.created_on.strftime('%Y-%m-%d')}"
32 )
33
34 # Set a fixed validity beginning: the classroom creation date
35 resource_jwt.set_iat(at_time=classroom.created_on)
36
37 # Determine the validity end:
38 # - if the classroom has a starting date, the JWT is valid
39 # until the starting date plus two days
40 # - if the classroom has no starting date, the JWT is valid
41 # for a month **starting now** (not on classroom creation)
42 if classroom.starting_at:
43 validity_end = classroom.starting_at + timedelta(days=2)
44 else:
45 validity_end = timezone.now().replace(
46 hour=0, minute=0, second=0, microsecond=0
47 ) + timedelta(days=settings.BBB_INVITE_JWT_DEFAULT_DAYS_DURATION)
48
49 resource_jwt.set_exp(
50 from_time=classroom.created_on,
51 lifetime=validity_end - classroom.created_on,
52 )
53
54 return resource_jwt
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/backend/marsha/bbb/serializers.py b/src/backend/marsha/bbb/serializers.py
--- a/src/backend/marsha/bbb/serializers.py
+++ b/src/backend/marsha/bbb/serializers.py
@@ -19,6 +19,7 @@
get_url as get_document_url,
)
from marsha.bbb.utils.tokens import create_classroom_stable_invite_jwt
+from marsha.core.models import INSTRUCTOR
from marsha.core.serializers import (
BaseInitiateUploadSerializer,
UploadableFileWithExtensionSerializerMixin,
@@ -74,6 +75,7 @@
# specific generated fields
"infos",
"invite_token",
+ "instructor_token",
)
read_only_fields = (
"id",
@@ -90,6 +92,7 @@
recordings = ClassroomRecordingSerializer(many=True, read_only=True)
infos = serializers.SerializerMethodField()
invite_token = serializers.SerializerMethodField()
+ instructor_token = serializers.SerializerMethodField()
recordings = serializers.SerializerMethodField()
def get_infos(self, obj):
@@ -105,6 +108,18 @@
return str(create_classroom_stable_invite_jwt(obj))
return None
+ def get_instructor_token(self, obj):
+ """Get the instructor token for the classroom."""
+ if self.context.get("is_admin", False):
+ return str(
+ create_classroom_stable_invite_jwt(
+ obj,
+ role=INSTRUCTOR,
+ permissions={"can_update": True, "can_access_dashboard": True},
+ )
+ )
+ return None
+
def get_recordings(self, obj):
"""Get the recordings for the classroom.
diff --git a/src/backend/marsha/bbb/utils/tokens.py b/src/backend/marsha/bbb/utils/tokens.py
--- a/src/backend/marsha/bbb/utils/tokens.py
+++ b/src/backend/marsha/bbb/utils/tokens.py
@@ -4,10 +4,11 @@
from django.conf import settings
from django.utils import timezone
+from marsha.core.models import NONE
from marsha.core.simple_jwt.tokens import ResourceAccessToken
-def create_classroom_stable_invite_jwt(classroom):
+def create_classroom_stable_invite_jwt(classroom, role=NONE, permissions=None):
"""Create a resource JWT to be used in classroom invite links.
Parameters
@@ -15,6 +16,12 @@
classroom : Type[models.Classroom]
The classroom for which we want to create a JWT.
+ role : str
+ The role to use in the JWT. If not set, the no role is used.
+
+ permissions : dict
+ The permissions to use in the JWT. If not set, no permissions are used.
+
Returns
-------
ResourceAccessToken
@@ -24,6 +31,8 @@
resource_jwt = ResourceAccessToken.for_resource_id(
resource_id=str(classroom.id),
session_id=f"{classroom.id}-invite",
+ roles=[role],
+ permissions=permissions or {},
)
# Set a fixed JWT ID
| {"golden_diff": "diff --git a/src/backend/marsha/bbb/serializers.py b/src/backend/marsha/bbb/serializers.py\n--- a/src/backend/marsha/bbb/serializers.py\n+++ b/src/backend/marsha/bbb/serializers.py\n@@ -19,6 +19,7 @@\n get_url as get_document_url,\n )\n from marsha.bbb.utils.tokens import create_classroom_stable_invite_jwt\n+from marsha.core.models import INSTRUCTOR\n from marsha.core.serializers import (\n BaseInitiateUploadSerializer,\n UploadableFileWithExtensionSerializerMixin,\n@@ -74,6 +75,7 @@\n # specific generated fields\n \"infos\",\n \"invite_token\",\n+ \"instructor_token\",\n )\n read_only_fields = (\n \"id\",\n@@ -90,6 +92,7 @@\n recordings = ClassroomRecordingSerializer(many=True, read_only=True)\n infos = serializers.SerializerMethodField()\n invite_token = serializers.SerializerMethodField()\n+ instructor_token = serializers.SerializerMethodField()\n recordings = serializers.SerializerMethodField()\n \n def get_infos(self, obj):\n@@ -105,6 +108,18 @@\n return str(create_classroom_stable_invite_jwt(obj))\n return None\n \n+ def get_instructor_token(self, obj):\n+ \"\"\"Get the instructor token for the classroom.\"\"\"\n+ if self.context.get(\"is_admin\", False):\n+ return str(\n+ create_classroom_stable_invite_jwt(\n+ obj,\n+ role=INSTRUCTOR,\n+ permissions={\"can_update\": True, \"can_access_dashboard\": True},\n+ )\n+ )\n+ return None\n+\n def get_recordings(self, obj):\n \"\"\"Get the recordings for the classroom.\n \ndiff --git a/src/backend/marsha/bbb/utils/tokens.py b/src/backend/marsha/bbb/utils/tokens.py\n--- a/src/backend/marsha/bbb/utils/tokens.py\n+++ b/src/backend/marsha/bbb/utils/tokens.py\n@@ -4,10 +4,11 @@\n from django.conf import settings\n from django.utils import timezone\n \n+from marsha.core.models import NONE\n from marsha.core.simple_jwt.tokens import ResourceAccessToken\n \n \n-def create_classroom_stable_invite_jwt(classroom):\n+def create_classroom_stable_invite_jwt(classroom, role=NONE, permissions=None):\n \"\"\"Create a resource JWT to be used in classroom invite links.\n \n Parameters\n@@ -15,6 +16,12 @@\n classroom : Type[models.Classroom]\n The classroom for which we want to create a JWT.\n \n+ role : str\n+ The role to use in the JWT. If not set, the no role is used.\n+\n+ permissions : dict\n+ The permissions to use in the JWT. If not set, no permissions are used.\n+\n Returns\n -------\n ResourceAccessToken\n@@ -24,6 +31,8 @@\n resource_jwt = ResourceAccessToken.for_resource_id(\n resource_id=str(classroom.id),\n session_id=f\"{classroom.id}-invite\",\n+ roles=[role],\n+ permissions=permissions or {},\n )\n \n # Set a fixed JWT ID\n", "issue": "Create a classroom invite link for an instructor\n## Feature Request\r\n\r\n**Is your feature request related to a problem or unsupported use case? Please describe.**\r\n\r\nWe already have for a classroom an invite link. This invite link is dedicated to people without access to the standalone site nor the LTI application. When they use it they are like a student and can only join the meeting when running.\r\nWe would like the same feature but for an instructor. 
When they use a dedicated invite link for them they can use the classroom dashboard.\r\n\r\n**Describe the solution you'd like**\r\n\r\nWhen an invite link for instructor is used, the user can use the classroom dashboard.\r\n\n", "before_files": [{"content": "\"\"\"Structure of BBB related models API responses with Django Rest Framework serializers.\"\"\"\nfrom datetime import datetime\nimport mimetypes\nfrom os.path import splitext\nfrom urllib.parse import quote_plus\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.db import transaction\nfrom django.urls import reverse\nfrom django.utils import timezone\n\nfrom rest_framework import serializers\n\nfrom marsha.bbb.models import Classroom, ClassroomDocument, ClassroomRecording\nfrom marsha.bbb.utils.bbb_utils import (\n ApiMeetingException,\n get_meeting_infos,\n get_url as get_document_url,\n)\nfrom marsha.bbb.utils.tokens import create_classroom_stable_invite_jwt\nfrom marsha.core.serializers import (\n BaseInitiateUploadSerializer,\n UploadableFileWithExtensionSerializerMixin,\n)\nfrom marsha.core.serializers.base import ReadOnlyModelSerializer\nfrom marsha.core.serializers.playlist import PlaylistLiteSerializer\n\n\nclass ClassroomRecordingSerializer(ReadOnlyModelSerializer):\n \"\"\"A serializer to display a ClassroomRecording resource.\"\"\"\n\n class Meta: # noqa\n model = ClassroomRecording\n fields = (\n \"id\",\n \"classroom\",\n \"record_id\",\n \"started_at\",\n \"video_file_url\",\n )\n read_only_fields = (\n \"id\",\n \"classroom\",\n \"record_id\",\n \"started_at\",\n \"video_file_url\",\n )\n\n # Make sure classroom UUID is converted to a string during serialization\n classroom = serializers.PrimaryKeyRelatedField(\n read_only=True, pk_field=serializers.CharField()\n )\n\n\nclass ClassroomSerializer(serializers.ModelSerializer):\n \"\"\"A serializer to display a Classroom resource.\"\"\"\n\n class Meta: # noqa\n model = Classroom\n fields = (\n \"id\",\n \"lti_id\",\n \"title\",\n \"description\",\n \"playlist\",\n \"meeting_id\",\n \"welcome_text\",\n \"started\",\n \"ended\",\n \"starting_at\",\n \"estimated_duration\",\n \"recordings\",\n # specific generated fields\n \"infos\",\n \"invite_token\",\n )\n read_only_fields = (\n \"id\",\n \"lti_id\",\n \"playlist\",\n \"meeting_id\",\n \"started\",\n \"ended\",\n \"recordings\",\n \"infos\",\n )\n\n playlist = PlaylistLiteSerializer(read_only=True)\n recordings = ClassroomRecordingSerializer(many=True, read_only=True)\n infos = serializers.SerializerMethodField()\n invite_token = serializers.SerializerMethodField()\n recordings = serializers.SerializerMethodField()\n\n def get_infos(self, obj):\n \"\"\"Meeting infos from BBB server.\"\"\"\n try:\n return get_meeting_infos(classroom=obj)\n except ApiMeetingException:\n return None\n\n def get_invite_token(self, obj):\n \"\"\"Get the invite token for the classroom.\"\"\"\n if self.context.get(\"is_admin\", False):\n return str(create_classroom_stable_invite_jwt(obj))\n return None\n\n def get_recordings(self, obj):\n \"\"\"Get the recordings for the classroom.\n\n Only available for admins.\n \"\"\"\n if self.context.get(\"is_admin\", True):\n return ClassroomRecordingSerializer(\n obj.recordings.all(), many=True, context=self.context\n ).data\n return []\n\n def update(self, instance, validated_data):\n if any(\n attribute in validated_data\n for attribute in [\"starting_at\", \"estimated_duration\"]\n ):\n validated_data[\"ended\"] = False\n return 
super().update(instance, validated_data)\n\n def validate_starting_at(self, value):\n \"\"\"Add extra controls for starting_at field.\"\"\"\n # Field starting_at has a new value\n if value != self.instance.starting_at:\n # New value is past, it can't be updated\n if value is not None and value < timezone.now():\n sent_date_with_timezone = datetime.fromisoformat(\n self.context.get(\"request\").data.get(\"starting_at\")\n )\n raise serializers.ValidationError(\n f\"{sent_date_with_timezone} is not a valid date, date should be planned after!\"\n )\n\n return value\n\n\nclass ClassroomLiteSerializer(ReadOnlyModelSerializer):\n \"\"\"Classroom lite serializer without playlist and infos fetch from the BBB api.\"\"\"\n\n class Meta: # noqa\n model = Classroom\n fields = (\n \"id\",\n \"lti_id\",\n \"title\",\n \"description\",\n \"meeting_id\",\n \"welcome_text\",\n \"started\",\n \"ended\",\n \"starting_at\",\n \"estimated_duration\",\n )\n\n\nclass ClassroomSelectLTISerializer(ClassroomSerializer):\n \"\"\"A serializer to display a Classroom resource for LTI select content request.\"\"\"\n\n class Meta: # noqa\n model = Classroom\n fields = (\n \"id\",\n \"lti_id\",\n \"lti_url\",\n \"title\",\n \"description\",\n \"playlist\",\n \"meeting_id\",\n )\n\n lti_url = serializers.SerializerMethodField()\n\n def get_lti_url(self, obj):\n \"\"\"LTI Url of the Classroom.\n\n Parameters\n ----------\n obj : Type[models.Document]\n The document that we want to serialize\n\n Returns\n -------\n String\n the LTI url to be used by LTI consumers\n\n \"\"\"\n return self.context[\"request\"].build_absolute_uri(\n reverse(\"classroom:classroom_lti_view\", args=[obj.id]),\n )\n\n\nclass ClassroomDocumentSerializer(\n UploadableFileWithExtensionSerializerMixin, serializers.ModelSerializer\n):\n \"\"\"A serializer to display a ClassroomDocument resource.\"\"\"\n\n class Meta: # noqa\n model = ClassroomDocument\n fields = (\n \"classroom\",\n \"filename\",\n \"id\",\n \"is_default\",\n \"upload_state\",\n \"uploaded_on\",\n \"url\",\n )\n read_only_fields = (\n \"classroom\",\n \"id\",\n \"upload_state\",\n \"uploaded_on\",\n \"url\",\n )\n\n url = serializers.SerializerMethodField()\n # Make sure classroom UUID is converted to a string during serialization\n classroom = serializers.PrimaryKeyRelatedField(\n read_only=True, pk_field=serializers.CharField()\n )\n\n def to_internal_value(self, data):\n \"\"\"Validate if the size is coherent with django settings.\"\"\"\n max_file_size = settings.CLASSROOM_DOCUMENT_SOURCE_MAX_SIZE\n if \"filename\" in data and \"size\" not in data:\n raise ValidationError({\"size\": [\"File size is required\"]})\n if \"size\" in data and data.pop(\"size\") > max_file_size:\n raise ValidationError(\n {\"size\": [f\"File too large, max size allowed is {max_file_size} Bytes\"]}\n )\n return super().to_internal_value(data)\n\n def create(self, validated_data):\n \"\"\"Force the classroom field to the classroom of the JWT Token if any.\n\n Parameters\n ----------\n validated_data : dictionary\n Dictionary of the deserialized values of each field after validation.\n\n Returns\n -------\n dictionary\n The \"validated_data\" dictionary is returned after modification.\n\n \"\"\"\n resource = self.context[\"request\"].resource\n classroom_id = self.context[\"request\"].data.get(\"classroom\")\n if not validated_data.get(\"classroom_id\"):\n if resource:\n validated_data[\"classroom_id\"] = resource.id\n elif classroom_id:\n validated_data[\"classroom_id\"] = classroom_id\n\n if not 
ClassroomDocument.objects.filter(\n classroom_id=validated_data[\"classroom_id\"]\n ).exists():\n validated_data[\"is_default\"] = True\n\n return super().create(validated_data)\n\n def update(self, instance, validated_data):\n \"\"\"Set the default attribute of the others classroom documents to false if\n set to true for this one.\n\n Parameters\n ----------\n instance : Type[models.ClassroomDocument]\n The document that we want to update\n\n validated_data : dictionary\n Dictionary of the deserialized values of each field after validation.\n\n Returns\n -------\n Type[models.ClassroomDocument]\n The updated document\n\n \"\"\"\n with transaction.atomic():\n instance = super().update(instance, validated_data)\n if validated_data.get(\"is_default\"):\n ClassroomDocument.objects.exclude(id=instance.id).filter(\n classroom=instance.classroom, is_default=True\n ).update(is_default=False)\n return instance\n\n def get_url(self, obj):\n \"\"\"Url of the ClassroomDocument.\n\n Parameters\n ----------\n obj : Type[models.DepositedFile]\n The classroom document that we want to serialize\n\n Returns\n -------\n String or None\n the url to fetch the classroom document on CloudFront\n None if the classroom document is still not uploaded to S3 with success\n\n \"\"\"\n if url := get_document_url(obj):\n return (\n f\"{url}?response-content-disposition=\"\n f\"{quote_plus('attachment; filename=' + obj.filename)}\"\n )\n return None\n\n\nclass ClassroomDocumentInitiateUploadSerializer(BaseInitiateUploadSerializer):\n \"\"\"An initiate-upload serializer dedicated to classroom document.\"\"\"\n\n @property\n def max_upload_file_size(self):\n \"\"\"return the class room document max file size define in the settings.\n\n The @property decorator is used to ease the use of @override_settings\n in tests. 
Otherwise the setting is not changed and we can't easily test\n an upload with a size higher than the one defined in the settings\n \"\"\"\n return settings.CLASSROOM_DOCUMENT_SOURCE_MAX_SIZE\n\n def validate(self, attrs):\n \"\"\"Validate if the mimetype is allowed or not.\"\"\"\n # mimetype is provided, we directly check it\n if attrs[\"mimetype\"] != \"\":\n if attrs[\"mimetype\"] not in settings.ALLOWED_CLASSROOM_DOCUMENT_MIME_TYPES:\n raise serializers.ValidationError(\n {\"mimetype\": f\"{attrs['mimetype']} is not a supported mimetype\"}\n )\n attrs[\"extension\"] = mimetypes.guess_extension(attrs[\"mimetype\"])\n\n # mimetype is not provided, we have to guess it from the extension\n else:\n mimetypes.init()\n extension = splitext(attrs[\"filename\"])[1]\n mimetype = mimetypes.types_map.get(extension)\n if mimetype not in settings.ALLOWED_CLASSROOM_DOCUMENT_MIME_TYPES:\n raise serializers.ValidationError(\n {\"mimetype\": \"mimetype not guessable\"}\n )\n # extension is added to the data in order to be used later\n attrs[\"extension\"] = extension\n attrs[\"mimetype\"] = mimetype\n\n return attrs\n", "path": "src/backend/marsha/bbb/serializers.py"}, {"content": "\"\"\"Specific classroom related simple JWT helpers.\"\"\"\nfrom datetime import timedelta\n\nfrom django.conf import settings\nfrom django.utils import timezone\n\nfrom marsha.core.simple_jwt.tokens import ResourceAccessToken\n\n\ndef create_classroom_stable_invite_jwt(classroom):\n \"\"\"Create a resource JWT to be used in classroom invite links.\n\n Parameters\n ----------\n classroom : Type[models.Classroom]\n The classroom for which we want to create a JWT.\n\n Returns\n -------\n ResourceAccessToken\n The JWT.\n\n \"\"\"\n resource_jwt = ResourceAccessToken.for_resource_id(\n resource_id=str(classroom.id),\n session_id=f\"{classroom.id}-invite\",\n )\n\n # Set a fixed JWT ID\n resource_jwt.set_jti(\n f\"classroom-invite-{classroom.id}-{classroom.created_on.strftime('%Y-%m-%d')}\"\n )\n\n # Set a fixed validity beginning: the classroom creation date\n resource_jwt.set_iat(at_time=classroom.created_on)\n\n # Determine the validity end:\n # - if the classroom has a starting date, the JWT is valid\n # until the starting date plus two days\n # - if the classroom has no starting date, the JWT is valid\n # for a month **starting now** (not on classroom creation)\n if classroom.starting_at:\n validity_end = classroom.starting_at + timedelta(days=2)\n else:\n validity_end = timezone.now().replace(\n hour=0, minute=0, second=0, microsecond=0\n ) + timedelta(days=settings.BBB_INVITE_JWT_DEFAULT_DAYS_DURATION)\n\n resource_jwt.set_exp(\n from_time=classroom.created_on,\n lifetime=validity_end - classroom.created_on,\n )\n\n return resource_jwt\n", "path": "src/backend/marsha/bbb/utils/tokens.py"}], "after_files": [{"content": "\"\"\"Structure of BBB related models API responses with Django Rest Framework serializers.\"\"\"\nfrom datetime import datetime\nimport mimetypes\nfrom os.path import splitext\nfrom urllib.parse import quote_plus\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.db import transaction\nfrom django.urls import reverse\nfrom django.utils import timezone\n\nfrom rest_framework import serializers\n\nfrom marsha.bbb.models import Classroom, ClassroomDocument, ClassroomRecording\nfrom marsha.bbb.utils.bbb_utils import (\n ApiMeetingException,\n get_meeting_infos,\n get_url as get_document_url,\n)\nfrom marsha.bbb.utils.tokens import 
create_classroom_stable_invite_jwt\nfrom marsha.core.models import INSTRUCTOR\nfrom marsha.core.serializers import (\n BaseInitiateUploadSerializer,\n UploadableFileWithExtensionSerializerMixin,\n)\nfrom marsha.core.serializers.base import ReadOnlyModelSerializer\nfrom marsha.core.serializers.playlist import PlaylistLiteSerializer\n\n\nclass ClassroomRecordingSerializer(ReadOnlyModelSerializer):\n \"\"\"A serializer to display a ClassroomRecording resource.\"\"\"\n\n class Meta: # noqa\n model = ClassroomRecording\n fields = (\n \"id\",\n \"classroom\",\n \"record_id\",\n \"started_at\",\n \"video_file_url\",\n )\n read_only_fields = (\n \"id\",\n \"classroom\",\n \"record_id\",\n \"started_at\",\n \"video_file_url\",\n )\n\n # Make sure classroom UUID is converted to a string during serialization\n classroom = serializers.PrimaryKeyRelatedField(\n read_only=True, pk_field=serializers.CharField()\n )\n\n\nclass ClassroomSerializer(serializers.ModelSerializer):\n \"\"\"A serializer to display a Classroom resource.\"\"\"\n\n class Meta: # noqa\n model = Classroom\n fields = (\n \"id\",\n \"lti_id\",\n \"title\",\n \"description\",\n \"playlist\",\n \"meeting_id\",\n \"welcome_text\",\n \"started\",\n \"ended\",\n \"starting_at\",\n \"estimated_duration\",\n \"recordings\",\n # specific generated fields\n \"infos\",\n \"invite_token\",\n \"instructor_token\",\n )\n read_only_fields = (\n \"id\",\n \"lti_id\",\n \"playlist\",\n \"meeting_id\",\n \"started\",\n \"ended\",\n \"recordings\",\n \"infos\",\n )\n\n playlist = PlaylistLiteSerializer(read_only=True)\n recordings = ClassroomRecordingSerializer(many=True, read_only=True)\n infos = serializers.SerializerMethodField()\n invite_token = serializers.SerializerMethodField()\n instructor_token = serializers.SerializerMethodField()\n recordings = serializers.SerializerMethodField()\n\n def get_infos(self, obj):\n \"\"\"Meeting infos from BBB server.\"\"\"\n try:\n return get_meeting_infos(classroom=obj)\n except ApiMeetingException:\n return None\n\n def get_invite_token(self, obj):\n \"\"\"Get the invite token for the classroom.\"\"\"\n if self.context.get(\"is_admin\", False):\n return str(create_classroom_stable_invite_jwt(obj))\n return None\n\n def get_instructor_token(self, obj):\n \"\"\"Get the instructor token for the classroom.\"\"\"\n if self.context.get(\"is_admin\", False):\n return str(\n create_classroom_stable_invite_jwt(\n obj,\n role=INSTRUCTOR,\n permissions={\"can_update\": True, \"can_access_dashboard\": True},\n )\n )\n return None\n\n def get_recordings(self, obj):\n \"\"\"Get the recordings for the classroom.\n\n Only available for admins.\n \"\"\"\n if self.context.get(\"is_admin\", True):\n return ClassroomRecordingSerializer(\n obj.recordings.all(), many=True, context=self.context\n ).data\n return []\n\n def update(self, instance, validated_data):\n if any(\n attribute in validated_data\n for attribute in [\"starting_at\", \"estimated_duration\"]\n ):\n validated_data[\"ended\"] = False\n return super().update(instance, validated_data)\n\n def validate_starting_at(self, value):\n \"\"\"Add extra controls for starting_at field.\"\"\"\n # Field starting_at has a new value\n if value != self.instance.starting_at:\n # New value is past, it can't be updated\n if value is not None and value < timezone.now():\n sent_date_with_timezone = datetime.fromisoformat(\n self.context.get(\"request\").data.get(\"starting_at\")\n )\n raise serializers.ValidationError(\n f\"{sent_date_with_timezone} is not a valid date, date should be 
planned after!\"\n )\n\n return value\n\n\nclass ClassroomLiteSerializer(ReadOnlyModelSerializer):\n \"\"\"Classroom lite serializer without playlist and infos fetch from the BBB api.\"\"\"\n\n class Meta: # noqa\n model = Classroom\n fields = (\n \"id\",\n \"lti_id\",\n \"title\",\n \"description\",\n \"meeting_id\",\n \"welcome_text\",\n \"started\",\n \"ended\",\n \"starting_at\",\n \"estimated_duration\",\n )\n\n\nclass ClassroomSelectLTISerializer(ClassroomSerializer):\n \"\"\"A serializer to display a Classroom resource for LTI select content request.\"\"\"\n\n class Meta: # noqa\n model = Classroom\n fields = (\n \"id\",\n \"lti_id\",\n \"lti_url\",\n \"title\",\n \"description\",\n \"playlist\",\n \"meeting_id\",\n )\n\n lti_url = serializers.SerializerMethodField()\n\n def get_lti_url(self, obj):\n \"\"\"LTI Url of the Classroom.\n\n Parameters\n ----------\n obj : Type[models.Document]\n The document that we want to serialize\n\n Returns\n -------\n String\n the LTI url to be used by LTI consumers\n\n \"\"\"\n return self.context[\"request\"].build_absolute_uri(\n reverse(\"classroom:classroom_lti_view\", args=[obj.id]),\n )\n\n\nclass ClassroomDocumentSerializer(\n UploadableFileWithExtensionSerializerMixin, serializers.ModelSerializer\n):\n \"\"\"A serializer to display a ClassroomDocument resource.\"\"\"\n\n class Meta: # noqa\n model = ClassroomDocument\n fields = (\n \"classroom\",\n \"filename\",\n \"id\",\n \"is_default\",\n \"upload_state\",\n \"uploaded_on\",\n \"url\",\n )\n read_only_fields = (\n \"classroom\",\n \"id\",\n \"upload_state\",\n \"uploaded_on\",\n \"url\",\n )\n\n url = serializers.SerializerMethodField()\n # Make sure classroom UUID is converted to a string during serialization\n classroom = serializers.PrimaryKeyRelatedField(\n read_only=True, pk_field=serializers.CharField()\n )\n\n def to_internal_value(self, data):\n \"\"\"Validate if the size is coherent with django settings.\"\"\"\n max_file_size = settings.CLASSROOM_DOCUMENT_SOURCE_MAX_SIZE\n if \"filename\" in data and \"size\" not in data:\n raise ValidationError({\"size\": [\"File size is required\"]})\n if \"size\" in data and data.pop(\"size\") > max_file_size:\n raise ValidationError(\n {\"size\": [f\"File too large, max size allowed is {max_file_size} Bytes\"]}\n )\n return super().to_internal_value(data)\n\n def create(self, validated_data):\n \"\"\"Force the classroom field to the classroom of the JWT Token if any.\n\n Parameters\n ----------\n validated_data : dictionary\n Dictionary of the deserialized values of each field after validation.\n\n Returns\n -------\n dictionary\n The \"validated_data\" dictionary is returned after modification.\n\n \"\"\"\n resource = self.context[\"request\"].resource\n classroom_id = self.context[\"request\"].data.get(\"classroom\")\n if not validated_data.get(\"classroom_id\"):\n if resource:\n validated_data[\"classroom_id\"] = resource.id\n elif classroom_id:\n validated_data[\"classroom_id\"] = classroom_id\n\n if not ClassroomDocument.objects.filter(\n classroom_id=validated_data[\"classroom_id\"]\n ).exists():\n validated_data[\"is_default\"] = True\n\n return super().create(validated_data)\n\n def update(self, instance, validated_data):\n \"\"\"Set the default attribute of the others classroom documents to false if\n set to true for this one.\n\n Parameters\n ----------\n instance : Type[models.ClassroomDocument]\n The document that we want to update\n\n validated_data : dictionary\n Dictionary of the deserialized values of each field after 
validation.\n\n Returns\n -------\n Type[models.ClassroomDocument]\n The updated document\n\n \"\"\"\n with transaction.atomic():\n instance = super().update(instance, validated_data)\n if validated_data.get(\"is_default\"):\n ClassroomDocument.objects.exclude(id=instance.id).filter(\n classroom=instance.classroom, is_default=True\n ).update(is_default=False)\n return instance\n\n def get_url(self, obj):\n \"\"\"Url of the ClassroomDocument.\n\n Parameters\n ----------\n obj : Type[models.DepositedFile]\n The classroom document that we want to serialize\n\n Returns\n -------\n String or None\n the url to fetch the classroom document on CloudFront\n None if the classroom document is still not uploaded to S3 with success\n\n \"\"\"\n if url := get_document_url(obj):\n return (\n f\"{url}?response-content-disposition=\"\n f\"{quote_plus('attachment; filename=' + obj.filename)}\"\n )\n return None\n\n\nclass ClassroomDocumentInitiateUploadSerializer(BaseInitiateUploadSerializer):\n \"\"\"An initiate-upload serializer dedicated to classroom document.\"\"\"\n\n @property\n def max_upload_file_size(self):\n \"\"\"return the class room document max file size define in the settings.\n\n The @property decorator is used to ease the use of @override_settings\n in tests. Otherwise the setting is not changed and we can't easily test\n an upload with a size higher than the one defined in the settings\n \"\"\"\n return settings.CLASSROOM_DOCUMENT_SOURCE_MAX_SIZE\n\n def validate(self, attrs):\n \"\"\"Validate if the mimetype is allowed or not.\"\"\"\n # mimetype is provided, we directly check it\n if attrs[\"mimetype\"] != \"\":\n if attrs[\"mimetype\"] not in settings.ALLOWED_CLASSROOM_DOCUMENT_MIME_TYPES:\n raise serializers.ValidationError(\n {\"mimetype\": f\"{attrs['mimetype']} is not a supported mimetype\"}\n )\n attrs[\"extension\"] = mimetypes.guess_extension(attrs[\"mimetype\"])\n\n # mimetype is not provided, we have to guess it from the extension\n else:\n mimetypes.init()\n extension = splitext(attrs[\"filename\"])[1]\n mimetype = mimetypes.types_map.get(extension)\n if mimetype not in settings.ALLOWED_CLASSROOM_DOCUMENT_MIME_TYPES:\n raise serializers.ValidationError(\n {\"mimetype\": \"mimetype not guessable\"}\n )\n # extension is added to the data in order to be used later\n attrs[\"extension\"] = extension\n attrs[\"mimetype\"] = mimetype\n\n return attrs\n", "path": "src/backend/marsha/bbb/serializers.py"}, {"content": "\"\"\"Specific classroom related simple JWT helpers.\"\"\"\nfrom datetime import timedelta\n\nfrom django.conf import settings\nfrom django.utils import timezone\n\nfrom marsha.core.models import NONE\nfrom marsha.core.simple_jwt.tokens import ResourceAccessToken\n\n\ndef create_classroom_stable_invite_jwt(classroom, role=NONE, permissions=None):\n \"\"\"Create a resource JWT to be used in classroom invite links.\n\n Parameters\n ----------\n classroom : Type[models.Classroom]\n The classroom for which we want to create a JWT.\n\n role : str\n The role to use in the JWT. If not set, the no role is used.\n\n permissions : dict\n The permissions to use in the JWT. 
If not set, no permissions are used.\n\n Returns\n -------\n ResourceAccessToken\n The JWT.\n\n \"\"\"\n resource_jwt = ResourceAccessToken.for_resource_id(\n resource_id=str(classroom.id),\n session_id=f\"{classroom.id}-invite\",\n roles=[role],\n permissions=permissions or {},\n )\n\n # Set a fixed JWT ID\n resource_jwt.set_jti(\n f\"classroom-invite-{classroom.id}-{classroom.created_on.strftime('%Y-%m-%d')}\"\n )\n\n # Set a fixed validity beginning: the classroom creation date\n resource_jwt.set_iat(at_time=classroom.created_on)\n\n # Determine the validity end:\n # - if the classroom has a starting date, the JWT is valid\n # until the starting date plus two days\n # - if the classroom has no starting date, the JWT is valid\n # for a month **starting now** (not on classroom creation)\n if classroom.starting_at:\n validity_end = classroom.starting_at + timedelta(days=2)\n else:\n validity_end = timezone.now().replace(\n hour=0, minute=0, second=0, microsecond=0\n ) + timedelta(days=settings.BBB_INVITE_JWT_DEFAULT_DAYS_DURATION)\n\n resource_jwt.set_exp(\n from_time=classroom.created_on,\n lifetime=validity_end - classroom.created_on,\n )\n\n return resource_jwt\n", "path": "src/backend/marsha/bbb/utils/tokens.py"}]} | 4,082 | 692 |
gh_patches_debug_664 | rasdani/github-patches | git_diff | fedora-infra__bodhi-507 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
setup.py test doesn't include extra_requires from fedmsg deps
```
======================================================================
ERROR: Failure: ImportError (No module named psutil)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/decause/.virtualenvs/bodhi-python2.7/lib/python2.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
addr.filename, addr.module)
File "/home/decause/.virtualenvs/bodhi-python2.7/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
return self.importFromDir(dir_path, fqname)
File "/home/decause/.virtualenvs/bodhi-python2.7/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
mod = load_module(part_fqname, fh, filename, desc)
File "/home/decause/code/bodhi/bodhi/tests/test_masher.py", line 27, in <module>
from bodhi.consumers.masher import Masher, MasherThread
File "/home/decause/code/bodhi/bodhi/consumers/masher.py", line 30, in <module>
import fedmsg.consumers
File "/home/decause/code/bodhi/.eggs/fedmsg-0.16.0-py2.7.egg/fedmsg/consumers/__init__.py", line 25, in <module>
import psutil
ImportError: No module named psutil
----------------------------------------------------------------------
Ran 335 tests in 138.787s
FAILED (errors=1)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import __main__
2 __requires__ = __main__.__requires__ = 'WebOb>=1.4.1'
3 import pkg_resources
4
5 # The following two imports are required to shut up an
6 # atexit error when running tests with python 2.7
7 import logging
8 import multiprocessing
9
10 import os
11 import sys
12
13 from setuptools import setup, find_packages
14
15 here = os.path.abspath(os.path.dirname(__file__))
16 README = open(os.path.join(here, 'README.rst')).read()
17 CHANGES = open(os.path.join(here, 'CHANGES.txt')).read()
18
19 requires = [
20 'pyramid',
21 'pyramid_mako',
22 'pyramid_debugtoolbar',
23 'pyramid_tm',
24 'waitress',
25 'colander',
26 'cornice',
27
28 'python-openid',
29 'pyramid_fas_openid',
30 'packagedb-cli',
31
32 'sqlalchemy',
33 'zope.sqlalchemy',
34
35 'webhelpers',
36 'progressbar',
37
38 'bunch',
39
40 # for captchas
41 'cryptography',
42 'Pillow',
43
44 # Useful tools
45 'kitchen',
46 'python-fedora',
47 'pylibravatar',
48 'pyDNS',
49 'dogpile.cache',
50 'arrow',
51 'markdown',
52
53 # i18n, that we're not actually doing yet.
54 #'Babel',
55 #'lingua',
56
57 # External resources
58 'python-bugzilla',
59 'simplemediawiki',
60 'fedmsg',
61
62 'Sphinx',
63
64 # For the bodhi-client
65 'click',
66
67 'WebOb>=1.4.1',
68 ]
69
70 if sys.version_info[:3] < (2,7,0):
71 requires.append('importlib')
72
73 if sys.version_info[:3] < (2,5,0):
74 requires.append('pysqlite')
75
76 setup(name='bodhi',
77 version='2.0',
78 description='bodhi',
79 long_description=README + '\n\n' + CHANGES,
80 classifiers=[
81 "Programming Language :: Python",
82 "Framework :: Pyramid",
83 "Topic :: Internet :: WWW/HTTP",
84 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
85 ],
86 author='',
87 author_email='',
88 url='',
89 keywords='web fedora pyramid',
90 packages=find_packages(),
91 include_package_data=True,
92 zip_safe=False,
93 install_requires = requires,
94 tests_require = [
95 'nose',
96 'nose-cov',
97 'webtest',
98 'mock'
99 ],
100 test_suite="nose.collector",
101 message_extractors = { '.': [
102 #('**.py', 'lingua_python', None),
103 #('**.mak', 'lingua_xml', None),
104 ]},
105 entry_points = """\
106 [paste.app_factory]
107 main = bodhi:main
108 [console_scripts]
109 initialize_bodhi_db = bodhi.scripts.initializedb:main
110 bodhi = bodhi.cli:cli
111 bodhi-push = bodhi.push:push
112 bodhi-expire-overrides = bodhi.scripts.expire_overrides:main
113 [moksha.consumer]
114 masher = bodhi.consumers.masher:Masher
115 updates = bodhi.consumers.updates:UpdatesHandler
116 """,
117 paster_plugins=['pyramid'],
118 )
119
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -57,7 +57,9 @@
# External resources
'python-bugzilla',
'simplemediawiki',
- 'fedmsg',
+
+ # "python setup.py test" needs one of fedmsg's setup.py extra_requires
+ 'fedmsg[consumers]',
'Sphinx',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -57,7 +57,9 @@\n # External resources\n 'python-bugzilla',\n 'simplemediawiki',\n- 'fedmsg',\n+\n+ # \"python setup.py test\" needs one of fedmsg's setup.py extra_requires\n+ 'fedmsg[consumers]',\n \n 'Sphinx',\n", "issue": "setup.py test doesn't include extra_requires from fedmsg deps\n```\n======================================================================\nERROR: Failure: ImportError (No module named psutil)\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"/home/decause/.virtualenvs/bodhi-python2.7/lib/python2.7/site-packages/nose/loader.py\", line 418, in loadTestsFromName\n addr.filename, addr.module)\n File \"/home/decause/.virtualenvs/bodhi-python2.7/lib/python2.7/site-packages/nose/importer.py\", line 47, in importFromPath\n return self.importFromDir(dir_path, fqname)\n File \"/home/decause/.virtualenvs/bodhi-python2.7/lib/python2.7/site-packages/nose/importer.py\", line 94, in importFromDir\n mod = load_module(part_fqname, fh, filename, desc)\n File \"/home/decause/code/bodhi/bodhi/tests/test_masher.py\", line 27, in <module>\n from bodhi.consumers.masher import Masher, MasherThread\n File \"/home/decause/code/bodhi/bodhi/consumers/masher.py\", line 30, in <module>\n import fedmsg.consumers\n File \"/home/decause/code/bodhi/.eggs/fedmsg-0.16.0-py2.7.egg/fedmsg/consumers/__init__.py\", line 25, in <module>\n import psutil\nImportError: No module named psutil\n\n----------------------------------------------------------------------\nRan 335 tests in 138.787s\n\nFAILED (errors=1)\n```\n\n", "before_files": [{"content": "import __main__\n__requires__ = __main__.__requires__ = 'WebOb>=1.4.1'\nimport pkg_resources\n\n# The following two imports are required to shut up an\n# atexit error when running tests with python 2.7\nimport logging\nimport multiprocessing\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\nREADME = open(os.path.join(here, 'README.rst')).read()\nCHANGES = open(os.path.join(here, 'CHANGES.txt')).read()\n\nrequires = [\n 'pyramid',\n 'pyramid_mako',\n 'pyramid_debugtoolbar',\n 'pyramid_tm',\n 'waitress',\n 'colander',\n 'cornice',\n\n 'python-openid',\n 'pyramid_fas_openid',\n 'packagedb-cli',\n\n 'sqlalchemy',\n 'zope.sqlalchemy',\n\n 'webhelpers',\n 'progressbar',\n\n 'bunch',\n\n # for captchas\n 'cryptography',\n 'Pillow',\n\n # Useful tools\n 'kitchen',\n 'python-fedora',\n 'pylibravatar',\n 'pyDNS',\n 'dogpile.cache',\n 'arrow',\n 'markdown',\n\n # i18n, that we're not actually doing yet.\n #'Babel',\n #'lingua',\n\n # External resources\n 'python-bugzilla',\n 'simplemediawiki',\n 'fedmsg',\n\n 'Sphinx',\n\n # For the bodhi-client\n 'click',\n\n 'WebOb>=1.4.1',\n ]\n\nif sys.version_info[:3] < (2,7,0):\n requires.append('importlib')\n\nif sys.version_info[:3] < (2,5,0):\n requires.append('pysqlite')\n\nsetup(name='bodhi',\n version='2.0',\n description='bodhi',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Programming Language :: Python\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n ],\n author='',\n author_email='',\n url='',\n keywords='web fedora pyramid',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires = requires,\n tests_require = [\n 'nose',\n 'nose-cov',\n 'webtest',\n 'mock'\n 
],\n test_suite=\"nose.collector\",\n message_extractors = { '.': [\n #('**.py', 'lingua_python', None),\n #('**.mak', 'lingua_xml', None),\n ]},\n entry_points = \"\"\"\\\n [paste.app_factory]\n main = bodhi:main\n [console_scripts]\n initialize_bodhi_db = bodhi.scripts.initializedb:main\n bodhi = bodhi.cli:cli\n bodhi-push = bodhi.push:push\n bodhi-expire-overrides = bodhi.scripts.expire_overrides:main\n [moksha.consumer]\n masher = bodhi.consumers.masher:Masher\n updates = bodhi.consumers.updates:UpdatesHandler\n \"\"\",\n paster_plugins=['pyramid'],\n )\n\n", "path": "setup.py"}], "after_files": [{"content": "import __main__\n__requires__ = __main__.__requires__ = 'WebOb>=1.4.1'\nimport pkg_resources\n\n# The following two imports are required to shut up an\n# atexit error when running tests with python 2.7\nimport logging\nimport multiprocessing\n\nimport os\nimport sys\n\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\nREADME = open(os.path.join(here, 'README.rst')).read()\nCHANGES = open(os.path.join(here, 'CHANGES.txt')).read()\n\nrequires = [\n 'pyramid',\n 'pyramid_mako',\n 'pyramid_debugtoolbar',\n 'pyramid_tm',\n 'waitress',\n 'colander',\n 'cornice',\n\n 'python-openid',\n 'pyramid_fas_openid',\n 'packagedb-cli',\n\n 'sqlalchemy',\n 'zope.sqlalchemy',\n\n 'webhelpers',\n 'progressbar',\n\n 'bunch',\n\n # for captchas\n 'cryptography',\n 'Pillow',\n\n # Useful tools\n 'kitchen',\n 'python-fedora',\n 'pylibravatar',\n 'pyDNS',\n 'dogpile.cache',\n 'arrow',\n 'markdown',\n\n # i18n, that we're not actually doing yet.\n #'Babel',\n #'lingua',\n\n # External resources\n 'python-bugzilla',\n 'simplemediawiki',\n\n # \"python setup.py test\" needs one of fedmsg's setup.py extra_requires\n 'fedmsg[consumers]',\n\n 'Sphinx',\n\n # For the bodhi-client\n 'click',\n\n 'WebOb>=1.4.1',\n ]\n\nif sys.version_info[:3] < (2,7,0):\n requires.append('importlib')\n\nif sys.version_info[:3] < (2,5,0):\n requires.append('pysqlite')\n\nsetup(name='bodhi',\n version='2.0',\n description='bodhi',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Programming Language :: Python\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n ],\n author='',\n author_email='',\n url='',\n keywords='web fedora pyramid',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires = requires,\n tests_require = [\n 'nose',\n 'nose-cov',\n 'webtest',\n 'mock'\n ],\n test_suite=\"nose.collector\",\n message_extractors = { '.': [\n #('**.py', 'lingua_python', None),\n #('**.mak', 'lingua_xml', None),\n ]},\n entry_points = \"\"\"\\\n [paste.app_factory]\n main = bodhi:main\n [console_scripts]\n initialize_bodhi_db = bodhi.scripts.initializedb:main\n bodhi = bodhi.cli:cli\n bodhi-push = bodhi.push:push\n bodhi-expire-overrides = bodhi.scripts.expire_overrides:main\n [moksha.consumer]\n masher = bodhi.consumers.masher:Masher\n updates = bodhi.consumers.updates:UpdatesHandler\n \"\"\",\n paster_plugins=['pyramid'],\n )\n\n", "path": "setup.py"}]} | 1,613 | 93 |
gh_patches_debug_25493 | rasdani/github-patches | git_diff | liqd__adhocracy4-211 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Subject with new line crashes email sending
Subject with new line crashes email sending
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `adhocracy4/emails/mixins.py`
Content:
```
1 from email.mime.image import MIMEImage
2
3 from django.contrib.staticfiles import finders
4 from .base import EmailBase
5
6
7 class PlatformEmailMixin:
8 """
9 Attaches the static file images/logo.png so it can be used in an html
10 email.
11 """
12 def get_attachments(self):
13 attachments = super().get_attachments()
14 filename = (
15 finders.find('images/email_logo.png')
16 or finders.find('images/email_logo.svg')
17 )
18 if filename:
19 if filename.endswith('.png'):
20 imagetype = 'png'
21 else:
22 imagetype = 'svg+xml'
23
24 with open(filename, 'rb') as f:
25 logo = MIMEImage(f.read(), imagetype)
26
27 logo.add_header('Content-ID', '<{}>'.format('logo'))
28 return attachments + [logo]
29 return attachments
30
31
32 class SyncEmailMixin(EmailBase):
33 """Send Emails synchronously."""
34
35 @classmethod
36 def send(cls, object, *args, **kwargs):
37 """Call dispatch immediately"""
38 return cls().dispatch(object, *args, **kwargs)
39
```
Path: `adhocracy4/emails/base.py`
Content:
```
1 from django.conf import settings
2 from django.contrib.contenttypes.models import ContentType
3 from django.contrib.sites import models as site_models
4 from django.core.mail.message import EmailMultiAlternatives
5 from django.template.loader import select_template
6 from django.utils import translation
7
8 from . import tasks
9
10
11 class EmailBase:
12 site_id = 1
13 object = None
14 template_name = None
15 fallback_language = 'en'
16 for_moderator = False
17
18 def get_site(self):
19 return site_models.Site.objects.get(pk=self.site_id)
20
21 def get_host(self):
22 site = self.get_site()
23 ssl_enabled = True
24 if site.domain.startswith('localhost:'):
25 ssl_enabled = False
26
27 url = 'http{ssl_flag}://{domain}'.format(
28 ssl_flag='s' if ssl_enabled else '',
29 domain=site.domain,
30 )
31 return url
32
33 def get_context(self):
34 object_context_key = self.object.__class__.__name__.lower()
35 return {
36 'email': self,
37 'site': self.get_site(),
38 object_context_key: self.object
39 }
40
41 def get_receivers(self):
42 return []
43
44 def get_attachments(self):
45 return []
46
47 def get_languages(self, receiver):
48 return [translation.get_language(), self.fallback_language]
49
50 def get_reply_to(self):
51 return None
52
53 @classmethod
54 def send(cls, object, *args, **kwargs):
55 """Send email asynchronously.
56
57 NOTE: args and kwargs must be JSON serializable.
58 """
59 ct = ContentType.objects.get_for_model(object)
60 tasks.send_async(
61 cls.__module__, cls.__name__,
62 ct.app_label, ct.model, object.pk,
63 args, kwargs)
64 return []
65
66 def render(self, template_name, context):
67 languages = self.get_languages(context['receiver'])
68 template = select_template([
69 '{}.{}.email'.format(template_name, lang)
70 for lang in languages
71 ])
72
73 # Get the actually chosen language from the template name
74 language = template.template.name.split('.', 2)[-2]
75
76 with translation.override(language):
77 parts = []
78 for part_type in ('subject', 'txt', 'html'):
79 context['part_type'] = part_type
80 parts.append(template.render(context))
81 context.pop('part_type')
82
83 return tuple(parts)
84
85 def dispatch(self, object, *args, **kwargs):
86 self.object = object
87 self.kwargs = kwargs
88 receivers = self.get_receivers()
89 context = self.get_context()
90 context.update(kwargs)
91 attachments = self.get_attachments()
92 template = self.template_name
93
94 mails = []
95 for receiver in receivers:
96 context['receiver'] = receiver
97 (subject, text, html) = self.render(template, context)
98 context.pop('receiver')
99
100 if hasattr(receiver, 'email'):
101 to_address = receiver.email
102 else:
103 to_address = receiver
104
105 mail = EmailMultiAlternatives(
106 subject=subject.strip(),
107 body=text,
108 from_email=settings.DEFAULT_FROM_EMAIL,
109 to=[to_address],
110 reply_to=self.get_reply_to(),
111 )
112
113 if len(attachments) > 0:
114 mail.mixed_subtype = 'related'
115
116 for attachment in attachments:
117 mail.attach(attachment)
118
119 mail.attach_alternative(html, 'text/html')
120 mail.send()
121 mails.append(mail)
122 return mails
123
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/adhocracy4/emails/base.py b/adhocracy4/emails/base.py
--- a/adhocracy4/emails/base.py
+++ b/adhocracy4/emails/base.py
@@ -1,3 +1,5 @@
+import re
+
from django.conf import settings
from django.contrib.contenttypes.models import ContentType
from django.contrib.sites import models as site_models
@@ -102,8 +104,10 @@
else:
to_address = receiver
+ subject_clean = re.sub(r'[\r\n]', '', subject).strip()
+
mail = EmailMultiAlternatives(
- subject=subject.strip(),
+ subject=subject_clean,
body=text,
from_email=settings.DEFAULT_FROM_EMAIL,
to=[to_address],
diff --git a/adhocracy4/emails/mixins.py b/adhocracy4/emails/mixins.py
--- a/adhocracy4/emails/mixins.py
+++ b/adhocracy4/emails/mixins.py
@@ -1,7 +1,6 @@
from email.mime.image import MIMEImage
from django.contrib.staticfiles import finders
-from .base import EmailBase
class PlatformEmailMixin:
@@ -29,7 +28,7 @@
return attachments
-class SyncEmailMixin(EmailBase):
+class SyncEmailMixin:
"""Send Emails synchronously."""
@classmethod
| {"golden_diff": "diff --git a/adhocracy4/emails/base.py b/adhocracy4/emails/base.py\n--- a/adhocracy4/emails/base.py\n+++ b/adhocracy4/emails/base.py\n@@ -1,3 +1,5 @@\n+import re\n+\n from django.conf import settings\n from django.contrib.contenttypes.models import ContentType\n from django.contrib.sites import models as site_models\n@@ -102,8 +104,10 @@\n else:\n to_address = receiver\n \n+ subject_clean = re.sub(r'[\\r\\n]', '', subject).strip()\n+\n mail = EmailMultiAlternatives(\n- subject=subject.strip(),\n+ subject=subject_clean,\n body=text,\n from_email=settings.DEFAULT_FROM_EMAIL,\n to=[to_address],\ndiff --git a/adhocracy4/emails/mixins.py b/adhocracy4/emails/mixins.py\n--- a/adhocracy4/emails/mixins.py\n+++ b/adhocracy4/emails/mixins.py\n@@ -1,7 +1,6 @@\n from email.mime.image import MIMEImage\n \n from django.contrib.staticfiles import finders\n-from .base import EmailBase\n \n \n class PlatformEmailMixin:\n@@ -29,7 +28,7 @@\n return attachments\n \n \n-class SyncEmailMixin(EmailBase):\n+class SyncEmailMixin:\n \"\"\"Send Emails synchronously.\"\"\"\n \n @classmethod\n", "issue": "Subject with new line crashes email sending\n\nSubject with new line crashes email sending\n\n", "before_files": [{"content": "from email.mime.image import MIMEImage\n\nfrom django.contrib.staticfiles import finders\nfrom .base import EmailBase\n\n\nclass PlatformEmailMixin:\n \"\"\"\n Attaches the static file images/logo.png so it can be used in an html\n email.\n \"\"\"\n def get_attachments(self):\n attachments = super().get_attachments()\n filename = (\n finders.find('images/email_logo.png')\n or finders.find('images/email_logo.svg')\n )\n if filename:\n if filename.endswith('.png'):\n imagetype = 'png'\n else:\n imagetype = 'svg+xml'\n\n with open(filename, 'rb') as f:\n logo = MIMEImage(f.read(), imagetype)\n\n logo.add_header('Content-ID', '<{}>'.format('logo'))\n return attachments + [logo]\n return attachments\n\n\nclass SyncEmailMixin(EmailBase):\n \"\"\"Send Emails synchronously.\"\"\"\n\n @classmethod\n def send(cls, object, *args, **kwargs):\n \"\"\"Call dispatch immediately\"\"\"\n return cls().dispatch(object, *args, **kwargs)\n", "path": "adhocracy4/emails/mixins.py"}, {"content": "from django.conf import settings\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.contrib.sites import models as site_models\nfrom django.core.mail.message import EmailMultiAlternatives\nfrom django.template.loader import select_template\nfrom django.utils import translation\n\nfrom . 
import tasks\n\n\nclass EmailBase:\n site_id = 1\n object = None\n template_name = None\n fallback_language = 'en'\n for_moderator = False\n\n def get_site(self):\n return site_models.Site.objects.get(pk=self.site_id)\n\n def get_host(self):\n site = self.get_site()\n ssl_enabled = True\n if site.domain.startswith('localhost:'):\n ssl_enabled = False\n\n url = 'http{ssl_flag}://{domain}'.format(\n ssl_flag='s' if ssl_enabled else '',\n domain=site.domain,\n )\n return url\n\n def get_context(self):\n object_context_key = self.object.__class__.__name__.lower()\n return {\n 'email': self,\n 'site': self.get_site(),\n object_context_key: self.object\n }\n\n def get_receivers(self):\n return []\n\n def get_attachments(self):\n return []\n\n def get_languages(self, receiver):\n return [translation.get_language(), self.fallback_language]\n\n def get_reply_to(self):\n return None\n\n @classmethod\n def send(cls, object, *args, **kwargs):\n \"\"\"Send email asynchronously.\n\n NOTE: args and kwargs must be JSON serializable.\n \"\"\"\n ct = ContentType.objects.get_for_model(object)\n tasks.send_async(\n cls.__module__, cls.__name__,\n ct.app_label, ct.model, object.pk,\n args, kwargs)\n return []\n\n def render(self, template_name, context):\n languages = self.get_languages(context['receiver'])\n template = select_template([\n '{}.{}.email'.format(template_name, lang)\n for lang in languages\n ])\n\n # Get the actually chosen language from the template name\n language = template.template.name.split('.', 2)[-2]\n\n with translation.override(language):\n parts = []\n for part_type in ('subject', 'txt', 'html'):\n context['part_type'] = part_type\n parts.append(template.render(context))\n context.pop('part_type')\n\n return tuple(parts)\n\n def dispatch(self, object, *args, **kwargs):\n self.object = object\n self.kwargs = kwargs\n receivers = self.get_receivers()\n context = self.get_context()\n context.update(kwargs)\n attachments = self.get_attachments()\n template = self.template_name\n\n mails = []\n for receiver in receivers:\n context['receiver'] = receiver\n (subject, text, html) = self.render(template, context)\n context.pop('receiver')\n\n if hasattr(receiver, 'email'):\n to_address = receiver.email\n else:\n to_address = receiver\n\n mail = EmailMultiAlternatives(\n subject=subject.strip(),\n body=text,\n from_email=settings.DEFAULT_FROM_EMAIL,\n to=[to_address],\n reply_to=self.get_reply_to(),\n )\n\n if len(attachments) > 0:\n mail.mixed_subtype = 'related'\n\n for attachment in attachments:\n mail.attach(attachment)\n\n mail.attach_alternative(html, 'text/html')\n mail.send()\n mails.append(mail)\n return mails\n", "path": "adhocracy4/emails/base.py"}], "after_files": [{"content": "from email.mime.image import MIMEImage\n\nfrom django.contrib.staticfiles import finders\n\n\nclass PlatformEmailMixin:\n \"\"\"\n Attaches the static file images/logo.png so it can be used in an html\n email.\n \"\"\"\n def get_attachments(self):\n attachments = super().get_attachments()\n filename = (\n finders.find('images/email_logo.png')\n or finders.find('images/email_logo.svg')\n )\n if filename:\n if filename.endswith('.png'):\n imagetype = 'png'\n else:\n imagetype = 'svg+xml'\n\n with open(filename, 'rb') as f:\n logo = MIMEImage(f.read(), imagetype)\n\n logo.add_header('Content-ID', '<{}>'.format('logo'))\n return attachments + [logo]\n return attachments\n\n\nclass SyncEmailMixin:\n \"\"\"Send Emails synchronously.\"\"\"\n\n @classmethod\n def send(cls, object, *args, **kwargs):\n \"\"\"Call 
dispatch immediately\"\"\"\n return cls().dispatch(object, *args, **kwargs)\n", "path": "adhocracy4/emails/mixins.py"}, {"content": "import re\n\nfrom django.conf import settings\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.contrib.sites import models as site_models\nfrom django.core.mail.message import EmailMultiAlternatives\nfrom django.template.loader import select_template\nfrom django.utils import translation\n\nfrom . import tasks\n\n\nclass EmailBase:\n site_id = 1\n object = None\n template_name = None\n fallback_language = 'en'\n for_moderator = False\n\n def get_site(self):\n return site_models.Site.objects.get(pk=self.site_id)\n\n def get_host(self):\n site = self.get_site()\n ssl_enabled = True\n if site.domain.startswith('localhost:'):\n ssl_enabled = False\n\n url = 'http{ssl_flag}://{domain}'.format(\n ssl_flag='s' if ssl_enabled else '',\n domain=site.domain,\n )\n return url\n\n def get_context(self):\n object_context_key = self.object.__class__.__name__.lower()\n return {\n 'email': self,\n 'site': self.get_site(),\n object_context_key: self.object\n }\n\n def get_receivers(self):\n return []\n\n def get_attachments(self):\n return []\n\n def get_languages(self, receiver):\n return [translation.get_language(), self.fallback_language]\n\n def get_reply_to(self):\n return None\n\n @classmethod\n def send(cls, object, *args, **kwargs):\n \"\"\"Send email asynchronously.\n\n NOTE: args and kwargs must be JSON serializable.\n \"\"\"\n ct = ContentType.objects.get_for_model(object)\n tasks.send_async(\n cls.__module__, cls.__name__,\n ct.app_label, ct.model, object.pk,\n args, kwargs)\n return []\n\n def render(self, template_name, context):\n languages = self.get_languages(context['receiver'])\n template = select_template([\n '{}.{}.email'.format(template_name, lang)\n for lang in languages\n ])\n\n # Get the actually chosen language from the template name\n language = template.template.name.split('.', 2)[-2]\n\n with translation.override(language):\n parts = []\n for part_type in ('subject', 'txt', 'html'):\n context['part_type'] = part_type\n parts.append(template.render(context))\n context.pop('part_type')\n\n return tuple(parts)\n\n def dispatch(self, object, *args, **kwargs):\n self.object = object\n self.kwargs = kwargs\n receivers = self.get_receivers()\n context = self.get_context()\n context.update(kwargs)\n attachments = self.get_attachments()\n template = self.template_name\n\n mails = []\n for receiver in receivers:\n context['receiver'] = receiver\n (subject, text, html) = self.render(template, context)\n context.pop('receiver')\n\n if hasattr(receiver, 'email'):\n to_address = receiver.email\n else:\n to_address = receiver\n\n subject_clean = re.sub(r'[\\r\\n]', '', subject).strip()\n\n mail = EmailMultiAlternatives(\n subject=subject_clean,\n body=text,\n from_email=settings.DEFAULT_FROM_EMAIL,\n to=[to_address],\n reply_to=self.get_reply_to(),\n )\n\n if len(attachments) > 0:\n mail.mixed_subtype = 'related'\n\n for attachment in attachments:\n mail.attach(attachment)\n\n mail.attach_alternative(html, 'text/html')\n mail.send()\n mails.append(mail)\n return mails\n", "path": "adhocracy4/emails/base.py"}]} | 1,586 | 303 |
gh_patches_debug_28410 | rasdani/github-patches | git_diff | mne-tools__mne-python-9092 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
split code block in examples/preprocessing/plot_virtual_evoked
right now, because all plots come from a single code block, they are plotted at the top of the example in a group of 4 (and consequently the plots are really small). By splitting the 4 plotting calls into different code blocks, they will plot larger / be easier to see & compare, without increasing run time of the example. Code blocks can be split with a line of 79 `#` marks (adding a bit of explanatory text too is usually a good idea)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/preprocessing/plot_virtual_evoked.py`
Content:
```
1 """
2 =======================
3 Remap MEG channel types
4 =======================
5
6 In this example, MEG data are remapped from one channel type to another.
7 This is useful to:
8
9 - visualize combined magnetometers and gradiometers as magnetometers
10 or gradiometers.
11 - run statistics from both magnetometers and gradiometers while
12 working with a single type of channels.
13 """
14
15 # Author: Mainak Jas <[email protected]>
16
17 # License: BSD (3-clause)
18
19 import mne
20 from mne.datasets import sample
21
22 print(__doc__)
23
24 # read the evoked
25 data_path = sample.data_path()
26 fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
27 evoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))
28
29 # go from grad + mag to mag
30 virt_evoked = evoked.as_type('mag')
31 evoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')
32 virt_evoked.plot_topomap(ch_type='mag', time_unit='s',
33 title='mag (interpolated from mag + grad)')
34
35 # go from grad + mag to grad
36 virt_evoked = evoked.as_type('grad')
37 evoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')
38 virt_evoked.plot_topomap(ch_type='grad', time_unit='s',
39 title='grad (interpolated from mag + grad)')
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/examples/preprocessing/plot_virtual_evoked.py b/examples/preprocessing/plot_virtual_evoked.py
--- a/examples/preprocessing/plot_virtual_evoked.py
+++ b/examples/preprocessing/plot_virtual_evoked.py
@@ -26,14 +26,30 @@
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))
-# go from grad + mag to mag
+###############################################################################
+# First, let's call remap gradiometers to magnometers, and plot
+# the original and remapped topomaps of the magnetometers.
+
+# go from grad + mag to mag and plot original mag
virt_evoked = evoked.as_type('mag')
evoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')
+
+###############################################################################
+
+# plot interpolated grad + mag
virt_evoked.plot_topomap(ch_type='mag', time_unit='s',
title='mag (interpolated from mag + grad)')
-# go from grad + mag to grad
+###############################################################################
+# Now, we remap magnometers to gradiometers, and plot
+# the original and remapped topomaps of the gradiometers
+
+# go from grad + mag to grad and plot original grad
virt_evoked = evoked.as_type('grad')
evoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')
+
+###############################################################################
+
+# plot interpolated grad + mag
virt_evoked.plot_topomap(ch_type='grad', time_unit='s',
title='grad (interpolated from mag + grad)')
| {"golden_diff": "diff --git a/examples/preprocessing/plot_virtual_evoked.py b/examples/preprocessing/plot_virtual_evoked.py\n--- a/examples/preprocessing/plot_virtual_evoked.py\n+++ b/examples/preprocessing/plot_virtual_evoked.py\n@@ -26,14 +26,30 @@\n fname = data_path + '/MEG/sample/sample_audvis-ave.fif'\n evoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))\n \n-# go from grad + mag to mag\n+###############################################################################\n+# First, let's call remap gradiometers to magnometers, and plot\n+# the original and remapped topomaps of the magnetometers.\n+\n+# go from grad + mag to mag and plot original mag\n virt_evoked = evoked.as_type('mag')\n evoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')\n+\n+###############################################################################\n+\n+# plot interpolated grad + mag\n virt_evoked.plot_topomap(ch_type='mag', time_unit='s',\n title='mag (interpolated from mag + grad)')\n \n-# go from grad + mag to grad\n+###############################################################################\n+# Now, we remap magnometers to gradiometers, and plot\n+# the original and remapped topomaps of the gradiometers\n+\n+# go from grad + mag to grad and plot original grad\n virt_evoked = evoked.as_type('grad')\n evoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')\n+\n+###############################################################################\n+\n+# plot interpolated grad + mag\n virt_evoked.plot_topomap(ch_type='grad', time_unit='s',\n title='grad (interpolated from mag + grad)')\n", "issue": "split code block in examples/preprocessing/plot_virtual_evoked\nright now, because all plots come from a single code block, they are plotted at the top of the example in a group of 4 (and consequently the plots are really small). By splitting the 4 plotting calls into different code blocks, they will plot larger / be easier to see & compare, without increasing run time of the example. 
Code blocks can be split with a line of 79 `#` marks (adding a bit of explanatory text too is usually a good idea)\n", "before_files": [{"content": "\"\"\"\n=======================\nRemap MEG channel types\n=======================\n\nIn this example, MEG data are remapped from one channel type to another.\nThis is useful to:\n\n - visualize combined magnetometers and gradiometers as magnetometers\n or gradiometers.\n - run statistics from both magnetometers and gradiometers while\n working with a single type of channels.\n\"\"\"\n\n# Author: Mainak Jas <[email protected]>\n\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)\n\n# read the evoked\ndata_path = sample.data_path()\nfname = data_path + '/MEG/sample/sample_audvis-ave.fif'\nevoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))\n\n# go from grad + mag to mag\nvirt_evoked = evoked.as_type('mag')\nevoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')\nvirt_evoked.plot_topomap(ch_type='mag', time_unit='s',\n title='mag (interpolated from mag + grad)')\n\n# go from grad + mag to grad\nvirt_evoked = evoked.as_type('grad')\nevoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')\nvirt_evoked.plot_topomap(ch_type='grad', time_unit='s',\n title='grad (interpolated from mag + grad)')\n", "path": "examples/preprocessing/plot_virtual_evoked.py"}], "after_files": [{"content": "\"\"\"\n=======================\nRemap MEG channel types\n=======================\n\nIn this example, MEG data are remapped from one channel type to another.\nThis is useful to:\n\n - visualize combined magnetometers and gradiometers as magnetometers\n or gradiometers.\n - run statistics from both magnetometers and gradiometers while\n working with a single type of channels.\n\"\"\"\n\n# Author: Mainak Jas <[email protected]>\n\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)\n\n# read the evoked\ndata_path = sample.data_path()\nfname = data_path + '/MEG/sample/sample_audvis-ave.fif'\nevoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))\n\n###############################################################################\n# First, let's call remap gradiometers to magnometers, and plot\n# the original and remapped topomaps of the magnetometers.\n\n# go from grad + mag to mag and plot original mag\nvirt_evoked = evoked.as_type('mag')\nevoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')\n\n###############################################################################\n\n# plot interpolated grad + mag\nvirt_evoked.plot_topomap(ch_type='mag', time_unit='s',\n title='mag (interpolated from mag + grad)')\n\n###############################################################################\n# Now, we remap magnometers to gradiometers, and plot\n# the original and remapped topomaps of the gradiometers\n\n# go from grad + mag to grad and plot original grad\nvirt_evoked = evoked.as_type('grad')\nevoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')\n\n###############################################################################\n\n# plot interpolated grad + mag\nvirt_evoked.plot_topomap(ch_type='grad', time_unit='s',\n title='grad (interpolated from mag + grad)')\n", "path": "examples/preprocessing/plot_virtual_evoked.py"}]} | 773 | 365 |
gh_patches_debug_57312 | rasdani/github-patches | git_diff | vllm-project__vllm-402 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Input and output orders are different
I was trying to do batch inference by feeding a list of texts into `llm.generate()`. But I found that the outputs have different orders from examples in the input list. For example, the result for the first example in the input list may be the second in the output list. Anything suggestions that I may have done something wrong? Thanks!
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vllm/entrypoints/llm.py`
Content:
```
1 from typing import List, Optional, Union
2
3 from tqdm import tqdm
4 from transformers import PreTrainedTokenizer, PreTrainedTokenizerFast
5
6 from vllm.engine.arg_utils import EngineArgs
7 from vllm.engine.llm_engine import LLMEngine
8 from vllm.outputs import RequestOutput
9 from vllm.sampling_params import SamplingParams
10 from vllm.utils import Counter
11
12
13 class LLM:
14 """An LLM for generating texts from given prompts and sampling parameters.
15
16 This class includes a tokenizer, a language model (possibly distributed
17 across multiple GPUs), and GPU memory space allocated for intermediate
18 states (aka KV cache). Given a batch of prompts and sampling parameters,
19 this class generates texts from the model, using an intelligent batching
20 mechanism and efficient memory management.
21
22 NOTE: This class is intended to be used for offline inference. For online
23 serving, use the `AsyncLLMEngine` class instead.
24 NOTE: For the comprehensive list of arguments, see `EngineArgs`.
25
26 Args:
27 model: The name or path of a HuggingFace Transformers model.
28 tokenizer: The name or path of a HuggingFace Transformers tokenizer.
29 tokenizer_mode: The tokenizer mode. "auto" will use the fast tokenizer
30 if available, and "slow" will always use the slow tokenizer.
31 trust_remote_code: Trust remote code (e.g., from HuggingFace) when
32 downloading the model and tokenizer.
33 tensor_parallel_size: The number of GPUs to use for distributed
34 execution with tensor parallelism.
35 dtype: The data type for the model weights and activations. Currently,
36 we support `float32`, `float16`, and `bfloat16`. If `auto`, we use
37 the `torch_dtype` attribute specified in the model config file.
38 However, if the `torch_dtype` in the config is `float32`, we will
39 use `float16` instead.
40 seed: The seed to initialize the random number generator for sampling.
41 """
42
43 def __init__(
44 self,
45 model: str,
46 tokenizer: Optional[str] = None,
47 tokenizer_mode: str = "auto",
48 trust_remote_code: bool = False,
49 tensor_parallel_size: int = 1,
50 dtype: str = "auto",
51 seed: int = 0,
52 **kwargs,
53 ) -> None:
54 if "disable_log_stats" not in kwargs:
55 kwargs["disable_log_stats"] = True
56 engine_args = EngineArgs(
57 model=model,
58 tokenizer=tokenizer,
59 tokenizer_mode=tokenizer_mode,
60 trust_remote_code=trust_remote_code,
61 tensor_parallel_size=tensor_parallel_size,
62 dtype=dtype,
63 seed=seed,
64 **kwargs,
65 )
66 self.llm_engine = LLMEngine.from_engine_args(engine_args)
67 self.request_counter = Counter()
68
69 def get_tokenizer(
70 self) -> Union[PreTrainedTokenizer, PreTrainedTokenizerFast]:
71 return self.llm_engine.tokenizer
72
73 def set_tokenizer(
74 self,
75 tokenizer: Union[PreTrainedTokenizer, PreTrainedTokenizerFast],
76 ) -> None:
77 self.llm_engine.tokenizer = tokenizer
78
79 def generate(
80 self,
81 prompts: Optional[Union[str, List[str]]] = None,
82 sampling_params: Optional[SamplingParams] = None,
83 prompt_token_ids: Optional[List[List[int]]] = None,
84 use_tqdm: bool = True,
85 ) -> List[RequestOutput]:
86 """Generates the completions for the input prompts.
87
88 NOTE: This class automatically batches the given prompts, considering
89 the memory constraint. For the best performance, put all of your prompts
90 into a single list and pass it to this method.
91
92 Args:
93 prompts: A list of prompts to generate completions for.
94 sampling_params: The sampling parameters for text generation. If
95 None, we use the default sampling parameters.
96 prompt_token_ids: A list of token IDs for the prompts. If None, we
97 use the tokenizer to convert the prompts to token IDs.
98 use_tqdm: Whether to use tqdm to display the progress bar.
99
100 Returns:
101 A list of `RequestOutput` objects containing the generated
102 completions in the same order as the input prompts.
103 """
104 if prompts is None and prompt_token_ids is None:
105 raise ValueError("Either prompts or prompt_token_ids must be "
106 "provided.")
107 if isinstance(prompts, str):
108 # Convert a single prompt to a list.
109 prompts = [prompts]
110 if prompts is not None and prompt_token_ids is not None:
111 if len(prompts) != len(prompt_token_ids):
112 raise ValueError("The lengths of prompts and prompt_token_ids "
113 "must be the same.")
114 if sampling_params is None:
115 # Use default sampling params.
116 sampling_params = SamplingParams()
117
118 # Add requests to the engine.
119 if prompts is not None:
120 num_requests = len(prompts)
121 else:
122 num_requests = len(prompt_token_ids)
123 for i in range(num_requests):
124 prompt = prompts[i] if prompts is not None else None
125 if prompt_token_ids is None:
126 token_ids = None
127 else:
128 token_ids = prompt_token_ids[i]
129 self._add_request(prompt, sampling_params, token_ids)
130 return self._run_engine(use_tqdm)
131
132 def _add_request(
133 self,
134 prompt: Optional[str],
135 sampling_params: SamplingParams,
136 prompt_token_ids: Optional[List[int]],
137 ) -> None:
138 request_id = str(next(self.request_counter))
139 self.llm_engine.add_request(request_id, prompt, sampling_params,
140 prompt_token_ids)
141
142 def _run_engine(self, use_tqdm: bool) -> List[RequestOutput]:
143 # Initialize tqdm.
144 if use_tqdm:
145 num_requests = self.llm_engine.get_num_unfinished_requests()
146 pbar = tqdm(total=num_requests, desc="Processed prompts")
147 # Run the engine.
148 outputs: List[RequestOutput] = []
149 while self.llm_engine.has_unfinished_requests():
150 step_outputs = self.llm_engine.step()
151 for output in step_outputs:
152 if output.finished:
153 outputs.append(output)
154 if use_tqdm:
155 pbar.update(1)
156 if use_tqdm:
157 pbar.close()
158 return outputs
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vllm/entrypoints/llm.py b/vllm/entrypoints/llm.py
--- a/vllm/entrypoints/llm.py
+++ b/vllm/entrypoints/llm.py
@@ -155,4 +155,8 @@
pbar.update(1)
if use_tqdm:
pbar.close()
+ # Sort the outputs by request ID.
+ # This is necessary because some requests may be finished earlier than
+ # its previous requests.
+ outputs = sorted(outputs, key=lambda x: int(x.request_id))
return outputs
| {"golden_diff": "diff --git a/vllm/entrypoints/llm.py b/vllm/entrypoints/llm.py\n--- a/vllm/entrypoints/llm.py\n+++ b/vllm/entrypoints/llm.py\n@@ -155,4 +155,8 @@\n pbar.update(1)\n if use_tqdm:\n pbar.close()\n+ # Sort the outputs by request ID.\n+ # This is necessary because some requests may be finished earlier than\n+ # its previous requests.\n+ outputs = sorted(outputs, key=lambda x: int(x.request_id))\n return outputs\n", "issue": "Input and output orders are different\nI was trying to do batch inference by feeding a list of texts into `llm.generate()`. But I found that the outputs have different orders from examples in the input list. For example, the result for the first example in the input list may be the second in the output list. Anything suggestions that I may have done something wrong? Thanks!\n", "before_files": [{"content": "from typing import List, Optional, Union\n\nfrom tqdm import tqdm\nfrom transformers import PreTrainedTokenizer, PreTrainedTokenizerFast\n\nfrom vllm.engine.arg_utils import EngineArgs\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.outputs import RequestOutput\nfrom vllm.sampling_params import SamplingParams\nfrom vllm.utils import Counter\n\n\nclass LLM:\n \"\"\"An LLM for generating texts from given prompts and sampling parameters.\n\n This class includes a tokenizer, a language model (possibly distributed\n across multiple GPUs), and GPU memory space allocated for intermediate\n states (aka KV cache). Given a batch of prompts and sampling parameters,\n this class generates texts from the model, using an intelligent batching\n mechanism and efficient memory management.\n\n NOTE: This class is intended to be used for offline inference. For online\n serving, use the `AsyncLLMEngine` class instead.\n NOTE: For the comprehensive list of arguments, see `EngineArgs`.\n\n Args:\n model: The name or path of a HuggingFace Transformers model.\n tokenizer: The name or path of a HuggingFace Transformers tokenizer.\n tokenizer_mode: The tokenizer mode. \"auto\" will use the fast tokenizer\n if available, and \"slow\" will always use the slow tokenizer.\n trust_remote_code: Trust remote code (e.g., from HuggingFace) when\n downloading the model and tokenizer.\n tensor_parallel_size: The number of GPUs to use for distributed\n execution with tensor parallelism.\n dtype: The data type for the model weights and activations. Currently,\n we support `float32`, `float16`, and `bfloat16`. 
If `auto`, we use\n the `torch_dtype` attribute specified in the model config file.\n However, if the `torch_dtype` in the config is `float32`, we will\n use `float16` instead.\n seed: The seed to initialize the random number generator for sampling.\n \"\"\"\n\n def __init__(\n self,\n model: str,\n tokenizer: Optional[str] = None,\n tokenizer_mode: str = \"auto\",\n trust_remote_code: bool = False,\n tensor_parallel_size: int = 1,\n dtype: str = \"auto\",\n seed: int = 0,\n **kwargs,\n ) -> None:\n if \"disable_log_stats\" not in kwargs:\n kwargs[\"disable_log_stats\"] = True\n engine_args = EngineArgs(\n model=model,\n tokenizer=tokenizer,\n tokenizer_mode=tokenizer_mode,\n trust_remote_code=trust_remote_code,\n tensor_parallel_size=tensor_parallel_size,\n dtype=dtype,\n seed=seed,\n **kwargs,\n )\n self.llm_engine = LLMEngine.from_engine_args(engine_args)\n self.request_counter = Counter()\n\n def get_tokenizer(\n self) -> Union[PreTrainedTokenizer, PreTrainedTokenizerFast]:\n return self.llm_engine.tokenizer\n\n def set_tokenizer(\n self,\n tokenizer: Union[PreTrainedTokenizer, PreTrainedTokenizerFast],\n ) -> None:\n self.llm_engine.tokenizer = tokenizer\n\n def generate(\n self,\n prompts: Optional[Union[str, List[str]]] = None,\n sampling_params: Optional[SamplingParams] = None,\n prompt_token_ids: Optional[List[List[int]]] = None,\n use_tqdm: bool = True,\n ) -> List[RequestOutput]:\n \"\"\"Generates the completions for the input prompts.\n\n NOTE: This class automatically batches the given prompts, considering\n the memory constraint. For the best performance, put all of your prompts\n into a single list and pass it to this method.\n\n Args:\n prompts: A list of prompts to generate completions for.\n sampling_params: The sampling parameters for text generation. If\n None, we use the default sampling parameters.\n prompt_token_ids: A list of token IDs for the prompts. 
If None, we\n use the tokenizer to convert the prompts to token IDs.\n use_tqdm: Whether to use tqdm to display the progress bar.\n\n Returns:\n A list of `RequestOutput` objects containing the generated\n completions in the same order as the input prompts.\n \"\"\"\n if prompts is None and prompt_token_ids is None:\n raise ValueError(\"Either prompts or prompt_token_ids must be \"\n \"provided.\")\n if isinstance(prompts, str):\n # Convert a single prompt to a list.\n prompts = [prompts]\n if prompts is not None and prompt_token_ids is not None:\n if len(prompts) != len(prompt_token_ids):\n raise ValueError(\"The lengths of prompts and prompt_token_ids \"\n \"must be the same.\")\n if sampling_params is None:\n # Use default sampling params.\n sampling_params = SamplingParams()\n\n # Add requests to the engine.\n if prompts is not None:\n num_requests = len(prompts)\n else:\n num_requests = len(prompt_token_ids)\n for i in range(num_requests):\n prompt = prompts[i] if prompts is not None else None\n if prompt_token_ids is None:\n token_ids = None\n else:\n token_ids = prompt_token_ids[i]\n self._add_request(prompt, sampling_params, token_ids)\n return self._run_engine(use_tqdm)\n\n def _add_request(\n self,\n prompt: Optional[str],\n sampling_params: SamplingParams,\n prompt_token_ids: Optional[List[int]],\n ) -> None:\n request_id = str(next(self.request_counter))\n self.llm_engine.add_request(request_id, prompt, sampling_params,\n prompt_token_ids)\n\n def _run_engine(self, use_tqdm: bool) -> List[RequestOutput]:\n # Initialize tqdm.\n if use_tqdm:\n num_requests = self.llm_engine.get_num_unfinished_requests()\n pbar = tqdm(total=num_requests, desc=\"Processed prompts\")\n # Run the engine.\n outputs: List[RequestOutput] = []\n while self.llm_engine.has_unfinished_requests():\n step_outputs = self.llm_engine.step()\n for output in step_outputs:\n if output.finished:\n outputs.append(output)\n if use_tqdm:\n pbar.update(1)\n if use_tqdm:\n pbar.close()\n return outputs\n", "path": "vllm/entrypoints/llm.py"}], "after_files": [{"content": "from typing import List, Optional, Union\n\nfrom tqdm import tqdm\nfrom transformers import PreTrainedTokenizer, PreTrainedTokenizerFast\n\nfrom vllm.engine.arg_utils import EngineArgs\nfrom vllm.engine.llm_engine import LLMEngine\nfrom vllm.outputs import RequestOutput\nfrom vllm.sampling_params import SamplingParams\nfrom vllm.utils import Counter\n\n\nclass LLM:\n \"\"\"An LLM for generating texts from given prompts and sampling parameters.\n\n This class includes a tokenizer, a language model (possibly distributed\n across multiple GPUs), and GPU memory space allocated for intermediate\n states (aka KV cache). Given a batch of prompts and sampling parameters,\n this class generates texts from the model, using an intelligent batching\n mechanism and efficient memory management.\n\n NOTE: This class is intended to be used for offline inference. For online\n serving, use the `AsyncLLMEngine` class instead.\n NOTE: For the comprehensive list of arguments, see `EngineArgs`.\n\n Args:\n model: The name or path of a HuggingFace Transformers model.\n tokenizer: The name or path of a HuggingFace Transformers tokenizer.\n tokenizer_mode: The tokenizer mode. 
\"auto\" will use the fast tokenizer\n if available, and \"slow\" will always use the slow tokenizer.\n trust_remote_code: Trust remote code (e.g., from HuggingFace) when\n downloading the model and tokenizer.\n tensor_parallel_size: The number of GPUs to use for distributed\n execution with tensor parallelism.\n dtype: The data type for the model weights and activations. Currently,\n we support `float32`, `float16`, and `bfloat16`. If `auto`, we use\n the `torch_dtype` attribute specified in the model config file.\n However, if the `torch_dtype` in the config is `float32`, we will\n use `float16` instead.\n seed: The seed to initialize the random number generator for sampling.\n \"\"\"\n\n def __init__(\n self,\n model: str,\n tokenizer: Optional[str] = None,\n tokenizer_mode: str = \"auto\",\n trust_remote_code: bool = False,\n tensor_parallel_size: int = 1,\n dtype: str = \"auto\",\n seed: int = 0,\n **kwargs,\n ) -> None:\n if \"disable_log_stats\" not in kwargs:\n kwargs[\"disable_log_stats\"] = True\n engine_args = EngineArgs(\n model=model,\n tokenizer=tokenizer,\n tokenizer_mode=tokenizer_mode,\n trust_remote_code=trust_remote_code,\n tensor_parallel_size=tensor_parallel_size,\n dtype=dtype,\n seed=seed,\n **kwargs,\n )\n self.llm_engine = LLMEngine.from_engine_args(engine_args)\n self.request_counter = Counter()\n\n def get_tokenizer(\n self) -> Union[PreTrainedTokenizer, PreTrainedTokenizerFast]:\n return self.llm_engine.tokenizer\n\n def set_tokenizer(\n self,\n tokenizer: Union[PreTrainedTokenizer, PreTrainedTokenizerFast],\n ) -> None:\n self.llm_engine.tokenizer = tokenizer\n\n def generate(\n self,\n prompts: Optional[Union[str, List[str]]] = None,\n sampling_params: Optional[SamplingParams] = None,\n prompt_token_ids: Optional[List[List[int]]] = None,\n use_tqdm: bool = True,\n ) -> List[RequestOutput]:\n \"\"\"Generates the completions for the input prompts.\n\n NOTE: This class automatically batches the given prompts, considering\n the memory constraint. For the best performance, put all of your prompts\n into a single list and pass it to this method.\n\n Args:\n prompts: A list of prompts to generate completions for.\n sampling_params: The sampling parameters for text generation. If\n None, we use the default sampling parameters.\n prompt_token_ids: A list of token IDs for the prompts. 
If None, we\n use the tokenizer to convert the prompts to token IDs.\n use_tqdm: Whether to use tqdm to display the progress bar.\n\n Returns:\n A list of `RequestOutput` objects containing the generated\n completions in the same order as the input prompts.\n \"\"\"\n if prompts is None and prompt_token_ids is None:\n raise ValueError(\"Either prompts or prompt_token_ids must be \"\n \"provided.\")\n if isinstance(prompts, str):\n # Convert a single prompt to a list.\n prompts = [prompts]\n if prompts is not None and prompt_token_ids is not None:\n if len(prompts) != len(prompt_token_ids):\n raise ValueError(\"The lengths of prompts and prompt_token_ids \"\n \"must be the same.\")\n if sampling_params is None:\n # Use default sampling params.\n sampling_params = SamplingParams()\n\n # Add requests to the engine.\n if prompts is not None:\n num_requests = len(prompts)\n else:\n num_requests = len(prompt_token_ids)\n for i in range(num_requests):\n prompt = prompts[i] if prompts is not None else None\n if prompt_token_ids is None:\n token_ids = None\n else:\n token_ids = prompt_token_ids[i]\n self._add_request(prompt, sampling_params, token_ids)\n return self._run_engine(use_tqdm)\n\n def _add_request(\n self,\n prompt: Optional[str],\n sampling_params: SamplingParams,\n prompt_token_ids: Optional[List[int]],\n ) -> None:\n request_id = str(next(self.request_counter))\n self.llm_engine.add_request(request_id, prompt, sampling_params,\n prompt_token_ids)\n\n def _run_engine(self, use_tqdm: bool) -> List[RequestOutput]:\n # Initialize tqdm.\n if use_tqdm:\n num_requests = self.llm_engine.get_num_unfinished_requests()\n pbar = tqdm(total=num_requests, desc=\"Processed prompts\")\n # Run the engine.\n outputs: List[RequestOutput] = []\n while self.llm_engine.has_unfinished_requests():\n step_outputs = self.llm_engine.step()\n for output in step_outputs:\n if output.finished:\n outputs.append(output)\n if use_tqdm:\n pbar.update(1)\n if use_tqdm:\n pbar.close()\n # Sort the outputs by request ID.\n # This is necessary because some requests may be finished earlier than\n # its previous requests.\n outputs = sorted(outputs, key=lambda x: int(x.request_id))\n return outputs\n", "path": "vllm/entrypoints/llm.py"}]} | 2,070 | 137 |
gh_patches_debug_157 | rasdani/github-patches | git_diff | doccano__doccano-1907 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot access Django admin panel in a Heroku deployment
How to reproduce the behaviour
---------
The FAQ describes how to [create a user via the Django admin panel](https://github.com/doccano/doccano/blob/master/docs/faq.md#how-to-create-a-user) for a locally hosted Doccano. When run locally, I have no problem reaching the admin panel at `http://localhost:8000/admin/`; on Heroku, however, it is not working.
I have tried to reach it on
- `https://mydeployment.herokuapp.com/admin/`
- `https://mydeployment.herokuapp.com/admin/login`
- `https://mydeployment.herokuapp.com/admin/login/`
- `http://mydeployment.herokuapp.com/admin/`
Those URLs all result in a `500 Internal Server Error`.
Am I missing something here, or is this perhaps a bug?
Your Environment
---------
<!-- Include details of your environment. -->
* Operating System: -
* Python Version Used: -
* When did you install doccano: A few days ago
* How did you install doccano (Heroku button etc): Heroku button
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/config/settings/heroku.py`
Content:
```
1 import django_heroku
2
3 from .base import * # noqa: F401,F403
4
5 django_heroku.settings(locals(), test_runner=False)
6
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/config/settings/heroku.py b/backend/config/settings/heroku.py
--- a/backend/config/settings/heroku.py
+++ b/backend/config/settings/heroku.py
@@ -2,4 +2,4 @@
from .base import * # noqa: F401,F403
-django_heroku.settings(locals(), test_runner=False)
+django_heroku.settings(locals(), test_runner=False, staticfiles=False)
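The `staticfiles=False` flag stops django-heroku from overriding doccano's own static-files configuration; by default the library switches `STATICFILES_STORAGE` to WhiteNoise's manifest storage, and a missing manifest entry for an admin asset raises at template-render time, which is one plausible source of the 500 on `/admin/`. Once `/admin/` loads, the admin user can also be created non-interactively from a one-off dyno; a minimal sketch, assuming the standard Django user manager and placeholder credentials:
```python
# Run inside `heroku run python backend/manage.py shell`.
# Username, email and password below are placeholders, not doccano defaults.
from django.contrib.auth import get_user_model

User = get_user_model()
if not User.objects.filter(username="admin").exists():
    User.objects.create_superuser(
        username="admin",
        email="admin@example.com",
        password="change-me",
    )
```
Running `heroku run python backend/manage.py createsuperuser` achieves the same thing interactively.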
| {"golden_diff": "diff --git a/backend/config/settings/heroku.py b/backend/config/settings/heroku.py\n--- a/backend/config/settings/heroku.py\n+++ b/backend/config/settings/heroku.py\n@@ -2,4 +2,4 @@\n \n from .base import * # noqa: F401,F403\n \n-django_heroku.settings(locals(), test_runner=False)\n+django_heroku.settings(locals(), test_runner=False, staticfiles=False)\n", "issue": "Cannot access Django admin panel in a Heroku deployment\nHow to reproduce the behaviour\r\n---------\r\nThe FAQ describes how to [create a user via the Django admin panel](https://github.com/doccano/doccano/blob/master/docs/faq.md#how-to-create-a-user) for a locally hosted Doccano. When run locally, I have no problem to reach the admin panel on `http://localhost:8000/admin/`, in Heroku however it is not working.\r\n\r\nI have tried to reach it on\r\n- `https://mydeployment.herokuapp.com/admin/`\r\n- `https://mydeployment.herokuapp.com/admin/login`\r\n- `https://mydeployment.herokuapp.com/admin/login/`\r\n- `http://mydeployment.herokuapp.com/admin/`\r\n\r\nThose urls all result in a `500 Internal Server Error`.\r\nAm I missing something here, or is this perhaps a bug?\r\n\r\nYour Environment\r\n---------\r\n<!-- Include details of your environment. -->\r\n\r\n* Operating System: -\r\n* Python Version Used: -\r\n* When did you install doccano: A few days ago\r\n* How did you install doccano (Heroku button etc): Heroku button\r\n\n", "before_files": [{"content": "import django_heroku\n\nfrom .base import * # noqa: F401,F403\n\ndjango_heroku.settings(locals(), test_runner=False)\n", "path": "backend/config/settings/heroku.py"}], "after_files": [{"content": "import django_heroku\n\nfrom .base import * # noqa: F401,F403\n\ndjango_heroku.settings(locals(), test_runner=False, staticfiles=False)\n", "path": "backend/config/settings/heroku.py"}]} | 545 | 95 |
gh_patches_debug_4139 | rasdani/github-patches | git_diff | ephios-dev__ephios-364 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add caching
As of #278 some settings reside in the database, so we should definitely cache database requests.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ephios/settings.py`
Content:
```
1 import copy
2 import os
3 from email.utils import getaddresses
4 from importlib import metadata
5
6 import environ
7 from django.contrib.messages import constants
8
9 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
10
11 env = environ.Env()
12 # for syntax see https://django-environ.readthedocs.io/en/latest/
13 environ.Env.read_env(env_file=os.path.join(BASE_DIR, ".env"))
14
15 DATA_DIR = env.str("DATA_DIR", default=os.path.join(BASE_DIR, "data"))
16 if not os.path.exists(DATA_DIR):
17 os.mkdir(DATA_DIR)
18
19 SECRET_KEY = env.str("SECRET_KEY")
20 DEBUG = env.bool("DEBUG")
21 ALLOWED_HOSTS = env.list("ALLOWED_HOSTS")
22 SITE_URL = env.str("SITE_URL")
23 if SITE_URL.endswith("/"):
24 SITE_URL = SITE_URL[:-1]
25
26 if not DEBUG:
27 SESSION_COOKIE_SECURE = True
28 CSRF_COOKIE_SECURE = True
29 X_FRAME_OPTIONS = "DENY"
30 SECURE_CONTENT_TYPE_NOSNIFF = True
31 SECURE_BROWSER_XSS_FILTER = True
32 SECURE_SSL_REDIRECT = True
33 SECURE_HSTS_SECONDS = 3600
34 SECURE_HSTS_INCLUDE_SUBDOMAINS = True
35 SECURE_REFERRER_POLICY = "same-origin"
36
37 # Application definition
38
39 INSTALLED_APPS = [
40 "django.contrib.admin",
41 "django.contrib.auth",
42 "polymorphic",
43 "django.contrib.contenttypes",
44 "django.contrib.sessions",
45 "django.contrib.messages",
46 "django.contrib.staticfiles",
47 "guardian",
48 "django_select2",
49 "djangoformsetjs",
50 "compressor",
51 "recurrence",
52 "statici18n",
53 "dynamic_preferences.users.apps.UserPreferencesConfig",
54 "crispy_forms",
55 ]
56
57 EPHIOS_CORE_MODULES = [
58 "ephios.core",
59 "ephios.extra",
60 ]
61 INSTALLED_APPS += EPHIOS_CORE_MODULES
62
63 CORE_PLUGINS = [
64 "ephios.plugins.basesignup",
65 "ephios.plugins.pages",
66 "ephios.plugins.guests",
67 ]
68 PLUGINS = copy.copy(CORE_PLUGINS)
69 for ep in metadata.entry_points().get("ephios.plugins", []):
70 PLUGINS.append(ep.module)
71
72 INSTALLED_APPS += PLUGINS
73
74 INSTALLED_APPS += ["dynamic_preferences"] # must come after our apps to collect preferences
75
76 MIDDLEWARE = [
77 "django.middleware.security.SecurityMiddleware",
78 "django.contrib.sessions.middleware.SessionMiddleware",
79 "django.middleware.common.CommonMiddleware",
80 "django.middleware.csrf.CsrfViewMiddleware",
81 "django.contrib.auth.middleware.AuthenticationMiddleware",
82 "django.contrib.messages.middleware.MessageMiddleware",
83 "django.middleware.clickjacking.XFrameOptionsMiddleware",
84 "csp.middleware.CSPMiddleware",
85 ]
86
87 ROOT_URLCONF = "ephios.urls"
88
89 TEMPLATES = [
90 {
91 "BACKEND": "django.template.backends.django.DjangoTemplates",
92 "DIRS": [os.path.join(BASE_DIR, "ephios/templates")],
93 "APP_DIRS": True,
94 "OPTIONS": {
95 "context_processors": [
96 "django.template.context_processors.debug",
97 "django.template.context_processors.request",
98 "django.contrib.auth.context_processors.auth",
99 "django.contrib.messages.context_processors.messages",
100 "django.template.context_processors.i18n",
101 "dynamic_preferences.processors.global_preferences",
102 "ephios.core.context.ephios_base_context",
103 ],
104 },
105 },
106 ]
107
108 LOCALE_PATHS = (os.path.join(BASE_DIR, "ephios/locale"),)
109
110 WSGI_APPLICATION = "ephios.wsgi.application"
111
112 # Database
113 # https://docs.djangoproject.com/en/3.0/ref/settings/#databases
114
115 DATABASES = {"default": env.db_url()}
116
117 # Password validation
118 # https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
119
120 AUTH_PASSWORD_VALIDATORS = [
121 {
122 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
123 },
124 {
125 "NAME": "ephios.core.utils.CustomMinimumLengthValidator",
126 },
127 {
128 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
129 },
130 {
131 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
132 },
133 ]
134
135 AUTHENTICATION_BACKENDS = (
136 "django.contrib.auth.backends.ModelBackend",
137 "guardian.backends.ObjectPermissionBackend",
138 )
139
140 AUTH_USER_MODEL = "core.UserProfile"
141 LOGIN_REDIRECT_URL = "/"
142 PASSWORD_RESET_TIMEOUT = 28 * 24 * 60 * 60 # seconds
143
144 # Internationalization
145 # https://docs.djangoproject.com/en/3.0/topics/i18n/
146
147 LANGUAGE_CODE = "de"
148
149 TIME_ZONE = "Europe/Berlin"
150
151 USE_I18N = True
152
153 USE_L10N = True
154
155 USE_TZ = True
156
157 # Static files (CSS, JavaScript, Images)
158 # https://docs.djangoproject.com/en/3.0/howto/static-files/
159
160 STATIC_URL = env.str("STATIC_URL")
161 STATIC_ROOT = env.str("STATIC_ROOT")
162 STATICFILES_DIRS = (os.path.join(BASE_DIR, "ephios/static"),)
163 STATICFILES_FINDERS = (
164 "django.contrib.staticfiles.finders.FileSystemFinder",
165 "django.contrib.staticfiles.finders.AppDirectoriesFinder",
166 "compressor.finders.CompressorFinder",
167 )
168 COMPRESS_ENABLED = not DEBUG
169
170 # mail configuration
171 EMAIL_CONFIG = env.email_url("EMAIL_URL")
172 vars().update(EMAIL_CONFIG)
173 DEFAULT_FROM_EMAIL = env.str("DEFAULT_FROM_EMAIL")
174 SERVER_EMAIL = env.str("SERVER_EMAIL")
175 ADMINS = getaddresses([env("ADMINS")])
176
177 # Guardian configuration
178 ANONYMOUS_USER_NAME = None
179 GUARDIAN_MONKEY_PATCH = False
180
181 # django-select2
182 # Prevent django-select from loading the select2 resources as we want to serve them locally
183 SELECT2_JS = ""
184 SELECT2_CSS = ""
185 SELECT2_I18N_PATH = ""
186
187 # django-debug-toolbar
188 if DEBUG:
189 INSTALLED_APPS.append("django_extensions")
190 INSTALLED_APPS.append("debug_toolbar")
191 MIDDLEWARE.insert(0, "debug_toolbar.middleware.DebugToolbarMiddleware")
192 INTERNAL_IPS = env.str("INTERNAL_IPS")
193
194 # django-csp
195 # Bootstrap requires embedded SVG files loaded via a data URI. This is not ideal, but will only be fixed in
196 # bootstrap v5 or v6. See https://github.com/twbs/bootstrap/issues/25394 for details on the problem and
197 # https://security.stackexchange.com/a/167244 on why allowing data: is considered okay
198 CSP_IMG_SRC = ("'self'", "data:")
199
200 # django-crispy-forms
201 CRISPY_TEMPLATE_PACK = "bootstrap4"
202
203 # django.contrib.messages
204 MESSAGE_TAGS = {
205 constants.DEBUG: "alert-info",
206 constants.INFO: "alert-info",
207 constants.SUCCESS: "alert-success",
208 constants.WARNING: "alert-warning",
209 constants.ERROR: "alert-danger",
210 }
211
212 # PWA
213 PWA_APP_ICONS = [
214 {"src": "/static/ephios/img/ephios-192x.png", "sizes": "192x192", "purpose": "any maskable"},
215 {"src": "/static/ephios/img/ephios-512x.png", "sizes": "512x512", "purpose": "any maskable"},
216 {"src": "/static/ephios/img/ephios-1024x.png", "sizes": "1024x1024", "purpose": "any maskable"},
217 ]
218
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ephios/settings.py b/ephios/settings.py
--- a/ephios/settings.py
+++ b/ephios/settings.py
@@ -114,6 +114,12 @@
DATABASES = {"default": env.db_url()}
+# Caches
+CACHES = {"default": env.cache_url(default="locmemcache://")}
+SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
+DJANGO_REDIS_IGNORE_EXCEPTIONS = True
+DJANGO_REDIS_LOG_IGNORED_EXCEPTIONS = True
+
# Password validation
# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
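`env.cache_url()` comes from django-environ and reads a `CACHE_URL`-style environment variable, so deployments can point it at Redis (hence the django-redis settings) while the `locmemcache://` default keeps local development dependency-free; the `cached_db` session engine additionally saves one session query per request. A minimal sketch of how a hot, database-backed lookup can then go through the cache; the key name, preference name and timeout are illustrative, not taken from ephios:
```python
# Wrap a frequently read, database-backed value in Django's cache API.
# The cache key, preference name and 5-minute timeout are assumptions.
from django.core.cache import cache

def organization_name(global_preferences):
    return cache.get_or_set(
        "global-preferences:organization-name",
        lambda: global_preferences["general__organization_name"],
        timeout=300,
    )
```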
| {"golden_diff": "diff --git a/ephios/settings.py b/ephios/settings.py\n--- a/ephios/settings.py\n+++ b/ephios/settings.py\n@@ -114,6 +114,12 @@\n \n DATABASES = {\"default\": env.db_url()}\n \n+# Caches\n+CACHES = {\"default\": env.cache_url(default=\"locmemcache://\")}\n+SESSION_ENGINE = \"django.contrib.sessions.backends.cached_db\"\n+DJANGO_REDIS_IGNORE_EXCEPTIONS = True\n+DJANGO_REDIS_LOG_IGNORED_EXCEPTIONS = True\n+\n # Password validation\n # https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators\n", "issue": "Add caching\nAs of #278 some settings reside in the database, so we should defintely cache database requests.\n", "before_files": [{"content": "import copy\nimport os\nfrom email.utils import getaddresses\nfrom importlib import metadata\n\nimport environ\nfrom django.contrib.messages import constants\n\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\nenv = environ.Env()\n# for syntax see https://django-environ.readthedocs.io/en/latest/\nenviron.Env.read_env(env_file=os.path.join(BASE_DIR, \".env\"))\n\nDATA_DIR = env.str(\"DATA_DIR\", default=os.path.join(BASE_DIR, \"data\"))\nif not os.path.exists(DATA_DIR):\n os.mkdir(DATA_DIR)\n\nSECRET_KEY = env.str(\"SECRET_KEY\")\nDEBUG = env.bool(\"DEBUG\")\nALLOWED_HOSTS = env.list(\"ALLOWED_HOSTS\")\nSITE_URL = env.str(\"SITE_URL\")\nif SITE_URL.endswith(\"/\"):\n SITE_URL = SITE_URL[:-1]\n\nif not DEBUG:\n SESSION_COOKIE_SECURE = True\n CSRF_COOKIE_SECURE = True\n X_FRAME_OPTIONS = \"DENY\"\n SECURE_CONTENT_TYPE_NOSNIFF = True\n SECURE_BROWSER_XSS_FILTER = True\n SECURE_SSL_REDIRECT = True\n SECURE_HSTS_SECONDS = 3600\n SECURE_HSTS_INCLUDE_SUBDOMAINS = True\n SECURE_REFERRER_POLICY = \"same-origin\"\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"polymorphic\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"guardian\",\n \"django_select2\",\n \"djangoformsetjs\",\n \"compressor\",\n \"recurrence\",\n \"statici18n\",\n \"dynamic_preferences.users.apps.UserPreferencesConfig\",\n \"crispy_forms\",\n]\n\nEPHIOS_CORE_MODULES = [\n \"ephios.core\",\n \"ephios.extra\",\n]\nINSTALLED_APPS += EPHIOS_CORE_MODULES\n\nCORE_PLUGINS = [\n \"ephios.plugins.basesignup\",\n \"ephios.plugins.pages\",\n \"ephios.plugins.guests\",\n]\nPLUGINS = copy.copy(CORE_PLUGINS)\nfor ep in metadata.entry_points().get(\"ephios.plugins\", []):\n PLUGINS.append(ep.module)\n\nINSTALLED_APPS += PLUGINS\n\nINSTALLED_APPS += [\"dynamic_preferences\"] # must come after our apps to collect preferences\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"csp.middleware.CSPMiddleware\",\n]\n\nROOT_URLCONF = \"ephios.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"ephios/templates\")],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n 
\"django.contrib.messages.context_processors.messages\",\n \"django.template.context_processors.i18n\",\n \"dynamic_preferences.processors.global_preferences\",\n \"ephios.core.context.ephios_base_context\",\n ],\n },\n },\n]\n\nLOCALE_PATHS = (os.path.join(BASE_DIR, \"ephios/locale\"),)\n\nWSGI_APPLICATION = \"ephios.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.0/ref/settings/#databases\n\nDATABASES = {\"default\": env.db_url()}\n\n# Password validation\n# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"ephios.core.utils.CustomMinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\nAUTHENTICATION_BACKENDS = (\n \"django.contrib.auth.backends.ModelBackend\",\n \"guardian.backends.ObjectPermissionBackend\",\n)\n\nAUTH_USER_MODEL = \"core.UserProfile\"\nLOGIN_REDIRECT_URL = \"/\"\nPASSWORD_RESET_TIMEOUT = 28 * 24 * 60 * 60 # seconds\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.0/topics/i18n/\n\nLANGUAGE_CODE = \"de\"\n\nTIME_ZONE = \"Europe/Berlin\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.0/howto/static-files/\n\nSTATIC_URL = env.str(\"STATIC_URL\")\nSTATIC_ROOT = env.str(\"STATIC_ROOT\")\nSTATICFILES_DIRS = (os.path.join(BASE_DIR, \"ephios/static\"),)\nSTATICFILES_FINDERS = (\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n \"compressor.finders.CompressorFinder\",\n)\nCOMPRESS_ENABLED = not DEBUG\n\n# mail configuration\nEMAIL_CONFIG = env.email_url(\"EMAIL_URL\")\nvars().update(EMAIL_CONFIG)\nDEFAULT_FROM_EMAIL = env.str(\"DEFAULT_FROM_EMAIL\")\nSERVER_EMAIL = env.str(\"SERVER_EMAIL\")\nADMINS = getaddresses([env(\"ADMINS\")])\n\n# Guardian configuration\nANONYMOUS_USER_NAME = None\nGUARDIAN_MONKEY_PATCH = False\n\n# django-select2\n# Prevent django-select from loading the select2 resources as we want to serve them locally\nSELECT2_JS = \"\"\nSELECT2_CSS = \"\"\nSELECT2_I18N_PATH = \"\"\n\n# django-debug-toolbar\nif DEBUG:\n INSTALLED_APPS.append(\"django_extensions\")\n INSTALLED_APPS.append(\"debug_toolbar\")\n MIDDLEWARE.insert(0, \"debug_toolbar.middleware.DebugToolbarMiddleware\")\n INTERNAL_IPS = env.str(\"INTERNAL_IPS\")\n\n# django-csp\n# Bootstrap requires embedded SVG files loaded via a data URI. This is not ideal, but will only be fixed in\n# bootstrap v5 or v6. 
See https://github.com/twbs/bootstrap/issues/25394 for details on the problem and\n# https://security.stackexchange.com/a/167244 on why allowing data: is considered okay\nCSP_IMG_SRC = (\"'self'\", \"data:\")\n\n# django-crispy-forms\nCRISPY_TEMPLATE_PACK = \"bootstrap4\"\n\n# django.contrib.messages\nMESSAGE_TAGS = {\n constants.DEBUG: \"alert-info\",\n constants.INFO: \"alert-info\",\n constants.SUCCESS: \"alert-success\",\n constants.WARNING: \"alert-warning\",\n constants.ERROR: \"alert-danger\",\n}\n\n# PWA\nPWA_APP_ICONS = [\n {\"src\": \"/static/ephios/img/ephios-192x.png\", \"sizes\": \"192x192\", \"purpose\": \"any maskable\"},\n {\"src\": \"/static/ephios/img/ephios-512x.png\", \"sizes\": \"512x512\", \"purpose\": \"any maskable\"},\n {\"src\": \"/static/ephios/img/ephios-1024x.png\", \"sizes\": \"1024x1024\", \"purpose\": \"any maskable\"},\n]\n", "path": "ephios/settings.py"}], "after_files": [{"content": "import copy\nimport os\nfrom email.utils import getaddresses\nfrom importlib import metadata\n\nimport environ\nfrom django.contrib.messages import constants\n\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\nenv = environ.Env()\n# for syntax see https://django-environ.readthedocs.io/en/latest/\nenviron.Env.read_env(env_file=os.path.join(BASE_DIR, \".env\"))\n\nDATA_DIR = env.str(\"DATA_DIR\", default=os.path.join(BASE_DIR, \"data\"))\nif not os.path.exists(DATA_DIR):\n os.mkdir(DATA_DIR)\n\nSECRET_KEY = env.str(\"SECRET_KEY\")\nDEBUG = env.bool(\"DEBUG\")\nALLOWED_HOSTS = env.list(\"ALLOWED_HOSTS\")\nSITE_URL = env.str(\"SITE_URL\")\nif SITE_URL.endswith(\"/\"):\n SITE_URL = SITE_URL[:-1]\n\nif not DEBUG:\n SESSION_COOKIE_SECURE = True\n CSRF_COOKIE_SECURE = True\n X_FRAME_OPTIONS = \"DENY\"\n SECURE_CONTENT_TYPE_NOSNIFF = True\n SECURE_BROWSER_XSS_FILTER = True\n SECURE_SSL_REDIRECT = True\n SECURE_HSTS_SECONDS = 3600\n SECURE_HSTS_INCLUDE_SUBDOMAINS = True\n SECURE_REFERRER_POLICY = \"same-origin\"\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"polymorphic\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"guardian\",\n \"django_select2\",\n \"djangoformsetjs\",\n \"compressor\",\n \"recurrence\",\n \"statici18n\",\n \"dynamic_preferences.users.apps.UserPreferencesConfig\",\n \"crispy_forms\",\n]\n\nEPHIOS_CORE_MODULES = [\n \"ephios.core\",\n \"ephios.extra\",\n]\nINSTALLED_APPS += EPHIOS_CORE_MODULES\n\nCORE_PLUGINS = [\n \"ephios.plugins.basesignup\",\n \"ephios.plugins.pages\",\n \"ephios.plugins.guests\",\n]\nPLUGINS = copy.copy(CORE_PLUGINS)\nfor ep in metadata.entry_points().get(\"ephios.plugins\", []):\n PLUGINS.append(ep.module)\n\nINSTALLED_APPS += PLUGINS\n\nINSTALLED_APPS += [\"dynamic_preferences\"] # must come after our apps to collect preferences\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"csp.middleware.CSPMiddleware\",\n]\n\nROOT_URLCONF = \"ephios.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"ephios/templates\")],\n \"APP_DIRS\": 
True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"django.template.context_processors.i18n\",\n \"dynamic_preferences.processors.global_preferences\",\n \"ephios.core.context.ephios_base_context\",\n ],\n },\n },\n]\n\nLOCALE_PATHS = (os.path.join(BASE_DIR, \"ephios/locale\"),)\n\nWSGI_APPLICATION = \"ephios.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.0/ref/settings/#databases\n\nDATABASES = {\"default\": env.db_url()}\n\n# Caches\nCACHES = {\"default\": env.cache_url(default=\"locmemcache://\")}\nSESSION_ENGINE = \"django.contrib.sessions.backends.cached_db\"\nDJANGO_REDIS_IGNORE_EXCEPTIONS = True\nDJANGO_REDIS_LOG_IGNORED_EXCEPTIONS = True\n\n# Password validation\n# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"ephios.core.utils.CustomMinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\nAUTHENTICATION_BACKENDS = (\n \"django.contrib.auth.backends.ModelBackend\",\n \"guardian.backends.ObjectPermissionBackend\",\n)\n\nAUTH_USER_MODEL = \"core.UserProfile\"\nLOGIN_REDIRECT_URL = \"/\"\nPASSWORD_RESET_TIMEOUT = 28 * 24 * 60 * 60 # seconds\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.0/topics/i18n/\n\nLANGUAGE_CODE = \"de\"\n\nTIME_ZONE = \"Europe/Berlin\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.0/howto/static-files/\n\nSTATIC_URL = env.str(\"STATIC_URL\")\nSTATIC_ROOT = env.str(\"STATIC_ROOT\")\nSTATICFILES_DIRS = (os.path.join(BASE_DIR, \"ephios/static\"),)\nSTATICFILES_FINDERS = (\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n \"compressor.finders.CompressorFinder\",\n)\nCOMPRESS_ENABLED = not DEBUG\n\n# mail configuration\nEMAIL_CONFIG = env.email_url(\"EMAIL_URL\")\nvars().update(EMAIL_CONFIG)\nDEFAULT_FROM_EMAIL = env.str(\"DEFAULT_FROM_EMAIL\")\nSERVER_EMAIL = env.str(\"SERVER_EMAIL\")\nADMINS = getaddresses([env(\"ADMINS\")])\n\n# Guardian configuration\nANONYMOUS_USER_NAME = None\nGUARDIAN_MONKEY_PATCH = False\n\n# django-select2\n# Prevent django-select from loading the select2 resources as we want to serve them locally\nSELECT2_JS = \"\"\nSELECT2_CSS = \"\"\nSELECT2_I18N_PATH = \"\"\n\n# django-debug-toolbar\nif DEBUG:\n INSTALLED_APPS.append(\"django_extensions\")\n INSTALLED_APPS.append(\"debug_toolbar\")\n MIDDLEWARE.insert(0, \"debug_toolbar.middleware.DebugToolbarMiddleware\")\n INTERNAL_IPS = env.str(\"INTERNAL_IPS\")\n\n# django-csp\n# Bootstrap requires embedded SVG files loaded via a data URI. This is not ideal, but will only be fixed in\n# bootstrap v5 or v6. 
See https://github.com/twbs/bootstrap/issues/25394 for details on the problem and\n# https://security.stackexchange.com/a/167244 on why allowing data: is considered okay\nCSP_IMG_SRC = (\"'self'\", \"data:\")\n\n# django-crispy-forms\nCRISPY_TEMPLATE_PACK = \"bootstrap4\"\n\n# django.contrib.messages\nMESSAGE_TAGS = {\n constants.DEBUG: \"alert-info\",\n constants.INFO: \"alert-info\",\n constants.SUCCESS: \"alert-success\",\n constants.WARNING: \"alert-warning\",\n constants.ERROR: \"alert-danger\",\n}\n\n# PWA\nPWA_APP_ICONS = [\n {\"src\": \"/static/ephios/img/ephios-192x.png\", \"sizes\": \"192x192\", \"purpose\": \"any maskable\"},\n {\"src\": \"/static/ephios/img/ephios-512x.png\", \"sizes\": \"512x512\", \"purpose\": \"any maskable\"},\n {\"src\": \"/static/ephios/img/ephios-1024x.png\", \"sizes\": \"1024x1024\", \"purpose\": \"any maskable\"},\n]\n", "path": "ephios/settings.py"}]} | 2,440 | 141 |
gh_patches_debug_32353 | rasdani/github-patches | git_diff | pypi__warehouse-6342 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Change username & API token prefix, to make Travis auth easier
Just wanted to share a potential pain point for folks using the new PyPI API tokens for authentication via a travis.yaml. The @token username needs to be wrapped in quotes and the `:` after `pypi` needs to be escaped to work, otherwise you hit a 403 error (thanks for the help tracking this down @ewdurbin).
If you're using the environment variables through Travis' UI, the following works:
```
deploy:
provider: pypi
user: "@token" # quotes required for travis
# server: https://test.pypi.org/legacy/ # uncomment to do a test deploy
password:
secure: $PASSWORD # stored in travis env var, with `:` after `pypi` escaped (pypi\:)
on:
branch: master
skip_existing: true
```
If you're encrypting the token via the command line, you can just wrap your token in quotes:
```travis encrypt "<your-pypi_token>" --add deploy.password [--com]```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/macaroons/services.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import datetime
14 import json
15 import uuid
16
17 import pymacaroons
18
19 from sqlalchemy.orm import joinedload
20 from sqlalchemy.orm.exc import NoResultFound
21 from zope.interface import implementer
22
23 from warehouse.accounts.models import User
24 from warehouse.macaroons.caveats import InvalidMacaroon, Verifier
25 from warehouse.macaroons.interfaces import IMacaroonService
26 from warehouse.macaroons.models import Macaroon
27
28
29 @implementer(IMacaroonService)
30 class DatabaseMacaroonService:
31 def __init__(self, db_session):
32 self.db = db_session
33
34 def _extract_raw_macaroon(self, raw_macaroon):
35 """
36 Returns the base64-encoded macaroon component of a PyPI macaroon,
37 dropping the prefix.
38
39 Returns None if the macaroon is None, has no prefix, or has the
40 wrong prefix.
41 """
42 if raw_macaroon is None:
43 return None
44
45 try:
46 prefix, raw_macaroon = raw_macaroon.split(":", 1)
47 except ValueError:
48 return None
49
50 if prefix != "pypi":
51 return None
52
53 return raw_macaroon
54
55 def find_macaroon(self, macaroon_id):
56 """
57 Returns a macaroon model from the DB by its identifier.
58 Returns None if no macaroon has the given ID.
59 """
60 try:
61 dm = (
62 self.db.query(Macaroon)
63 .options(joinedload("user"))
64 .filter(Macaroon.id == uuid.UUID(macaroon_id))
65 .one()
66 )
67 except NoResultFound:
68 return None
69
70 return dm
71
72 def find_userid(self, raw_macaroon):
73 """
74 Returns the id of the user associated with the given raw (serialized)
75 macaroon.
76 """
77 raw_macaroon = self._extract_raw_macaroon(raw_macaroon)
78 if raw_macaroon is None:
79 return None
80
81 m = pymacaroons.Macaroon.deserialize(raw_macaroon)
82 dm = self.find_macaroon(m.identifier.decode())
83
84 if dm is None:
85 return None
86
87 return dm.user.id
88
89 def verify(self, raw_macaroon, context, principals, permission):
90 """
91 Returns True if the given raw (serialized) macaroon is
92 valid for the context, principals, and requested permission.
93
94 Raises InvalidMacaroon if the macaroon is not valid.
95 """
96 raw_macaroon = self._extract_raw_macaroon(raw_macaroon)
97 if raw_macaroon is None:
98 raise InvalidMacaroon("malformed or nonexistent macaroon")
99
100 m = pymacaroons.Macaroon.deserialize(raw_macaroon)
101 dm = self.find_macaroon(m.identifier.decode())
102
103 if dm is None:
104 raise InvalidMacaroon("deleted or nonexistent macaroon")
105
106 verifier = Verifier(m, context, principals, permission)
107 if verifier.verify(dm.key):
108 dm.last_used = datetime.datetime.now()
109 return True
110
111 raise InvalidMacaroon("invalid macaroon")
112
113 def create_macaroon(self, location, user_id, description, caveats):
114 """
115 Returns a tuple of a new raw (serialized) macaroon and its DB model.
116 The description provided is not embedded into the macaroon, only stored
117 in the DB model.
118 """
119 user = self.db.query(User).filter(User.id == user_id).one()
120
121 dm = Macaroon(user=user, description=description, caveats=caveats)
122 self.db.add(dm)
123 self.db.flush()
124
125 m = pymacaroons.Macaroon(
126 location=location,
127 identifier=str(dm.id),
128 key=dm.key,
129 version=pymacaroons.MACAROON_V2,
130 )
131 m.add_first_party_caveat(json.dumps(caveats))
132 serialized_macaroon = f"pypi:{m.serialize()}"
133 return serialized_macaroon, dm
134
135 def delete_macaroon(self, macaroon_id):
136 """
137 Deletes a macaroon from the DB by its identifier.
138 """
139 dm = self.find_macaroon(macaroon_id)
140 self.db.delete(dm)
141 self.db.flush()
142
143 def get_macaroon_by_description(self, user_id, description):
144 """
145 Returns a macaroon model from the DB with the given description,
146 if one exists for the given user.
147
148 Returns None if the user doesn't have a macaroon with this description.
149 """
150 try:
151 dm = (
152 self.db.query(Macaroon)
153 .options(joinedload("user"))
154 .filter(Macaroon.description == description)
155 .filter(Macaroon.user_id == user_id)
156 .one()
157 )
158 except NoResultFound:
159 return None
160
161 return dm
162
163
164 def database_macaroon_factory(context, request):
165 return DatabaseMacaroonService(request.db)
166
```
Path: `warehouse/macaroons/auth_policy.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import base64
14
15 from pyramid.authentication import CallbackAuthenticationPolicy
16 from pyramid.interfaces import IAuthenticationPolicy, IAuthorizationPolicy
17 from pyramid.security import Denied
18 from pyramid.threadlocal import get_current_request
19 from zope.interface import implementer
20
21 from warehouse.cache.http import add_vary_callback
22 from warehouse.macaroons.interfaces import IMacaroonService
23 from warehouse.macaroons.services import InvalidMacaroon
24
25
26 def _extract_basic_macaroon(auth):
27 """
28 A helper function for extracting a macaroon from a
29 HTTP Basic Authentication-style header.
30
31 Returns None if the header doesn't contain a structurally
32 valid macaroon, or the candidate (not yet verified) macaroon
33 in a serialized form.
34 """
35 try:
36 authorization = base64.b64decode(auth).decode()
37 auth_method, _, auth = authorization.partition(":")
38 except ValueError:
39 return None
40
41 if auth_method != "@token":
42 return None
43
44 return auth
45
46
47 def _extract_http_macaroon(request):
48 """
49 A helper function for the extraction of HTTP Macaroon from a given request.
50 Returns either a None if no macaroon could be found, or the string
51 that represents our serialized macaroon.
52 """
53 authorization = request.headers.get("Authorization")
54 if not authorization:
55 return None
56
57 try:
58 auth_method, auth = authorization.split(" ", 1)
59 except ValueError:
60 return None
61
62 if auth_method.lower() == "basic":
63 return _extract_basic_macaroon(auth)
64 elif auth_method.lower() == "token":
65 return auth
66
67 return None
68
69
70 @implementer(IAuthenticationPolicy)
71 class MacaroonAuthenticationPolicy(CallbackAuthenticationPolicy):
72 def __init__(self, callback=None):
73 self.callback = callback
74
75 def unauthenticated_userid(self, request):
76 # If we're calling into this API on a request, then we want to register
77 # a callback which will ensure that the response varies based on the
78 # Authorization header.
79 request.add_response_callback(add_vary_callback("Authorization"))
80
81 # We need to extract our Macaroon from the request.
82 macaroon = _extract_http_macaroon(request)
83 if macaroon is None:
84 return None
85
86 # Check to see if our Macaroon exists in the database, and if so
87 # fetch the user that is associated with it.
88 macaroon_service = request.find_service(IMacaroonService, context=None)
89 userid = macaroon_service.find_userid(macaroon)
90 if userid is not None:
91 return str(userid)
92
93 def remember(self, request, userid, **kw):
94 # This is a NO-OP because our Macaroon header policy doesn't allow
95 # the ability for authentication to "remember" the user id. This
96 # assumes it has been configured in clients somewhere out of band.
97 return []
98
99 def forget(self, request):
100 # This is a NO-OP because our Macaroon header policy doesn't allow
101 # the ability for authentication to "forget" the user id. This
102 # assumes it has been configured in clients somewhere out of band.
103 return []
104
105
106 @implementer(IAuthorizationPolicy)
107 class MacaroonAuthorizationPolicy:
108 def __init__(self, policy):
109 self.policy = policy
110
111 def permits(self, context, principals, permission):
112 # The Pyramid API doesn't let us access the request here, so we have to pull it
113 # out of the thread local instead.
114 # TODO: Work with Pyramid devs to figure out if there is a better way to support
116         # the workflow we are using here or not.
116 request = get_current_request()
117
118 # Our request could possibly be a None, if there isn't an active request, in
119 # that case we're going to always deny, because without a request, we can't
120 # determine if this request is authorized or not.
121 if request is None:
122 return Denied("There was no active request.")
123
124 # Re-extract our Macaroon from the request, it sucks to have to do this work
125 # twice, but I believe it is inevitable unless we pass the Macaroon back as
126 # a principal-- which doesn't seem to be the right fit for it.
127 macaroon = _extract_http_macaroon(request)
128
129 # This logic will only happen on requests that are being authenticated with
130 # Macaroons. Any other request will just fall back to the standard Authorization
131 # policy.
132 if macaroon is not None:
133 macaroon_service = request.find_service(IMacaroonService, context=None)
134
135 try:
136 macaroon_service.verify(macaroon, context, principals, permission)
137 except InvalidMacaroon as exc:
138 return Denied(f"The supplied token was invalid: {str(exc)!r}")
139
140 # If our Macaroon is verified, then we'll pass this request to our underlying
141 # Authorization policy, so it can handle its own authorization logic on
143         # the principal.
143 return self.policy.permits(context, principals, permission)
144
145 def principals_allowed_by_permission(self, context, permission):
146 # We just dispatch this, because Macaroons don't restrict what principals are
147 # allowed by a particular permission, they just restrict specific requests
148 # to not have that permission.
149 return self.policy.principals_allowed_by_permission(context, permission)
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/macaroons/auth_policy.py b/warehouse/macaroons/auth_policy.py
--- a/warehouse/macaroons/auth_policy.py
+++ b/warehouse/macaroons/auth_policy.py
@@ -38,7 +38,8 @@
except ValueError:
return None
- if auth_method != "@token":
+ # TODO: Remove @token as an acceptable token username (GH-6345)
+ if auth_method != "@token" and auth_method != "__token__":
return None
return auth
diff --git a/warehouse/macaroons/services.py b/warehouse/macaroons/services.py
--- a/warehouse/macaroons/services.py
+++ b/warehouse/macaroons/services.py
@@ -31,7 +31,7 @@
def __init__(self, db_session):
self.db = db_session
- def _extract_raw_macaroon(self, raw_macaroon):
+ def _extract_raw_macaroon(self, prefixed_macaroon):
"""
Returns the base64-encoded macaroon component of a PyPI macaroon,
dropping the prefix.
@@ -39,13 +39,13 @@
Returns None if the macaroon is None, has no prefix, or has the
wrong prefix.
"""
- if raw_macaroon is None:
+ if prefixed_macaroon is None:
return None
- try:
- prefix, raw_macaroon = raw_macaroon.split(":", 1)
- except ValueError:
- return None
+ prefix, split, raw_macaroon = prefixed_macaroon.partition("-")
+ # TODO: Remove ':' as an acceptable delimiter for tokens (GH-6345)
+ if prefix != "pypi" or not split:
+ prefix, _, raw_macaroon = prefixed_macaroon.partition(":")
if prefix != "pypi":
return None
@@ -129,7 +129,7 @@
version=pymacaroons.MACAROON_V2,
)
m.add_first_party_caveat(json.dumps(caveats))
- serialized_macaroon = f"pypi:{m.serialize()}"
+ serialized_macaroon = f"pypi-{m.serialize()}"
return serialized_macaroon, dm
def delete_macaroon(self, macaroon_id):
| {"golden_diff": "diff --git a/warehouse/macaroons/auth_policy.py b/warehouse/macaroons/auth_policy.py\n--- a/warehouse/macaroons/auth_policy.py\n+++ b/warehouse/macaroons/auth_policy.py\n@@ -38,7 +38,8 @@\n except ValueError:\n return None\n \n- if auth_method != \"@token\":\n+ # TODO: Remove @token as an acceptable token username (GH-6345)\n+ if auth_method != \"@token\" and auth_method != \"__token__\":\n return None\n \n return auth\ndiff --git a/warehouse/macaroons/services.py b/warehouse/macaroons/services.py\n--- a/warehouse/macaroons/services.py\n+++ b/warehouse/macaroons/services.py\n@@ -31,7 +31,7 @@\n def __init__(self, db_session):\n self.db = db_session\n \n- def _extract_raw_macaroon(self, raw_macaroon):\n+ def _extract_raw_macaroon(self, prefixed_macaroon):\n \"\"\"\n Returns the base64-encoded macaroon component of a PyPI macaroon,\n dropping the prefix.\n@@ -39,13 +39,13 @@\n Returns None if the macaroon is None, has no prefix, or has the\n wrong prefix.\n \"\"\"\n- if raw_macaroon is None:\n+ if prefixed_macaroon is None:\n return None\n \n- try:\n- prefix, raw_macaroon = raw_macaroon.split(\":\", 1)\n- except ValueError:\n- return None\n+ prefix, split, raw_macaroon = prefixed_macaroon.partition(\"-\")\n+ # TODO: Remove ':' as an acceptable delimiter for tokens (GH-6345)\n+ if prefix != \"pypi\" or not split:\n+ prefix, _, raw_macaroon = prefixed_macaroon.partition(\":\")\n \n if prefix != \"pypi\":\n return None\n@@ -129,7 +129,7 @@\n version=pymacaroons.MACAROON_V2,\n )\n m.add_first_party_caveat(json.dumps(caveats))\n- serialized_macaroon = f\"pypi:{m.serialize()}\"\n+ serialized_macaroon = f\"pypi-{m.serialize()}\"\n return serialized_macaroon, dm\n \n def delete_macaroon(self, macaroon_id):\n", "issue": "Change username & API token prefix, to make Travis auth easier\nJust wanted to share a potential pain point for folks using the new PyPI API tokens for authentication via a travis.yaml. 
The @token username needs to be wrapped in quotes and the `:` after `pypi` needs to be escaped to work, otherwise you hit a 403 error (thanks for the help tracking this down @ewdurbin).\r\n\r\nIf you're using the environment variables through Travis' UI, the following works:\r\n```\r\ndeploy:\r\n provider: pypi\r\n user: \"@token\" # quotes required for travis\r\n # server: https://test.pypi.org/legacy/ # uncomment to do a test deploy\r\n password:\r\n secure: $PASSWORD # stored in travis env var, with `:` after `pypi` escaped (pypi\\:)\r\n on:\r\n branch: master\r\n skip_existing: true\r\n```\r\n\r\nIf you're encrypting the token via the command line, you can just wrap your token in quotes:\r\n```travis encrypt \"<your-pypi_token>\" --add deploy.password [--com]```\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport datetime\nimport json\nimport uuid\n\nimport pymacaroons\n\nfrom sqlalchemy.orm import joinedload\nfrom sqlalchemy.orm.exc import NoResultFound\nfrom zope.interface import implementer\n\nfrom warehouse.accounts.models import User\nfrom warehouse.macaroons.caveats import InvalidMacaroon, Verifier\nfrom warehouse.macaroons.interfaces import IMacaroonService\nfrom warehouse.macaroons.models import Macaroon\n\n\n@implementer(IMacaroonService)\nclass DatabaseMacaroonService:\n def __init__(self, db_session):\n self.db = db_session\n\n def _extract_raw_macaroon(self, raw_macaroon):\n \"\"\"\n Returns the base64-encoded macaroon component of a PyPI macaroon,\n dropping the prefix.\n\n Returns None if the macaroon is None, has no prefix, or has the\n wrong prefix.\n \"\"\"\n if raw_macaroon is None:\n return None\n\n try:\n prefix, raw_macaroon = raw_macaroon.split(\":\", 1)\n except ValueError:\n return None\n\n if prefix != \"pypi\":\n return None\n\n return raw_macaroon\n\n def find_macaroon(self, macaroon_id):\n \"\"\"\n Returns a macaroon model from the DB by its identifier.\n Returns None if no macaroon has the given ID.\n \"\"\"\n try:\n dm = (\n self.db.query(Macaroon)\n .options(joinedload(\"user\"))\n .filter(Macaroon.id == uuid.UUID(macaroon_id))\n .one()\n )\n except NoResultFound:\n return None\n\n return dm\n\n def find_userid(self, raw_macaroon):\n \"\"\"\n Returns the id of the user associated with the given raw (serialized)\n macaroon.\n \"\"\"\n raw_macaroon = self._extract_raw_macaroon(raw_macaroon)\n if raw_macaroon is None:\n return None\n\n m = pymacaroons.Macaroon.deserialize(raw_macaroon)\n dm = self.find_macaroon(m.identifier.decode())\n\n if dm is None:\n return None\n\n return dm.user.id\n\n def verify(self, raw_macaroon, context, principals, permission):\n \"\"\"\n Returns True if the given raw (serialized) macaroon is\n valid for the context, principals, and requested permission.\n\n Raises InvalidMacaroon if the macaroon is not valid.\n \"\"\"\n raw_macaroon = self._extract_raw_macaroon(raw_macaroon)\n if raw_macaroon is None:\n raise InvalidMacaroon(\"malformed or nonexistent macaroon\")\n\n m = 
pymacaroons.Macaroon.deserialize(raw_macaroon)\n dm = self.find_macaroon(m.identifier.decode())\n\n if dm is None:\n raise InvalidMacaroon(\"deleted or nonexistent macaroon\")\n\n verifier = Verifier(m, context, principals, permission)\n if verifier.verify(dm.key):\n dm.last_used = datetime.datetime.now()\n return True\n\n raise InvalidMacaroon(\"invalid macaroon\")\n\n def create_macaroon(self, location, user_id, description, caveats):\n \"\"\"\n Returns a tuple of a new raw (serialized) macaroon and its DB model.\n The description provided is not embedded into the macaroon, only stored\n in the DB model.\n \"\"\"\n user = self.db.query(User).filter(User.id == user_id).one()\n\n dm = Macaroon(user=user, description=description, caveats=caveats)\n self.db.add(dm)\n self.db.flush()\n\n m = pymacaroons.Macaroon(\n location=location,\n identifier=str(dm.id),\n key=dm.key,\n version=pymacaroons.MACAROON_V2,\n )\n m.add_first_party_caveat(json.dumps(caveats))\n serialized_macaroon = f\"pypi:{m.serialize()}\"\n return serialized_macaroon, dm\n\n def delete_macaroon(self, macaroon_id):\n \"\"\"\n Deletes a macaroon from the DB by its identifier.\n \"\"\"\n dm = self.find_macaroon(macaroon_id)\n self.db.delete(dm)\n self.db.flush()\n\n def get_macaroon_by_description(self, user_id, description):\n \"\"\"\n Returns a macaroon model from the DB with the given description,\n if one exists for the given user.\n\n Returns None if the user doesn't have a macaroon with this description.\n \"\"\"\n try:\n dm = (\n self.db.query(Macaroon)\n .options(joinedload(\"user\"))\n .filter(Macaroon.description == description)\n .filter(Macaroon.user_id == user_id)\n .one()\n )\n except NoResultFound:\n return None\n\n return dm\n\n\ndef database_macaroon_factory(context, request):\n return DatabaseMacaroonService(request.db)\n", "path": "warehouse/macaroons/services.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport base64\n\nfrom pyramid.authentication import CallbackAuthenticationPolicy\nfrom pyramid.interfaces import IAuthenticationPolicy, IAuthorizationPolicy\nfrom pyramid.security import Denied\nfrom pyramid.threadlocal import get_current_request\nfrom zope.interface import implementer\n\nfrom warehouse.cache.http import add_vary_callback\nfrom warehouse.macaroons.interfaces import IMacaroonService\nfrom warehouse.macaroons.services import InvalidMacaroon\n\n\ndef _extract_basic_macaroon(auth):\n \"\"\"\n A helper function for extracting a macaroon from a\n HTTP Basic Authentication-style header.\n\n Returns None if the header doesn't contain a structurally\n valid macaroon, or the candidate (not yet verified) macaroon\n in a serialized form.\n \"\"\"\n try:\n authorization = base64.b64decode(auth).decode()\n auth_method, _, auth = authorization.partition(\":\")\n except ValueError:\n return None\n\n if auth_method != \"@token\":\n return None\n\n return auth\n\n\ndef _extract_http_macaroon(request):\n \"\"\"\n A helper function for the extraction of HTTP Macaroon from a given request.\n 
Returns either a None if no macaroon could be found, or the string\n that represents our serialized macaroon.\n \"\"\"\n authorization = request.headers.get(\"Authorization\")\n if not authorization:\n return None\n\n try:\n auth_method, auth = authorization.split(\" \", 1)\n except ValueError:\n return None\n\n if auth_method.lower() == \"basic\":\n return _extract_basic_macaroon(auth)\n elif auth_method.lower() == \"token\":\n return auth\n\n return None\n\n\n@implementer(IAuthenticationPolicy)\nclass MacaroonAuthenticationPolicy(CallbackAuthenticationPolicy):\n def __init__(self, callback=None):\n self.callback = callback\n\n def unauthenticated_userid(self, request):\n # If we're calling into this API on a request, then we want to register\n # a callback which will ensure that the response varies based on the\n # Authorization header.\n request.add_response_callback(add_vary_callback(\"Authorization\"))\n\n # We need to extract our Macaroon from the request.\n macaroon = _extract_http_macaroon(request)\n if macaroon is None:\n return None\n\n # Check to see if our Macaroon exists in the database, and if so\n # fetch the user that is associated with it.\n macaroon_service = request.find_service(IMacaroonService, context=None)\n userid = macaroon_service.find_userid(macaroon)\n if userid is not None:\n return str(userid)\n\n def remember(self, request, userid, **kw):\n # This is a NO-OP because our Macaroon header policy doesn't allow\n # the ability for authentication to \"remember\" the user id. This\n # assumes it has been configured in clients somewhere out of band.\n return []\n\n def forget(self, request):\n # This is a NO-OP because our Macaroon header policy doesn't allow\n # the ability for authentication to \"forget\" the user id. This\n # assumes it has been configured in clients somewhere out of band.\n return []\n\n\n@implementer(IAuthorizationPolicy)\nclass MacaroonAuthorizationPolicy:\n def __init__(self, policy):\n self.policy = policy\n\n def permits(self, context, principals, permission):\n # The Pyramid API doesn't let us access the request here, so we have to pull it\n # out of the thread local instead.\n # TODO: Work with Pyramid devs to figure out if there is a better way to support\n # the worklow we are using here or not.\n request = get_current_request()\n\n # Our request could possibly be a None, if there isn't an active request, in\n # that case we're going to always deny, because without a request, we can't\n # determine if this request is authorized or not.\n if request is None:\n return Denied(\"There was no active request.\")\n\n # Re-extract our Macaroon from the request, it sucks to have to do this work\n # twice, but I believe it is inevitable unless we pass the Macaroon back as\n # a principal-- which doesn't seem to be the right fit for it.\n macaroon = _extract_http_macaroon(request)\n\n # This logic will only happen on requests that are being authenticated with\n # Macaroons. 
Any other request will just fall back to the standard Authorization\n # policy.\n if macaroon is not None:\n macaroon_service = request.find_service(IMacaroonService, context=None)\n\n try:\n macaroon_service.verify(macaroon, context, principals, permission)\n except InvalidMacaroon as exc:\n return Denied(f\"The supplied token was invalid: {str(exc)!r}\")\n\n # If our Macaroon is verified, then we'll pass this request to our underlying\n # Authorization policy, so it can handle its own authorization logic on\n # the prinicpal.\n return self.policy.permits(context, principals, permission)\n\n def principals_allowed_by_permission(self, context, permission):\n # We just dispatch this, because Macaroons don't restrict what principals are\n # allowed by a particular permission, they just restrict specific requests\n # to not have that permission.\n return self.policy.principals_allowed_by_permission(context, permission)\n", "path": "warehouse/macaroons/auth_policy.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport datetime\nimport json\nimport uuid\n\nimport pymacaroons\n\nfrom sqlalchemy.orm import joinedload\nfrom sqlalchemy.orm.exc import NoResultFound\nfrom zope.interface import implementer\n\nfrom warehouse.accounts.models import User\nfrom warehouse.macaroons.caveats import InvalidMacaroon, Verifier\nfrom warehouse.macaroons.interfaces import IMacaroonService\nfrom warehouse.macaroons.models import Macaroon\n\n\n@implementer(IMacaroonService)\nclass DatabaseMacaroonService:\n def __init__(self, db_session):\n self.db = db_session\n\n def _extract_raw_macaroon(self, prefixed_macaroon):\n \"\"\"\n Returns the base64-encoded macaroon component of a PyPI macaroon,\n dropping the prefix.\n\n Returns None if the macaroon is None, has no prefix, or has the\n wrong prefix.\n \"\"\"\n if prefixed_macaroon is None:\n return None\n\n prefix, split, raw_macaroon = prefixed_macaroon.partition(\"-\")\n # TODO: Remove ':' as an acceptable delimiter for tokens (GH-6345)\n if prefix != \"pypi\" or not split:\n prefix, _, raw_macaroon = prefixed_macaroon.partition(\":\")\n\n if prefix != \"pypi\":\n return None\n\n return raw_macaroon\n\n def find_macaroon(self, macaroon_id):\n \"\"\"\n Returns a macaroon model from the DB by its identifier.\n Returns None if no macaroon has the given ID.\n \"\"\"\n try:\n dm = (\n self.db.query(Macaroon)\n .options(joinedload(\"user\"))\n .filter(Macaroon.id == uuid.UUID(macaroon_id))\n .one()\n )\n except NoResultFound:\n return None\n\n return dm\n\n def find_userid(self, raw_macaroon):\n \"\"\"\n Returns the id of the user associated with the given raw (serialized)\n macaroon.\n \"\"\"\n raw_macaroon = self._extract_raw_macaroon(raw_macaroon)\n if raw_macaroon is None:\n return None\n\n m = pymacaroons.Macaroon.deserialize(raw_macaroon)\n dm = self.find_macaroon(m.identifier.decode())\n\n if dm is None:\n return None\n\n return dm.user.id\n\n def verify(self, raw_macaroon, context, principals, permission):\n \"\"\"\n Returns True 
if the given raw (serialized) macaroon is\n valid for the context, principals, and requested permission.\n\n Raises InvalidMacaroon if the macaroon is not valid.\n \"\"\"\n raw_macaroon = self._extract_raw_macaroon(raw_macaroon)\n if raw_macaroon is None:\n raise InvalidMacaroon(\"malformed or nonexistent macaroon\")\n\n m = pymacaroons.Macaroon.deserialize(raw_macaroon)\n dm = self.find_macaroon(m.identifier.decode())\n\n if dm is None:\n raise InvalidMacaroon(\"deleted or nonexistent macaroon\")\n\n verifier = Verifier(m, context, principals, permission)\n if verifier.verify(dm.key):\n dm.last_used = datetime.datetime.now()\n return True\n\n raise InvalidMacaroon(\"invalid macaroon\")\n\n def create_macaroon(self, location, user_id, description, caveats):\n \"\"\"\n Returns a tuple of a new raw (serialized) macaroon and its DB model.\n The description provided is not embedded into the macaroon, only stored\n in the DB model.\n \"\"\"\n user = self.db.query(User).filter(User.id == user_id).one()\n\n dm = Macaroon(user=user, description=description, caveats=caveats)\n self.db.add(dm)\n self.db.flush()\n\n m = pymacaroons.Macaroon(\n location=location,\n identifier=str(dm.id),\n key=dm.key,\n version=pymacaroons.MACAROON_V2,\n )\n m.add_first_party_caveat(json.dumps(caveats))\n serialized_macaroon = f\"pypi-{m.serialize()}\"\n return serialized_macaroon, dm\n\n def delete_macaroon(self, macaroon_id):\n \"\"\"\n Deletes a macaroon from the DB by its identifier.\n \"\"\"\n dm = self.find_macaroon(macaroon_id)\n self.db.delete(dm)\n self.db.flush()\n\n def get_macaroon_by_description(self, user_id, description):\n \"\"\"\n Returns a macaroon model from the DB with the given description,\n if one exists for the given user.\n\n Returns None if the user doesn't have a macaroon with this description.\n \"\"\"\n try:\n dm = (\n self.db.query(Macaroon)\n .options(joinedload(\"user\"))\n .filter(Macaroon.description == description)\n .filter(Macaroon.user_id == user_id)\n .one()\n )\n except NoResultFound:\n return None\n\n return dm\n\n\ndef database_macaroon_factory(context, request):\n return DatabaseMacaroonService(request.db)\n", "path": "warehouse/macaroons/services.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport base64\n\nfrom pyramid.authentication import CallbackAuthenticationPolicy\nfrom pyramid.interfaces import IAuthenticationPolicy, IAuthorizationPolicy\nfrom pyramid.security import Denied\nfrom pyramid.threadlocal import get_current_request\nfrom zope.interface import implementer\n\nfrom warehouse.cache.http import add_vary_callback\nfrom warehouse.macaroons.interfaces import IMacaroonService\nfrom warehouse.macaroons.services import InvalidMacaroon\n\n\ndef _extract_basic_macaroon(auth):\n \"\"\"\n A helper function for extracting a macaroon from a\n HTTP Basic Authentication-style header.\n\n Returns None if the header doesn't contain a structurally\n valid macaroon, or the candidate (not yet verified) macaroon\n in a serialized form.\n \"\"\"\n try:\n 
authorization = base64.b64decode(auth).decode()\n auth_method, _, auth = authorization.partition(\":\")\n except ValueError:\n return None\n\n # TODO: Remove @token as an acceptable token username (GH-6345)\n if auth_method != \"@token\" and auth_method != \"__token__\":\n return None\n\n return auth\n\n\ndef _extract_http_macaroon(request):\n \"\"\"\n A helper function for the extraction of HTTP Macaroon from a given request.\n Returns either a None if no macaroon could be found, or the string\n that represents our serialized macaroon.\n \"\"\"\n authorization = request.headers.get(\"Authorization\")\n if not authorization:\n return None\n\n try:\n auth_method, auth = authorization.split(\" \", 1)\n except ValueError:\n return None\n\n if auth_method.lower() == \"basic\":\n return _extract_basic_macaroon(auth)\n elif auth_method.lower() == \"token\":\n return auth\n\n return None\n\n\n@implementer(IAuthenticationPolicy)\nclass MacaroonAuthenticationPolicy(CallbackAuthenticationPolicy):\n def __init__(self, callback=None):\n self.callback = callback\n\n def unauthenticated_userid(self, request):\n # If we're calling into this API on a request, then we want to register\n # a callback which will ensure that the response varies based on the\n # Authorization header.\n request.add_response_callback(add_vary_callback(\"Authorization\"))\n\n # We need to extract our Macaroon from the request.\n macaroon = _extract_http_macaroon(request)\n if macaroon is None:\n return None\n\n # Check to see if our Macaroon exists in the database, and if so\n # fetch the user that is associated with it.\n macaroon_service = request.find_service(IMacaroonService, context=None)\n userid = macaroon_service.find_userid(macaroon)\n if userid is not None:\n return str(userid)\n\n def remember(self, request, userid, **kw):\n # This is a NO-OP because our Macaroon header policy doesn't allow\n # the ability for authentication to \"remember\" the user id. This\n # assumes it has been configured in clients somewhere out of band.\n return []\n\n def forget(self, request):\n # This is a NO-OP because our Macaroon header policy doesn't allow\n # the ability for authentication to \"forget\" the user id. This\n # assumes it has been configured in clients somewhere out of band.\n return []\n\n\n@implementer(IAuthorizationPolicy)\nclass MacaroonAuthorizationPolicy:\n def __init__(self, policy):\n self.policy = policy\n\n def permits(self, context, principals, permission):\n # The Pyramid API doesn't let us access the request here, so we have to pull it\n # out of the thread local instead.\n # TODO: Work with Pyramid devs to figure out if there is a better way to support\n # the worklow we are using here or not.\n request = get_current_request()\n\n # Our request could possibly be a None, if there isn't an active request, in\n # that case we're going to always deny, because without a request, we can't\n # determine if this request is authorized or not.\n if request is None:\n return Denied(\"There was no active request.\")\n\n # Re-extract our Macaroon from the request, it sucks to have to do this work\n # twice, but I believe it is inevitable unless we pass the Macaroon back as\n # a principal-- which doesn't seem to be the right fit for it.\n macaroon = _extract_http_macaroon(request)\n\n # This logic will only happen on requests that are being authenticated with\n # Macaroons. 
Any other request will just fall back to the standard Authorization\n # policy.\n if macaroon is not None:\n macaroon_service = request.find_service(IMacaroonService, context=None)\n\n try:\n macaroon_service.verify(macaroon, context, principals, permission)\n except InvalidMacaroon as exc:\n return Denied(f\"The supplied token was invalid: {str(exc)!r}\")\n\n # If our Macaroon is verified, then we'll pass this request to our underlying\n # Authorization policy, so it can handle its own authorization logic on\n # the prinicpal.\n return self.policy.permits(context, principals, permission)\n\n def principals_allowed_by_permission(self, context, permission):\n # We just dispatch this, because Macaroons don't restrict what principals are\n # allowed by a particular permission, they just restrict specific requests\n # to not have that permission.\n return self.policy.principals_allowed_by_permission(context, permission)\n", "path": "warehouse/macaroons/auth_policy.py"}]} | 3,751 | 522 |
gh_patches_debug_30592 | rasdani/github-patches | git_diff | mne-tools__mne-python-4380 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove deprecated imp module
Currently, `mne/commands/utils.py` still uses the deprecated `imp` module, which has long been replaced with `importlib`. According to [this answer on SO](https://stackoverflow.com/a/67692/1112283), the current solution works only on Python 3.5/3.6, and there is a (deprecated) alternative for Python 3.3/3.4. All versions < 3.3 need to use `imp`.
How should this be handled in MNE?
--- END ISSUE ---
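(For context: the `importlib`-based replacement suggested in the linked answer, for Python 3.5+, looks roughly like the sketch below. The function name is illustrative, not an existing MNE helper.)

```python
from importlib.util import module_from_spec, spec_from_file_location


def load_source_module(name, path):
    # Build a module spec from a .py file location, then execute the module.
    spec = spec_from_file_location(name, path)
    mod = module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod
```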
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mne/commands/utils.py`
Content:
```
1 """Some utility functions for commands (e.g. for cmdline handling)."""
2
3 # Authors: Yaroslav Halchenko <[email protected]>
4 #
5 # License: BSD (3-clause)
6
7 import imp
8 import os
9 import re
10 from optparse import OptionParser
11
12 import mne
13
14
15 def get_optparser(cmdpath, usage=None):
16 """Create OptionParser with cmd specific settings (e.g. prog value)."""
17 command = os.path.basename(cmdpath)
18 if re.match('mne_(.*).py', command):
19 command = command[4:-3]
20 elif re.match('mne_(.*).pyc', command):
21 command = command[4:-4]
22
23 # Fetch description
24 if cmdpath.endswith('.pyc'):
25 mod = imp.load_compiled('__temp', cmdpath)
26 else:
27 mod = imp.load_source('__temp', cmdpath)
28 if mod.__doc__:
29 doc, description, epilog = mod.__doc__, None, None
30
31 doc_lines = doc.split('\n')
32 description = doc_lines[0]
33 if len(doc_lines) > 1:
34 epilog = '\n'.join(doc_lines[1:])
35
36 # monkey patch OptionParser to not wrap epilog
37 OptionParser.format_epilog = lambda self, formatter: self.epilog
38 parser = OptionParser(prog="mne %s" % command,
39 version=mne.__version__,
40 description=description,
41 epilog=epilog, usage=usage)
42
43 return parser
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mne/commands/utils.py b/mne/commands/utils.py
--- a/mne/commands/utils.py
+++ b/mne/commands/utils.py
@@ -4,7 +4,7 @@
#
# License: BSD (3-clause)
-import imp
+import sys
import os
import re
from optparse import OptionParser
@@ -12,6 +12,42 @@
import mne
+def load_module(name, path):
+ """Load module from .py/.pyc file.
+
+ Parameters
+ ----------
+ name : str
+ Name of the module.
+ path : str
+ Path to .py/.pyc file.
+
+ Returns
+ -------
+ mod : module
+ Imported module.
+ """
+ if sys.version_info < (3, 3):
+ import imp
+ if path.endswith('.pyc'):
+ return imp.load_compiled(name, path)
+ else:
+ return imp.load_source(name, path)
+ elif sys.version_info < (3, 5):
+ if path.endswith('.pyc'):
+ from importlib.machinery import SourcelessFileLoader
+ return SourcelessFileLoader(name, path).load_module()
+ else:
+ from importlib.machinery import SourceFileLoader
+ return SourceFileLoader(name, path).load_module()
+ else: # Python 3.5 or greater
+ from importlib.util import spec_from_file_location, module_from_spec
+ spec = spec_from_file_location(name, path)
+ mod = module_from_spec(spec)
+ spec.loader.exec_module(mod)
+ return mod
+
+
def get_optparser(cmdpath, usage=None):
"""Create OptionParser with cmd specific settings (e.g. prog value)."""
command = os.path.basename(cmdpath)
@@ -21,10 +57,7 @@
command = command[4:-4]
# Fetch description
- if cmdpath.endswith('.pyc'):
- mod = imp.load_compiled('__temp', cmdpath)
- else:
- mod = imp.load_source('__temp', cmdpath)
+ mod = load_module('__temp', cmdpath)
if mod.__doc__:
doc, description, epilog = mod.__doc__, None, None
| {"golden_diff": "diff --git a/mne/commands/utils.py b/mne/commands/utils.py\n--- a/mne/commands/utils.py\n+++ b/mne/commands/utils.py\n@@ -4,7 +4,7 @@\n #\n # License: BSD (3-clause)\n \n-import imp\n+import sys\n import os\n import re\n from optparse import OptionParser\n@@ -12,6 +12,42 @@\n import mne\n \n \n+def load_module(name, path):\n+ \"\"\"Load module from .py/.pyc file.\n+\n+ Parameters\n+ ----------\n+ name : str\n+ Name of the module.\n+ path : str\n+ Path to .py/.pyc file.\n+\n+ Returns\n+ -------\n+ mod : module\n+ Imported module.\n+ \"\"\"\n+ if sys.version_info < (3, 3):\n+ import imp\n+ if path.endswith('.pyc'):\n+ return imp.load_compiled(name, path)\n+ else:\n+ return imp.load_source(name, path)\n+ elif sys.version_info < (3, 5):\n+ if path.endswith('.pyc'):\n+ from importlib.machinery import SourcelessFileLoader\n+ return SourcelessFileLoader(name, path).load_module()\n+ else:\n+ from importlib.machinery import SourceFileLoader\n+ return SourceFileLoader(name, path).load_module()\n+ else: # Python 3.5 or greater\n+ from importlib.util import spec_from_file_location, module_from_spec\n+ spec = spec_from_file_location(name, path)\n+ mod = module_from_spec(spec)\n+ spec.loader.exec_module(mod)\n+ return mod\n+\n+\n def get_optparser(cmdpath, usage=None):\n \"\"\"Create OptionParser with cmd specific settings (e.g. prog value).\"\"\"\n command = os.path.basename(cmdpath)\n@@ -21,10 +57,7 @@\n command = command[4:-4]\n \n # Fetch description\n- if cmdpath.endswith('.pyc'):\n- mod = imp.load_compiled('__temp', cmdpath)\n- else:\n- mod = imp.load_source('__temp', cmdpath)\n+ mod = load_module('__temp', cmdpath)\n if mod.__doc__:\n doc, description, epilog = mod.__doc__, None, None\n", "issue": "Remove deprecated imp module\nCurrently, `mne/commands/utils.py` still uses the deprecated `imp` module, which has long been replaced with `importlib`. According to [this answer on SO](https://stackoverflow.com/a/67692/1112283), the current solution works only on Python 3.5/3.6, and there is a (deprecated) alternative for Python 3.3/3.4. All versions < 3.3 need to use `imp`.\r\n\r\nHow should this be handled in MNE?\n", "before_files": [{"content": "\"\"\"Some utility functions for commands (e.g. for cmdline handling).\"\"\"\n\n# Authors: Yaroslav Halchenko <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport imp\nimport os\nimport re\nfrom optparse import OptionParser\n\nimport mne\n\n\ndef get_optparser(cmdpath, usage=None):\n \"\"\"Create OptionParser with cmd specific settings (e.g. prog value).\"\"\"\n command = os.path.basename(cmdpath)\n if re.match('mne_(.*).py', command):\n command = command[4:-3]\n elif re.match('mne_(.*).pyc', command):\n command = command[4:-4]\n\n # Fetch description\n if cmdpath.endswith('.pyc'):\n mod = imp.load_compiled('__temp', cmdpath)\n else:\n mod = imp.load_source('__temp', cmdpath)\n if mod.__doc__:\n doc, description, epilog = mod.__doc__, None, None\n\n doc_lines = doc.split('\\n')\n description = doc_lines[0]\n if len(doc_lines) > 1:\n epilog = '\\n'.join(doc_lines[1:])\n\n # monkey patch OptionParser to not wrap epilog\n OptionParser.format_epilog = lambda self, formatter: self.epilog\n parser = OptionParser(prog=\"mne %s\" % command,\n version=mne.__version__,\n description=description,\n epilog=epilog, usage=usage)\n\n return parser\n", "path": "mne/commands/utils.py"}], "after_files": [{"content": "\"\"\"Some utility functions for commands (e.g. 
for cmdline handling).\"\"\"\n\n# Authors: Yaroslav Halchenko <[email protected]>\n#\n# License: BSD (3-clause)\n\nimport sys\nimport os\nimport re\nfrom optparse import OptionParser\n\nimport mne\n\n\ndef load_module(name, path):\n \"\"\"Load module from .py/.pyc file.\n\n Parameters\n ----------\n name : str\n Name of the module.\n path : str\n Path to .py/.pyc file.\n\n Returns\n -------\n mod : module\n Imported module.\n \"\"\"\n if sys.version_info < (3, 3):\n import imp\n if path.endswith('.pyc'):\n return imp.load_compiled(name, path)\n else:\n return imp.load_source(name, path)\n elif sys.version_info < (3, 5):\n if path.endswith('.pyc'):\n from importlib.machinery import SourcelessFileLoader\n return SourcelessFileLoader(name, path).load_module()\n else:\n from importlib.machinery import SourceFileLoader\n return SourceFileLoader(name, path).load_module()\n else: # Python 3.5 or greater\n from importlib.util import spec_from_file_location, module_from_spec\n spec = spec_from_file_location(name, path)\n mod = module_from_spec(spec)\n spec.loader.exec_module(mod)\n return mod\n\n\ndef get_optparser(cmdpath, usage=None):\n \"\"\"Create OptionParser with cmd specific settings (e.g. prog value).\"\"\"\n command = os.path.basename(cmdpath)\n if re.match('mne_(.*).py', command):\n command = command[4:-3]\n elif re.match('mne_(.*).pyc', command):\n command = command[4:-4]\n\n # Fetch description\n mod = load_module('__temp', cmdpath)\n if mod.__doc__:\n doc, description, epilog = mod.__doc__, None, None\n\n doc_lines = doc.split('\\n')\n description = doc_lines[0]\n if len(doc_lines) > 1:\n epilog = '\\n'.join(doc_lines[1:])\n\n # monkey patch OptionParser to not wrap epilog\n OptionParser.format_epilog = lambda self, formatter: self.epilog\n parser = OptionParser(prog=\"mne %s\" % command,\n version=mne.__version__,\n description=description,\n epilog=epilog, usage=usage)\n\n return parser\n", "path": "mne/commands/utils.py"}]} | 788 | 512 |
gh_patches_debug_18694 | rasdani/github-patches | git_diff | sublimelsp__LSP-1371 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Diagnostic not highlighted in view when range is empty
**Describe the bug**
A diagnostic that has an empty range is not visible in the view (only in the gutter and in the diagnostics panel).
**To Reproduce**
Receive a diagnostic like:
```js
{
"diagnostics": [
{
"code": {
"target": "https://eslint.org/docs/rules/curly",
"value": "curly"
},
"message": "Expected { after 'if' condition.",
"range": {
"end": {
"character": 9,
"line": 607
},
"start": {
"character": 9,
"line": 607
}
},
"severity": 1,
"source": "eslint"
},
```
**Expected behavior**
See where on the line the problem is. Otherwise, it's difficult to know where a code action can be triggered.
**Screenshots**

**Environment (please complete the following information):**
- OS: macOS
- Sublime Text version: 4087
- LSP version: 49ea7330301078035ad56b94218a9c6edb604498
- Language servers used: eslint
--- END ISSUE ---
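(For reference, a zero-width range can be made visible by padding empty regions to one character before they are added to the view. A minimal sketch using the Sublime Text API; the helper name is illustrative.)

```python
import sublime


def widen_if_empty(region):
    # A region whose start equals its end draws nothing, so extend it by
    # one character to get a visible underline for the diagnostic.
    if region.a == region.b:
        return sublime.Region(region.a, region.a + 1)
    return region
```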
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/session_view.py`
Content:
```
1 from .core.protocol import Diagnostic
2 from .core.protocol import Request
3 from .core.sessions import Session
4 from .core.settings import userprefs
5 from .core.types import view2scope
6 from .core.typing import Any, Iterable, List, Tuple, Optional, Dict
7 from .core.views import DIAGNOSTIC_SEVERITY
8 from .core.windows import AbstractViewListener
9 from .session_buffer import SessionBuffer
10 from weakref import ref
11 from weakref import WeakValueDictionary
12 import sublime
13
14
15 class SessionView:
16 """
17 Holds state per session per view.
18 """
19
20 LANGUAGE_ID_KEY = "lsp_language"
21 SHOW_DEFINITIONS_KEY = "show_definitions"
22 HOVER_PROVIDER_KEY = "hoverProvider"
23 HOVER_PROVIDER_COUNT_KEY = "lsp_hover_provider_count"
24
25 _session_buffers = WeakValueDictionary() # type: WeakValueDictionary[Tuple[str, int], SessionBuffer]
26
27 def __init__(self, listener: AbstractViewListener, session: Session) -> None:
28 self.view = listener.view
29 self.session = session
30 self.active_requests = {} # type: Dict[int, Request]
31 settings = self.view.settings()
32 # TODO: Language ID must be UNIQUE!
33 languages = settings.get(self.LANGUAGE_ID_KEY)
34 self._language_id = ''
35 if not isinstance(languages, dict):
36 languages = {}
37 for language in session.config.languages:
38 if language.match_scope(view2scope(self.view)):
39 languages[session.config.name] = language.id
40 self._language_id = language.id
41 break
42 settings.set(self.LANGUAGE_ID_KEY, languages)
43 buffer_id = self.view.buffer_id()
44 key = (session.config.name, buffer_id)
45 session_buffer = self._session_buffers.get(key)
46 if session_buffer is None:
47 session_buffer = SessionBuffer(self, buffer_id, self._language_id)
48 self._session_buffers[key] = session_buffer
49 else:
50 session_buffer.add_session_view(self)
51 self.session_buffer = session_buffer
52 self.listener = ref(listener)
53 session.register_session_view_async(self)
54 session.config.set_view_status(self.view, "")
55 if self.session.has_capability(self.HOVER_PROVIDER_KEY):
56 self._increment_hover_count()
57
58 def __del__(self) -> None:
59 if self.session.has_capability(self.HOVER_PROVIDER_KEY):
60 self._decrement_hover_count()
61 # If the session is exiting then there's no point in sending textDocument/didClose and there's also no point
62 # in unregistering ourselves from the session.
63 if not self.session.exiting:
64 self.session.unregister_session_view_async(self)
65 self.session.config.erase_view_status(self.view)
66 settings = self.view.settings() # type: sublime.Settings
67 # TODO: Language ID must be UNIQUE!
68 languages = settings.get(self.LANGUAGE_ID_KEY)
69 if isinstance(languages, dict):
70 languages.pop(self.session.config.name, None)
71 if languages:
72 settings.set(self.LANGUAGE_ID_KEY, languages)
73 else:
74 settings.erase(self.LANGUAGE_ID_KEY)
75 for severity in range(1, len(DIAGNOSTIC_SEVERITY) + 1):
76 self.view.erase_regions(self.diagnostics_key(severity))
77
78 def _increment_hover_count(self) -> None:
79 settings = self.view.settings()
80 count = settings.get(self.HOVER_PROVIDER_COUNT_KEY, 0)
81 if isinstance(count, int):
82 count += 1
83 settings.set(self.HOVER_PROVIDER_COUNT_KEY, count)
84 settings.set(self.SHOW_DEFINITIONS_KEY, False)
85
86 def _decrement_hover_count(self) -> None:
87 settings = self.view.settings()
88 count = settings.get(self.HOVER_PROVIDER_COUNT_KEY)
89 if isinstance(count, int):
90 count -= 1
91 if count == 0:
92 settings.erase(self.HOVER_PROVIDER_COUNT_KEY)
93 settings.set(self.SHOW_DEFINITIONS_KEY, True)
94
95 def get_capability(self, capability_path: str) -> Optional[Any]:
96 return self.session_buffer.get_capability(capability_path)
97
98 def has_capability(self, capability_path: str) -> bool:
99 value = self.session_buffer.get_capability(capability_path)
100 return isinstance(value, dict) or bool(value)
101
102 def on_capability_added_async(self, capability_path: str, options: Dict[str, Any]) -> None:
103 if capability_path == self.HOVER_PROVIDER_KEY:
104 self._increment_hover_count()
105
106 def on_capability_removed_async(self, discarded: Dict[str, Any]) -> None:
107 if self.HOVER_PROVIDER_KEY in discarded:
108 self._decrement_hover_count()
109
110 def has_capability_async(self, capability_path: str) -> bool:
111 return self.session_buffer.has_capability(capability_path)
112
113 def shutdown_async(self) -> None:
114 listener = self.listener()
115 if listener:
116 listener.on_session_shutdown_async(self.session)
117
118 def diagnostics_key(self, severity: int) -> str:
119 return "lsp{}d{}".format(self.session.config.name, severity)
120
121 def present_diagnostics_async(self, flags: int) -> None:
122 data_per_severity = self.session_buffer.data_per_severity
123 for severity in reversed(range(1, len(DIAGNOSTIC_SEVERITY) + 1)):
124 key = self.diagnostics_key(severity)
125 data = data_per_severity.get(severity)
126 if data is None:
127 self.view.erase_regions(key)
128 elif ((severity <= userprefs().show_diagnostics_severity_level) and
129 (data.icon or flags != (sublime.DRAW_NO_FILL | sublime.DRAW_NO_OUTLINE))):
130 self.view.add_regions(key, data.regions, data.scope, data.icon, flags)
131 else:
132 self.view.erase_regions(key)
133 listener = self.listener()
134 if listener:
135 listener.on_diagnostics_updated_async()
136
137 def get_diagnostics_async(self) -> List[Diagnostic]:
138 return self.session_buffer.diagnostics
139
140 def on_request_started_async(self, request_id: int, request: Request) -> None:
141 self.active_requests[request_id] = request
142
143 def on_request_finished_async(self, request_id: int) -> None:
144 self.active_requests.pop(request_id, None)
145
146 def on_text_changed_async(self, changes: Iterable[sublime.TextChange]) -> None:
147 self.session_buffer.on_text_changed_async(self.view, changes)
148
149 def on_revert_async(self) -> None:
150 self.session_buffer.on_revert_async(self.view)
151
152 def on_reload_async(self) -> None:
153 self.session_buffer.on_reload_async(self.view)
154
155 def purge_changes_async(self) -> None:
156 self.session_buffer.purge_changes_async(self.view)
157
158 def on_pre_save_async(self, old_file_name: str) -> None:
159 self.session_buffer.on_pre_save_async(self.view, old_file_name)
160
161 def on_post_save_async(self) -> None:
162 self.session_buffer.on_post_save_async(self.view)
163
164 def __str__(self) -> str:
165 return '{}:{}'.format(self.session.config.name, self.view.id())
166
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugin/session_view.py b/plugin/session_view.py
--- a/plugin/session_view.py
+++ b/plugin/session_view.py
@@ -127,7 +127,15 @@
self.view.erase_regions(key)
elif ((severity <= userprefs().show_diagnostics_severity_level) and
(data.icon or flags != (sublime.DRAW_NO_FILL | sublime.DRAW_NO_OUTLINE))):
- self.view.add_regions(key, data.regions, data.scope, data.icon, flags)
+
+ def handle_same_regions(region: sublime.Region) -> sublime.Region:
+ # this allows showing diagnostics with same begin and end in the view
+ if region.a == region.b:
+ return sublime.Region(region.a, region.a + 1)
+ return region
+
+ underline_regions = list(map(handle_same_regions, data.regions))
+ self.view.add_regions(key, underline_regions, data.scope, data.icon, flags)
else:
self.view.erase_regions(key)
listener = self.listener()
| {"golden_diff": "diff --git a/plugin/session_view.py b/plugin/session_view.py\n--- a/plugin/session_view.py\n+++ b/plugin/session_view.py\n@@ -127,7 +127,15 @@\n self.view.erase_regions(key)\n elif ((severity <= userprefs().show_diagnostics_severity_level) and\n (data.icon or flags != (sublime.DRAW_NO_FILL | sublime.DRAW_NO_OUTLINE))):\n- self.view.add_regions(key, data.regions, data.scope, data.icon, flags)\n+\n+ def handle_same_regions(region: sublime.Region) -> sublime.Region:\n+ # this allows showing diagnostics with same begin and end in the view\n+ if region.a == region.b:\n+ return sublime.Region(region.a, region.a + 1)\n+ return region\n+\n+ underline_regions = list(map(handle_same_regions, data.regions))\n+ self.view.add_regions(key, underline_regions, data.scope, data.icon, flags)\n else:\n self.view.erase_regions(key)\n listener = self.listener()\n", "issue": "Diagnostic not highlighted in view when range is empty\n**Describe the bug**\r\nA diagnostic that has an empty range is not visible in the view (only in the gutter and in diagnostics panel).\r\n\r\n**To Reproduce**\r\nReceive a diagnostic like:\r\n```js\r\n{\r\n \"diagnostics\": [\r\n {\r\n \"code\": {\r\n \"target\": \"https://eslint.org/docs/rules/curly\",\r\n \"value\": \"curly\"\r\n },\r\n \"message\": \"Expected { after 'if' condition.\",\r\n \"range\": {\r\n \"end\": {\r\n \"character\": 9,\r\n \"line\": 607\r\n },\r\n \"start\": {\r\n \"character\": 9,\r\n \"line\": 607\r\n }\r\n },\r\n \"severity\": 1,\r\n \"source\": \"eslint\"\r\n },\r\n```\r\n\r\n**Expected behavior**\r\nSee where on the line the problem is. Otherwise, it's difficult to know where code action can be triggered.\r\n\r\n**Screenshots**\r\n\r\n\r\n\r\n**Environment (please complete the following information):**\r\n- OS: macOS\r\n- Sublime Text version: 4087\r\n- LSP version: 49ea7330301078035ad56b94218a9c6edb604498\r\n- Language servers used: eslint\n", "before_files": [{"content": "from .core.protocol import Diagnostic\nfrom .core.protocol import Request\nfrom .core.sessions import Session\nfrom .core.settings import userprefs\nfrom .core.types import view2scope\nfrom .core.typing import Any, Iterable, List, Tuple, Optional, Dict\nfrom .core.views import DIAGNOSTIC_SEVERITY\nfrom .core.windows import AbstractViewListener\nfrom .session_buffer import SessionBuffer\nfrom weakref import ref\nfrom weakref import WeakValueDictionary\nimport sublime\n\n\nclass SessionView:\n \"\"\"\n Holds state per session per view.\n \"\"\"\n\n LANGUAGE_ID_KEY = \"lsp_language\"\n SHOW_DEFINITIONS_KEY = \"show_definitions\"\n HOVER_PROVIDER_KEY = \"hoverProvider\"\n HOVER_PROVIDER_COUNT_KEY = \"lsp_hover_provider_count\"\n\n _session_buffers = WeakValueDictionary() # type: WeakValueDictionary[Tuple[str, int], SessionBuffer]\n\n def __init__(self, listener: AbstractViewListener, session: Session) -> None:\n self.view = listener.view\n self.session = session\n self.active_requests = {} # type: Dict[int, Request]\n settings = self.view.settings()\n # TODO: Language ID must be UNIQUE!\n languages = settings.get(self.LANGUAGE_ID_KEY)\n self._language_id = ''\n if not isinstance(languages, dict):\n languages = {}\n for language in session.config.languages:\n if language.match_scope(view2scope(self.view)):\n languages[session.config.name] = language.id\n self._language_id = language.id\n break\n settings.set(self.LANGUAGE_ID_KEY, languages)\n buffer_id = self.view.buffer_id()\n key = (session.config.name, buffer_id)\n session_buffer = self._session_buffers.get(key)\n if 
session_buffer is None:\n session_buffer = SessionBuffer(self, buffer_id, self._language_id)\n self._session_buffers[key] = session_buffer\n else:\n session_buffer.add_session_view(self)\n self.session_buffer = session_buffer\n self.listener = ref(listener)\n session.register_session_view_async(self)\n session.config.set_view_status(self.view, \"\")\n if self.session.has_capability(self.HOVER_PROVIDER_KEY):\n self._increment_hover_count()\n\n def __del__(self) -> None:\n if self.session.has_capability(self.HOVER_PROVIDER_KEY):\n self._decrement_hover_count()\n # If the session is exiting then there's no point in sending textDocument/didClose and there's also no point\n # in unregistering ourselves from the session.\n if not self.session.exiting:\n self.session.unregister_session_view_async(self)\n self.session.config.erase_view_status(self.view)\n settings = self.view.settings() # type: sublime.Settings\n # TODO: Language ID must be UNIQUE!\n languages = settings.get(self.LANGUAGE_ID_KEY)\n if isinstance(languages, dict):\n languages.pop(self.session.config.name, None)\n if languages:\n settings.set(self.LANGUAGE_ID_KEY, languages)\n else:\n settings.erase(self.LANGUAGE_ID_KEY)\n for severity in range(1, len(DIAGNOSTIC_SEVERITY) + 1):\n self.view.erase_regions(self.diagnostics_key(severity))\n\n def _increment_hover_count(self) -> None:\n settings = self.view.settings()\n count = settings.get(self.HOVER_PROVIDER_COUNT_KEY, 0)\n if isinstance(count, int):\n count += 1\n settings.set(self.HOVER_PROVIDER_COUNT_KEY, count)\n settings.set(self.SHOW_DEFINITIONS_KEY, False)\n\n def _decrement_hover_count(self) -> None:\n settings = self.view.settings()\n count = settings.get(self.HOVER_PROVIDER_COUNT_KEY)\n if isinstance(count, int):\n count -= 1\n if count == 0:\n settings.erase(self.HOVER_PROVIDER_COUNT_KEY)\n settings.set(self.SHOW_DEFINITIONS_KEY, True)\n\n def get_capability(self, capability_path: str) -> Optional[Any]:\n return self.session_buffer.get_capability(capability_path)\n\n def has_capability(self, capability_path: str) -> bool:\n value = self.session_buffer.get_capability(capability_path)\n return isinstance(value, dict) or bool(value)\n\n def on_capability_added_async(self, capability_path: str, options: Dict[str, Any]) -> None:\n if capability_path == self.HOVER_PROVIDER_KEY:\n self._increment_hover_count()\n\n def on_capability_removed_async(self, discarded: Dict[str, Any]) -> None:\n if self.HOVER_PROVIDER_KEY in discarded:\n self._decrement_hover_count()\n\n def has_capability_async(self, capability_path: str) -> bool:\n return self.session_buffer.has_capability(capability_path)\n\n def shutdown_async(self) -> None:\n listener = self.listener()\n if listener:\n listener.on_session_shutdown_async(self.session)\n\n def diagnostics_key(self, severity: int) -> str:\n return \"lsp{}d{}\".format(self.session.config.name, severity)\n\n def present_diagnostics_async(self, flags: int) -> None:\n data_per_severity = self.session_buffer.data_per_severity\n for severity in reversed(range(1, len(DIAGNOSTIC_SEVERITY) + 1)):\n key = self.diagnostics_key(severity)\n data = data_per_severity.get(severity)\n if data is None:\n self.view.erase_regions(key)\n elif ((severity <= userprefs().show_diagnostics_severity_level) and\n (data.icon or flags != (sublime.DRAW_NO_FILL | sublime.DRAW_NO_OUTLINE))):\n self.view.add_regions(key, data.regions, data.scope, data.icon, flags)\n else:\n self.view.erase_regions(key)\n listener = self.listener()\n if listener:\n 
listener.on_diagnostics_updated_async()\n\n def get_diagnostics_async(self) -> List[Diagnostic]:\n return self.session_buffer.diagnostics\n\n def on_request_started_async(self, request_id: int, request: Request) -> None:\n self.active_requests[request_id] = request\n\n def on_request_finished_async(self, request_id: int) -> None:\n self.active_requests.pop(request_id, None)\n\n def on_text_changed_async(self, changes: Iterable[sublime.TextChange]) -> None:\n self.session_buffer.on_text_changed_async(self.view, changes)\n\n def on_revert_async(self) -> None:\n self.session_buffer.on_revert_async(self.view)\n\n def on_reload_async(self) -> None:\n self.session_buffer.on_reload_async(self.view)\n\n def purge_changes_async(self) -> None:\n self.session_buffer.purge_changes_async(self.view)\n\n def on_pre_save_async(self, old_file_name: str) -> None:\n self.session_buffer.on_pre_save_async(self.view, old_file_name)\n\n def on_post_save_async(self) -> None:\n self.session_buffer.on_post_save_async(self.view)\n\n def __str__(self) -> str:\n return '{}:{}'.format(self.session.config.name, self.view.id())\n", "path": "plugin/session_view.py"}], "after_files": [{"content": "from .core.protocol import Diagnostic\nfrom .core.protocol import Request\nfrom .core.sessions import Session\nfrom .core.settings import userprefs\nfrom .core.types import view2scope\nfrom .core.typing import Any, Iterable, List, Tuple, Optional, Dict\nfrom .core.views import DIAGNOSTIC_SEVERITY\nfrom .core.windows import AbstractViewListener\nfrom .session_buffer import SessionBuffer\nfrom weakref import ref\nfrom weakref import WeakValueDictionary\nimport sublime\n\n\nclass SessionView:\n \"\"\"\n Holds state per session per view.\n \"\"\"\n\n LANGUAGE_ID_KEY = \"lsp_language\"\n SHOW_DEFINITIONS_KEY = \"show_definitions\"\n HOVER_PROVIDER_KEY = \"hoverProvider\"\n HOVER_PROVIDER_COUNT_KEY = \"lsp_hover_provider_count\"\n\n _session_buffers = WeakValueDictionary() # type: WeakValueDictionary[Tuple[str, int], SessionBuffer]\n\n def __init__(self, listener: AbstractViewListener, session: Session) -> None:\n self.view = listener.view\n self.session = session\n self.active_requests = {} # type: Dict[int, Request]\n settings = self.view.settings()\n # TODO: Language ID must be UNIQUE!\n languages = settings.get(self.LANGUAGE_ID_KEY)\n self._language_id = ''\n if not isinstance(languages, dict):\n languages = {}\n for language in session.config.languages:\n if language.match_scope(view2scope(self.view)):\n languages[session.config.name] = language.id\n self._language_id = language.id\n break\n settings.set(self.LANGUAGE_ID_KEY, languages)\n buffer_id = self.view.buffer_id()\n key = (session.config.name, buffer_id)\n session_buffer = self._session_buffers.get(key)\n if session_buffer is None:\n session_buffer = SessionBuffer(self, buffer_id, self._language_id)\n self._session_buffers[key] = session_buffer\n else:\n session_buffer.add_session_view(self)\n self.session_buffer = session_buffer\n self.listener = ref(listener)\n session.register_session_view_async(self)\n session.config.set_view_status(self.view, \"\")\n if self.session.has_capability(self.HOVER_PROVIDER_KEY):\n self._increment_hover_count()\n\n def __del__(self) -> None:\n if self.session.has_capability(self.HOVER_PROVIDER_KEY):\n self._decrement_hover_count()\n # If the session is exiting then there's no point in sending textDocument/didClose and there's also no point\n # in unregistering ourselves from the session.\n if not self.session.exiting:\n 
self.session.unregister_session_view_async(self)\n self.session.config.erase_view_status(self.view)\n settings = self.view.settings() # type: sublime.Settings\n # TODO: Language ID must be UNIQUE!\n languages = settings.get(self.LANGUAGE_ID_KEY)\n if isinstance(languages, dict):\n languages.pop(self.session.config.name, None)\n if languages:\n settings.set(self.LANGUAGE_ID_KEY, languages)\n else:\n settings.erase(self.LANGUAGE_ID_KEY)\n for severity in range(1, len(DIAGNOSTIC_SEVERITY) + 1):\n self.view.erase_regions(self.diagnostics_key(severity))\n\n def _increment_hover_count(self) -> None:\n settings = self.view.settings()\n count = settings.get(self.HOVER_PROVIDER_COUNT_KEY, 0)\n if isinstance(count, int):\n count += 1\n settings.set(self.HOVER_PROVIDER_COUNT_KEY, count)\n settings.set(self.SHOW_DEFINITIONS_KEY, False)\n\n def _decrement_hover_count(self) -> None:\n settings = self.view.settings()\n count = settings.get(self.HOVER_PROVIDER_COUNT_KEY)\n if isinstance(count, int):\n count -= 1\n if count == 0:\n settings.erase(self.HOVER_PROVIDER_COUNT_KEY)\n settings.set(self.SHOW_DEFINITIONS_KEY, True)\n\n def get_capability(self, capability_path: str) -> Optional[Any]:\n return self.session_buffer.get_capability(capability_path)\n\n def has_capability(self, capability_path: str) -> bool:\n value = self.session_buffer.get_capability(capability_path)\n return isinstance(value, dict) or bool(value)\n\n def on_capability_added_async(self, capability_path: str, options: Dict[str, Any]) -> None:\n if capability_path == self.HOVER_PROVIDER_KEY:\n self._increment_hover_count()\n\n def on_capability_removed_async(self, discarded: Dict[str, Any]) -> None:\n if self.HOVER_PROVIDER_KEY in discarded:\n self._decrement_hover_count()\n\n def has_capability_async(self, capability_path: str) -> bool:\n return self.session_buffer.has_capability(capability_path)\n\n def shutdown_async(self) -> None:\n listener = self.listener()\n if listener:\n listener.on_session_shutdown_async(self.session)\n\n def diagnostics_key(self, severity: int) -> str:\n return \"lsp{}d{}\".format(self.session.config.name, severity)\n\n def present_diagnostics_async(self, flags: int) -> None:\n data_per_severity = self.session_buffer.data_per_severity\n for severity in reversed(range(1, len(DIAGNOSTIC_SEVERITY) + 1)):\n key = self.diagnostics_key(severity)\n data = data_per_severity.get(severity)\n if data is None:\n self.view.erase_regions(key)\n elif ((severity <= userprefs().show_diagnostics_severity_level) and\n (data.icon or flags != (sublime.DRAW_NO_FILL | sublime.DRAW_NO_OUTLINE))):\n\n def handle_same_regions(region: sublime.Region) -> sublime.Region:\n # this allows showing diagnostics with same begin and end in the view\n if region.a == region.b:\n return sublime.Region(region.a, region.a + 1)\n return region\n\n underline_regions = list(map(handle_same_regions, data.regions))\n self.view.add_regions(key, underline_regions, data.scope, data.icon, flags)\n else:\n self.view.erase_regions(key)\n listener = self.listener()\n if listener:\n listener.on_diagnostics_updated_async()\n\n def get_diagnostics_async(self) -> List[Diagnostic]:\n return self.session_buffer.diagnostics\n\n def on_request_started_async(self, request_id: int, request: Request) -> None:\n self.active_requests[request_id] = request\n\n def on_request_finished_async(self, request_id: int) -> None:\n self.active_requests.pop(request_id, None)\n\n def on_text_changed_async(self, changes: Iterable[sublime.TextChange]) -> None:\n 
self.session_buffer.on_text_changed_async(self.view, changes)\n\n def on_revert_async(self) -> None:\n self.session_buffer.on_revert_async(self.view)\n\n def on_reload_async(self) -> None:\n self.session_buffer.on_reload_async(self.view)\n\n def purge_changes_async(self) -> None:\n self.session_buffer.purge_changes_async(self.view)\n\n def on_pre_save_async(self, old_file_name: str) -> None:\n self.session_buffer.on_pre_save_async(self.view, old_file_name)\n\n def on_post_save_async(self) -> None:\n self.session_buffer.on_post_save_async(self.view)\n\n def __str__(self) -> str:\n return '{}:{}'.format(self.session.config.name, self.view.id())\n", "path": "plugin/session_view.py"}]} | 2,499 | 220 |
gh_patches_debug_34857 | rasdani/github-patches | git_diff | nltk__nltk-346 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
decisiontree classify demonstration is broken
$ python ./classify/decisiontree.py
Training classifier...
Traceback (most recent call last):
File "./decisiontree.py", line 285, in <module>
demo()
File "./decisiontree.py", line 280, in demo
binary_names_demo_features)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nltk-2.0.3-py2.6.egg/nltk/classify/util.py", line 182, in names_demo
classifier = trainer( [(features(n), g) for (n,g) in train] )
File "./decisiontree.py", line 275, in f
return DecisionTreeClassifier.train(x, binary=True, verbose=True)
File "./decisiontree.py", line 155, in train
feature_names, labeled_featuresets, feature_values, verbose)
File "./decisiontree.py", line 255, in best_binary_stump
fname, fval, labeled_featuresets)
File "./decisiontree.py", line 244, in binary_stump
default = DecisionTreeClassifier(neg_fdist.max())
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nltk-2.0.3-py2.6.egg/nltk/probability.py", line 270, in max
raise ValueError('A FreqDist must have at least one sample before max is defined.')
ValueError: A FreqDist must have at least one sample before max is defined.
--- END ISSUE ---
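(The traceback comes down to calling `FreqDist.max()` on an empty distribution. A minimal illustration of the failure and of the guard that avoids it; the fallback label is illustrative.)

```python
from nltk.probability import FreqDist

empty = FreqDist()
# empty.max()  # raises ValueError: needs at least one sample

fallback = 'default_label'  # e.g. the majority label of the full training set
label = empty.max() if empty.N() > 0 else fallback
print(label)  # -> 'default_label'
```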
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nltk/classify/decisiontree.py`
Content:
```
1 # Natural Language Toolkit: Decision Tree Classifiers
2 #
3 # Copyright (C) 2001-2013 NLTK Project
4 # Author: Edward Loper <[email protected]>
5 # URL: <http://www.nltk.org/>
6 # For license information, see LICENSE.TXT
7
8 """
9 A classifier model that decides which label to assign to a token on
10 the basis of a tree structure, where branches correspond to conditions
11 on feature values, and leaves correspond to label assignments.
12 """
13 from __future__ import print_function, unicode_literals
14
15 from collections import defaultdict
16
17 from nltk.probability import FreqDist, MLEProbDist, entropy
18 from nltk.classify.api import ClassifierI
19 from nltk.compat import python_2_unicode_compatible
20
21 @python_2_unicode_compatible
22 class DecisionTreeClassifier(ClassifierI):
23 def __init__(self, label, feature_name=None, decisions=None, default=None):
24 """
25 :param label: The most likely label for tokens that reach
26 this node in the decision tree. If this decision tree
27 has no children, then this label will be assigned to
28 any token that reaches this decision tree.
29 :param feature_name: The name of the feature that this
30 decision tree selects for.
31 :param decisions: A dictionary mapping from feature values
32 for the feature identified by ``feature_name`` to
33 child decision trees.
34 :param default: The child that will be used if the value of
35 feature ``feature_name`` does not match any of the keys in
36 ``decisions``. This is used when constructing binary
37 decision trees.
38 """
39 self._label = label
40 self._fname = feature_name
41 self._decisions = decisions
42 self._default = default
43
44 def labels(self):
45 labels = [self._label]
46 if self._decisions is not None:
47 for dt in self._decisions.values():
48 labels.extend(dt.labels())
49 if self._default is not None:
50 labels.extend(self._default.labels())
51 return list(set(labels))
52
53 def classify(self, featureset):
54 # Decision leaf:
55 if self._fname is None:
56 return self._label
57
58 # Decision tree:
59 fval = featureset.get(self._fname)
60 if fval in self._decisions:
61 return self._decisions[fval].classify(featureset)
62 elif self._default is not None:
63 return self._default.classify(featureset)
64 else:
65 return self._label
66
67 def error(self, labeled_featuresets):
68 errors = 0
69 for featureset, label in labeled_featuresets:
70 if self.classify(featureset) != label:
71 errors += 1
72 return float(errors)/len(labeled_featuresets)
73
74 def pp(self, width=70, prefix='', depth=4):
75 """
76 Return a string containing a pretty-printed version of this
77 decision tree. Each line in this string corresponds to a
78 single decision tree node or leaf, and indentation is used to
79 display the structure of the decision tree.
80 """
81 # [xx] display default!!
82 if self._fname is None:
83 n = width-len(prefix)-15
84 return '%s%s %s\n' % (prefix, '.'*n, self._label)
85 s = ''
86 for i, (fval, result) in enumerate(sorted(self._decisions.items())):
87 hdr = '%s%s=%s? ' % (prefix, self._fname, fval)
88 n = width-15-len(hdr)
89 s += '%s%s %s\n' % (hdr, '.'*(n), result._label)
90 if result._fname is not None and depth>1:
91 s += result.pp(width, prefix+' ', depth-1)
92 if self._default is not None:
93 n = width-len(prefix)-21
94 s += '%selse: %s %s\n' % (prefix, '.'*n, self._default._label)
95 if self._default._fname is not None and depth>1:
96 s += self._default.pp(width, prefix+' ', depth-1)
97 return s
98
99 def pseudocode(self, prefix='', depth=4):
100 """
101 Return a string representation of this decision tree that
102 expresses the decisions it makes as a nested set of pseudocode
103 if statements.
104 """
105 if self._fname is None:
106 return "%sreturn %r\n" % (prefix, self._label)
107 s = ''
108 for (fval, result) in sorted(self._decisions.items()):
109 s += '%sif %s == %r: ' % (prefix, self._fname, fval)
110 if result._fname is not None and depth>1:
111 s += '\n'+result.pseudocode(prefix+' ', depth-1)
112 else:
113 s += 'return %r\n' % result._label
114 if self._default is not None:
115 if len(self._decisions) == 1:
116 s += '%sif %s != %r: '% (prefix, self._fname,
117 self._decisions.keys()[0])
118 else:
119 s += '%selse: ' % (prefix,)
120 if self._default._fname is not None and depth>1:
121 s += '\n'+self._default.pseudocode(prefix+' ', depth-1)
122 else:
123 s += 'return %r\n' % self._default._label
124 return s
125
126 def __str__(self):
127 return self.pp()
128
129 @staticmethod
130 def train(labeled_featuresets, entropy_cutoff=0.05, depth_cutoff=100,
131 support_cutoff=10, binary=False, feature_values=None,
132 verbose=False):
133 """
134 :param binary: If true, then treat all feature/value pairs a
135 individual binary features, rather than using a single n-way
136 branch for each feature.
137 """
138 # Collect a list of all feature names.
139 feature_names = set()
140 for featureset, label in labeled_featuresets:
141 for fname in featureset:
142 feature_names.add(fname)
143
144 # Collect a list of the values each feature can take.
145 if feature_values is None and binary:
146 feature_values = defaultdict(set)
147 for featureset, label in labeled_featuresets:
148 for fname, fval in featureset.items():
149 feature_values[fname].add(fval)
150
151 # Start with a stump.
152 if not binary:
153 tree = DecisionTreeClassifier.best_stump(
154 feature_names, labeled_featuresets, verbose)
155 else:
156 tree = DecisionTreeClassifier.best_binary_stump(
157 feature_names, labeled_featuresets, feature_values, verbose)
158
159 # Refine the stump.
160 tree.refine(labeled_featuresets, entropy_cutoff, depth_cutoff-1,
161 support_cutoff, binary, feature_values, verbose)
162
163 # Return it
164 return tree
165
166 @staticmethod
167 def leaf(labeled_featuresets):
168 label = FreqDist(label for (featureset,label)
169 in labeled_featuresets).max()
170 return DecisionTreeClassifier(label)
171
172 @staticmethod
173 def stump(feature_name, labeled_featuresets):
174 label = FreqDist(label for (featureset,label)
175 in labeled_featuresets).max()
176
177 # Find the best label for each value.
178 freqs = defaultdict(FreqDist) # freq(label|value)
179 for featureset, label in labeled_featuresets:
180 feature_value = featureset.get(feature_name)
181 freqs[feature_value].inc(label)
182
183 decisions = dict((val, DecisionTreeClassifier(freqs[val].max()))
184 for val in freqs)
185 return DecisionTreeClassifier(label, feature_name, decisions)
186
187 def refine(self, labeled_featuresets, entropy_cutoff, depth_cutoff,
188 support_cutoff, binary=False, feature_values=None,
189 verbose=False):
190 if len(labeled_featuresets) <= support_cutoff: return
191 if self._fname is None: return
192 if depth_cutoff <= 0: return
193 for fval in self._decisions:
194 fval_featuresets = [(featureset,label) for (featureset,label)
195 in labeled_featuresets
196 if featureset.get(self._fname) == fval]
197
198 label_freqs = FreqDist(label for (featureset,label)
199 in fval_featuresets)
200 if entropy(MLEProbDist(label_freqs)) > entropy_cutoff:
201 self._decisions[fval] = DecisionTreeClassifier.train(
202 fval_featuresets, entropy_cutoff, depth_cutoff,
203 support_cutoff, binary, feature_values, verbose)
204 if self._default is not None:
205 default_featuresets = [(featureset, label) for (featureset, label)
206 in labeled_featuresets
207 if featureset.get(self._fname) not in
208 self._decisions]
209 label_freqs = FreqDist(label for (featureset,label)
210 in default_featuresets)
211 if entropy(MLEProbDist(label_freqs)) > entropy_cutoff:
212 self._default = DecisionTreeClassifier.train(
213 default_featuresets, entropy_cutoff, depth_cutoff,
214 support_cutoff, binary, feature_values, verbose)
215
216 @staticmethod
217 def best_stump(feature_names, labeled_featuresets, verbose=False):
218 best_stump = DecisionTreeClassifier.leaf(labeled_featuresets)
219 best_error = best_stump.error(labeled_featuresets)
220 for fname in feature_names:
221 stump = DecisionTreeClassifier.stump(fname, labeled_featuresets)
222 stump_error = stump.error(labeled_featuresets)
223 if stump_error < best_error:
224 best_error = stump_error
225 best_stump = stump
226 if verbose:
227 print(('best stump for %6d toks uses %-20s err=%6.4f' %
228 (len(labeled_featuresets), best_stump._fname, best_error)))
229 return best_stump
230
231 @staticmethod
232 def binary_stump(feature_name, feature_value, labeled_featuresets):
233 label = FreqDist(label for (featureset, label)
234 in labeled_featuresets).max()
235
236 # Find the best label for each value.
237 pos_fdist = FreqDist()
238 neg_fdist = FreqDist()
239 for featureset, label in labeled_featuresets:
240 if featureset.get(feature_name) == feature_value:
241 pos_fdist.inc(label)
242 else:
243 neg_fdist.inc(label)
244
245 decisions = {feature_value: DecisionTreeClassifier(pos_fdist.max())}
246 default = DecisionTreeClassifier(neg_fdist.max())
247 return DecisionTreeClassifier(label, feature_name, decisions, default)
248
249 @staticmethod
250 def best_binary_stump(feature_names, labeled_featuresets, feature_values,
251 verbose=False):
252 best_stump = DecisionTreeClassifier.leaf(labeled_featuresets)
253 best_error = best_stump.error(labeled_featuresets)
254 for fname in feature_names:
255 for fval in feature_values[fname]:
256 stump = DecisionTreeClassifier.binary_stump(
257 fname, fval, labeled_featuresets)
258 stump_error = stump.error(labeled_featuresets)
259 if stump_error < best_error:
260 best_error = stump_error
261 best_stump = stump
262 if best_stump._decisions:
263 descr = '%s=%s' % (best_stump._fname,
264 best_stump._decisions.keys()[0])
265 else:
266 descr = '(default)'
267 if verbose:
268 print(('best stump for %6d toks uses %-20s err=%6.4f' %
269 (len(labeled_featuresets), descr, best_error)))
270 return best_stump
271
272 ##//////////////////////////////////////////////////////
273 ## Demo
274 ##//////////////////////////////////////////////////////
275
276 def f(x):
277 return DecisionTreeClassifier.train(x, binary=True, verbose=True)
278
279 def demo():
280 from nltk.classify.util import names_demo, binary_names_demo_features
281 classifier = names_demo(f, #DecisionTreeClassifier.train,
282 binary_names_demo_features)
283 print(classifier.pp(depth=7))
284 print(classifier.pseudocode(depth=7))
285
286 if __name__ == '__main__':
287 demo()
288
289
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/nltk/classify/decisiontree.py b/nltk/classify/decisiontree.py
--- a/nltk/classify/decisiontree.py
+++ b/nltk/classify/decisiontree.py
@@ -114,7 +114,7 @@
if self._default is not None:
if len(self._decisions) == 1:
s += '%sif %s != %r: '% (prefix, self._fname,
- self._decisions.keys()[0])
+ list(self._decisions.keys())[0])
else:
s += '%selse: ' % (prefix,)
if self._default._fname is not None and depth>1:
@@ -131,7 +131,7 @@
support_cutoff=10, binary=False, feature_values=None,
verbose=False):
"""
- :param binary: If true, then treat all feature/value pairs a
+ :param binary: If true, then treat all feature/value pairs as
individual binary features, rather than using a single n-way
branch for each feature.
"""
@@ -242,8 +242,15 @@
else:
neg_fdist.inc(label)
- decisions = {feature_value: DecisionTreeClassifier(pos_fdist.max())}
- default = DecisionTreeClassifier(neg_fdist.max())
+
+ decisions = {}
+ default = label
+ # But hopefully we have observations!
+ if pos_fdist.N() > 0:
+ decisions = {feature_value: DecisionTreeClassifier(pos_fdist.max())}
+ if neg_fdist.N() > 0:
+ default = DecisionTreeClassifier(neg_fdist.max())
+
return DecisionTreeClassifier(label, feature_name, decisions, default)
@staticmethod
@@ -261,7 +268,7 @@
best_stump = stump
if best_stump._decisions:
descr = '%s=%s' % (best_stump._fname,
- best_stump._decisions.keys()[0])
+ list(best_stump._decisions.keys())[0])
else:
descr = '(default)'
if verbose:
| {"golden_diff": "diff --git a/nltk/classify/decisiontree.py b/nltk/classify/decisiontree.py\n--- a/nltk/classify/decisiontree.py\n+++ b/nltk/classify/decisiontree.py\n@@ -114,7 +114,7 @@\n if self._default is not None:\n if len(self._decisions) == 1:\n s += '%sif %s != %r: '% (prefix, self._fname,\n- self._decisions.keys()[0])\n+ list(self._decisions.keys())[0])\n else:\n s += '%selse: ' % (prefix,)\n if self._default._fname is not None and depth>1:\n@@ -131,7 +131,7 @@\n support_cutoff=10, binary=False, feature_values=None,\n verbose=False):\n \"\"\"\n- :param binary: If true, then treat all feature/value pairs a\n+ :param binary: If true, then treat all feature/value pairs as\n individual binary features, rather than using a single n-way\n branch for each feature.\n \"\"\"\n@@ -242,8 +242,15 @@\n else:\n neg_fdist.inc(label)\n \n- decisions = {feature_value: DecisionTreeClassifier(pos_fdist.max())}\n- default = DecisionTreeClassifier(neg_fdist.max())\n+\n+ decisions = {}\n+ default = label\n+ # But hopefully we have observations!\n+ if pos_fdist.N() > 0:\n+ decisions = {feature_value: DecisionTreeClassifier(pos_fdist.max())}\n+ if neg_fdist.N() > 0:\n+ default = DecisionTreeClassifier(neg_fdist.max())\n+\n return DecisionTreeClassifier(label, feature_name, decisions, default)\n \n @staticmethod\n@@ -261,7 +268,7 @@\n best_stump = stump\n if best_stump._decisions:\n descr = '%s=%s' % (best_stump._fname,\n- best_stump._decisions.keys()[0])\n+ list(best_stump._decisions.keys())[0])\n else:\n descr = '(default)'\n if verbose:\n", "issue": "decisiontree classify demonstration is broken\n$ python ./classify/decisiontree.py\nTraining classifier...\nTraceback (most recent call last):\n File \"./decisiontree.py\", line 285, in <module>\n demo()\n File \"./decisiontree.py\", line 280, in demo\n binary_names_demo_features)\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nltk-2.0.3-py2.6.egg/nltk/classify/util.py\", line 182, in names_demo\n classifier = trainer( [(features(n), g) for (n,g) in train] )\n File \"./decisiontree.py\", line 275, in f\n return DecisionTreeClassifier.train(x, binary=True, verbose=True)\n File \"./decisiontree.py\", line 155, in train\n feature_names, labeled_featuresets, feature_values, verbose)\n File \"./decisiontree.py\", line 255, in best_binary_stump\n fname, fval, labeled_featuresets)\n File \"./decisiontree.py\", line 244, in binary_stump\n default = DecisionTreeClassifier(neg_fdist.max())\n File \"/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/nltk-2.0.3-py2.6.egg/nltk/probability.py\", line 270, in max\n raise ValueError('A FreqDist must have at least one sample before max is defined.')\nValueError: A FreqDist must have at least one sample before max is defined.\n\n", "before_files": [{"content": "# Natural Language Toolkit: Decision Tree Classifiers\n#\n# Copyright (C) 2001-2013 NLTK Project\n# Author: Edward Loper <[email protected]>\n# URL: <http://www.nltk.org/>\n# For license information, see LICENSE.TXT\n\n\"\"\"\nA classifier model that decides which label to assign to a token on\nthe basis of a tree structure, where branches correspond to conditions\non feature values, and leaves correspond to label assignments.\n\"\"\"\nfrom __future__ import print_function, unicode_literals\n\nfrom collections import defaultdict\n\nfrom nltk.probability import FreqDist, MLEProbDist, entropy\nfrom nltk.classify.api import ClassifierI\nfrom nltk.compat import 
python_2_unicode_compatible\n\n@python_2_unicode_compatible\nclass DecisionTreeClassifier(ClassifierI):\n def __init__(self, label, feature_name=None, decisions=None, default=None):\n \"\"\"\n :param label: The most likely label for tokens that reach\n this node in the decision tree. If this decision tree\n has no children, then this label will be assigned to\n any token that reaches this decision tree.\n :param feature_name: The name of the feature that this\n decision tree selects for.\n :param decisions: A dictionary mapping from feature values\n for the feature identified by ``feature_name`` to\n child decision trees.\n :param default: The child that will be used if the value of\n feature ``feature_name`` does not match any of the keys in\n ``decisions``. This is used when constructing binary\n decision trees.\n \"\"\"\n self._label = label\n self._fname = feature_name\n self._decisions = decisions\n self._default = default\n\n def labels(self):\n labels = [self._label]\n if self._decisions is not None:\n for dt in self._decisions.values():\n labels.extend(dt.labels())\n if self._default is not None:\n labels.extend(self._default.labels())\n return list(set(labels))\n\n def classify(self, featureset):\n # Decision leaf:\n if self._fname is None:\n return self._label\n\n # Decision tree:\n fval = featureset.get(self._fname)\n if fval in self._decisions:\n return self._decisions[fval].classify(featureset)\n elif self._default is not None:\n return self._default.classify(featureset)\n else:\n return self._label\n\n def error(self, labeled_featuresets):\n errors = 0\n for featureset, label in labeled_featuresets:\n if self.classify(featureset) != label:\n errors += 1\n return float(errors)/len(labeled_featuresets)\n\n def pp(self, width=70, prefix='', depth=4):\n \"\"\"\n Return a string containing a pretty-printed version of this\n decision tree. Each line in this string corresponds to a\n single decision tree node or leaf, and indentation is used to\n display the structure of the decision tree.\n \"\"\"\n # [xx] display default!!\n if self._fname is None:\n n = width-len(prefix)-15\n return '%s%s %s\\n' % (prefix, '.'*n, self._label)\n s = ''\n for i, (fval, result) in enumerate(sorted(self._decisions.items())):\n hdr = '%s%s=%s? 
' % (prefix, self._fname, fval)\n n = width-15-len(hdr)\n s += '%s%s %s\\n' % (hdr, '.'*(n), result._label)\n if result._fname is not None and depth>1:\n s += result.pp(width, prefix+' ', depth-1)\n if self._default is not None:\n n = width-len(prefix)-21\n s += '%selse: %s %s\\n' % (prefix, '.'*n, self._default._label)\n if self._default._fname is not None and depth>1:\n s += self._default.pp(width, prefix+' ', depth-1)\n return s\n\n def pseudocode(self, prefix='', depth=4):\n \"\"\"\n Return a string representation of this decision tree that\n expresses the decisions it makes as a nested set of pseudocode\n if statements.\n \"\"\"\n if self._fname is None:\n return \"%sreturn %r\\n\" % (prefix, self._label)\n s = ''\n for (fval, result) in sorted(self._decisions.items()):\n s += '%sif %s == %r: ' % (prefix, self._fname, fval)\n if result._fname is not None and depth>1:\n s += '\\n'+result.pseudocode(prefix+' ', depth-1)\n else:\n s += 'return %r\\n' % result._label\n if self._default is not None:\n if len(self._decisions) == 1:\n s += '%sif %s != %r: '% (prefix, self._fname,\n self._decisions.keys()[0])\n else:\n s += '%selse: ' % (prefix,)\n if self._default._fname is not None and depth>1:\n s += '\\n'+self._default.pseudocode(prefix+' ', depth-1)\n else:\n s += 'return %r\\n' % self._default._label\n return s\n\n def __str__(self):\n return self.pp()\n\n @staticmethod\n def train(labeled_featuresets, entropy_cutoff=0.05, depth_cutoff=100,\n support_cutoff=10, binary=False, feature_values=None,\n verbose=False):\n \"\"\"\n :param binary: If true, then treat all feature/value pairs a\n individual binary features, rather than using a single n-way\n branch for each feature.\n \"\"\"\n # Collect a list of all feature names.\n feature_names = set()\n for featureset, label in labeled_featuresets:\n for fname in featureset:\n feature_names.add(fname)\n\n # Collect a list of the values each feature can take.\n if feature_values is None and binary:\n feature_values = defaultdict(set)\n for featureset, label in labeled_featuresets:\n for fname, fval in featureset.items():\n feature_values[fname].add(fval)\n\n # Start with a stump.\n if not binary:\n tree = DecisionTreeClassifier.best_stump(\n feature_names, labeled_featuresets, verbose)\n else:\n tree = DecisionTreeClassifier.best_binary_stump(\n feature_names, labeled_featuresets, feature_values, verbose)\n\n # Refine the stump.\n tree.refine(labeled_featuresets, entropy_cutoff, depth_cutoff-1,\n support_cutoff, binary, feature_values, verbose)\n\n # Return it\n return tree\n\n @staticmethod\n def leaf(labeled_featuresets):\n label = FreqDist(label for (featureset,label)\n in labeled_featuresets).max()\n return DecisionTreeClassifier(label)\n\n @staticmethod\n def stump(feature_name, labeled_featuresets):\n label = FreqDist(label for (featureset,label)\n in labeled_featuresets).max()\n\n # Find the best label for each value.\n freqs = defaultdict(FreqDist) # freq(label|value)\n for featureset, label in labeled_featuresets:\n feature_value = featureset.get(feature_name)\n freqs[feature_value].inc(label)\n\n decisions = dict((val, DecisionTreeClassifier(freqs[val].max()))\n for val in freqs)\n return DecisionTreeClassifier(label, feature_name, decisions)\n\n def refine(self, labeled_featuresets, entropy_cutoff, depth_cutoff,\n support_cutoff, binary=False, feature_values=None,\n verbose=False):\n if len(labeled_featuresets) <= support_cutoff: return\n if self._fname is None: return\n if depth_cutoff <= 0: return\n for fval in self._decisions:\n 
fval_featuresets = [(featureset,label) for (featureset,label)\n in labeled_featuresets\n if featureset.get(self._fname) == fval]\n\n label_freqs = FreqDist(label for (featureset,label)\n in fval_featuresets)\n if entropy(MLEProbDist(label_freqs)) > entropy_cutoff:\n self._decisions[fval] = DecisionTreeClassifier.train(\n fval_featuresets, entropy_cutoff, depth_cutoff,\n support_cutoff, binary, feature_values, verbose)\n if self._default is not None:\n default_featuresets = [(featureset, label) for (featureset, label)\n in labeled_featuresets\n if featureset.get(self._fname) not in\n self._decisions]\n label_freqs = FreqDist(label for (featureset,label)\n in default_featuresets)\n if entropy(MLEProbDist(label_freqs)) > entropy_cutoff:\n self._default = DecisionTreeClassifier.train(\n default_featuresets, entropy_cutoff, depth_cutoff,\n support_cutoff, binary, feature_values, verbose)\n\n @staticmethod\n def best_stump(feature_names, labeled_featuresets, verbose=False):\n best_stump = DecisionTreeClassifier.leaf(labeled_featuresets)\n best_error = best_stump.error(labeled_featuresets)\n for fname in feature_names:\n stump = DecisionTreeClassifier.stump(fname, labeled_featuresets)\n stump_error = stump.error(labeled_featuresets)\n if stump_error < best_error:\n best_error = stump_error\n best_stump = stump\n if verbose:\n print(('best stump for %6d toks uses %-20s err=%6.4f' %\n (len(labeled_featuresets), best_stump._fname, best_error)))\n return best_stump\n\n @staticmethod\n def binary_stump(feature_name, feature_value, labeled_featuresets):\n label = FreqDist(label for (featureset, label)\n in labeled_featuresets).max()\n\n # Find the best label for each value.\n pos_fdist = FreqDist()\n neg_fdist = FreqDist()\n for featureset, label in labeled_featuresets:\n if featureset.get(feature_name) == feature_value:\n pos_fdist.inc(label)\n else:\n neg_fdist.inc(label)\n\n decisions = {feature_value: DecisionTreeClassifier(pos_fdist.max())}\n default = DecisionTreeClassifier(neg_fdist.max())\n return DecisionTreeClassifier(label, feature_name, decisions, default)\n\n @staticmethod\n def best_binary_stump(feature_names, labeled_featuresets, feature_values,\n verbose=False):\n best_stump = DecisionTreeClassifier.leaf(labeled_featuresets)\n best_error = best_stump.error(labeled_featuresets)\n for fname in feature_names:\n for fval in feature_values[fname]:\n stump = DecisionTreeClassifier.binary_stump(\n fname, fval, labeled_featuresets)\n stump_error = stump.error(labeled_featuresets)\n if stump_error < best_error:\n best_error = stump_error\n best_stump = stump\n if best_stump._decisions:\n descr = '%s=%s' % (best_stump._fname,\n best_stump._decisions.keys()[0])\n else:\n descr = '(default)'\n if verbose:\n print(('best stump for %6d toks uses %-20s err=%6.4f' %\n (len(labeled_featuresets), descr, best_error)))\n return best_stump\n\n##//////////////////////////////////////////////////////\n## Demo\n##//////////////////////////////////////////////////////\n\ndef f(x):\n return DecisionTreeClassifier.train(x, binary=True, verbose=True)\n\ndef demo():\n from nltk.classify.util import names_demo, binary_names_demo_features\n classifier = names_demo(f, #DecisionTreeClassifier.train,\n binary_names_demo_features)\n print(classifier.pp(depth=7))\n print(classifier.pseudocode(depth=7))\n\nif __name__ == '__main__':\n demo()\n\n", "path": "nltk/classify/decisiontree.py"}], "after_files": [{"content": "# Natural Language Toolkit: Decision Tree Classifiers\n#\n# Copyright (C) 2001-2013 NLTK Project\n# 
Author: Edward Loper <[email protected]>\n# URL: <http://www.nltk.org/>\n# For license information, see LICENSE.TXT\n\n\"\"\"\nA classifier model that decides which label to assign to a token on\nthe basis of a tree structure, where branches correspond to conditions\non feature values, and leaves correspond to label assignments.\n\"\"\"\nfrom __future__ import print_function, unicode_literals\n\nfrom collections import defaultdict\n\nfrom nltk.probability import FreqDist, MLEProbDist, entropy\nfrom nltk.classify.api import ClassifierI\nfrom nltk.compat import python_2_unicode_compatible\n\n@python_2_unicode_compatible\nclass DecisionTreeClassifier(ClassifierI):\n def __init__(self, label, feature_name=None, decisions=None, default=None):\n \"\"\"\n :param label: The most likely label for tokens that reach\n this node in the decision tree. If this decision tree\n has no children, then this label will be assigned to\n any token that reaches this decision tree.\n :param feature_name: The name of the feature that this\n decision tree selects for.\n :param decisions: A dictionary mapping from feature values\n for the feature identified by ``feature_name`` to\n child decision trees.\n :param default: The child that will be used if the value of\n feature ``feature_name`` does not match any of the keys in\n ``decisions``. This is used when constructing binary\n decision trees.\n \"\"\"\n self._label = label\n self._fname = feature_name\n self._decisions = decisions\n self._default = default\n\n def labels(self):\n labels = [self._label]\n if self._decisions is not None:\n for dt in self._decisions.values():\n labels.extend(dt.labels())\n if self._default is not None:\n labels.extend(self._default.labels())\n return list(set(labels))\n\n def classify(self, featureset):\n # Decision leaf:\n if self._fname is None:\n return self._label\n\n # Decision tree:\n fval = featureset.get(self._fname)\n if fval in self._decisions:\n return self._decisions[fval].classify(featureset)\n elif self._default is not None:\n return self._default.classify(featureset)\n else:\n return self._label\n\n def error(self, labeled_featuresets):\n errors = 0\n for featureset, label in labeled_featuresets:\n if self.classify(featureset) != label:\n errors += 1\n return float(errors)/len(labeled_featuresets)\n\n def pp(self, width=70, prefix='', depth=4):\n \"\"\"\n Return a string containing a pretty-printed version of this\n decision tree. Each line in this string corresponds to a\n single decision tree node or leaf, and indentation is used to\n display the structure of the decision tree.\n \"\"\"\n # [xx] display default!!\n if self._fname is None:\n n = width-len(prefix)-15\n return '%s%s %s\\n' % (prefix, '.'*n, self._label)\n s = ''\n for i, (fval, result) in enumerate(sorted(self._decisions.items())):\n hdr = '%s%s=%s? 
' % (prefix, self._fname, fval)\n n = width-15-len(hdr)\n s += '%s%s %s\\n' % (hdr, '.'*(n), result._label)\n if result._fname is not None and depth>1:\n s += result.pp(width, prefix+' ', depth-1)\n if self._default is not None:\n n = width-len(prefix)-21\n s += '%selse: %s %s\\n' % (prefix, '.'*n, self._default._label)\n if self._default._fname is not None and depth>1:\n s += self._default.pp(width, prefix+' ', depth-1)\n return s\n\n def pseudocode(self, prefix='', depth=4):\n \"\"\"\n Return a string representation of this decision tree that\n expresses the decisions it makes as a nested set of pseudocode\n if statements.\n \"\"\"\n if self._fname is None:\n return \"%sreturn %r\\n\" % (prefix, self._label)\n s = ''\n for (fval, result) in sorted(self._decisions.items()):\n s += '%sif %s == %r: ' % (prefix, self._fname, fval)\n if result._fname is not None and depth>1:\n s += '\\n'+result.pseudocode(prefix+' ', depth-1)\n else:\n s += 'return %r\\n' % result._label\n if self._default is not None:\n if len(self._decisions) == 1:\n s += '%sif %s != %r: '% (prefix, self._fname,\n list(self._decisions.keys())[0])\n else:\n s += '%selse: ' % (prefix,)\n if self._default._fname is not None and depth>1:\n s += '\\n'+self._default.pseudocode(prefix+' ', depth-1)\n else:\n s += 'return %r\\n' % self._default._label\n return s\n\n def __str__(self):\n return self.pp()\n\n @staticmethod\n def train(labeled_featuresets, entropy_cutoff=0.05, depth_cutoff=100,\n support_cutoff=10, binary=False, feature_values=None,\n verbose=False):\n \"\"\"\n :param binary: If true, then treat all feature/value pairs as\n individual binary features, rather than using a single n-way\n branch for each feature.\n \"\"\"\n # Collect a list of all feature names.\n feature_names = set()\n for featureset, label in labeled_featuresets:\n for fname in featureset:\n feature_names.add(fname)\n\n # Collect a list of the values each feature can take.\n if feature_values is None and binary:\n feature_values = defaultdict(set)\n for featureset, label in labeled_featuresets:\n for fname, fval in featureset.items():\n feature_values[fname].add(fval)\n\n # Start with a stump.\n if not binary:\n tree = DecisionTreeClassifier.best_stump(\n feature_names, labeled_featuresets, verbose)\n else:\n tree = DecisionTreeClassifier.best_binary_stump(\n feature_names, labeled_featuresets, feature_values, verbose)\n\n # Refine the stump.\n tree.refine(labeled_featuresets, entropy_cutoff, depth_cutoff-1,\n support_cutoff, binary, feature_values, verbose)\n\n # Return it\n return tree\n\n @staticmethod\n def leaf(labeled_featuresets):\n label = FreqDist(label for (featureset,label)\n in labeled_featuresets).max()\n return DecisionTreeClassifier(label)\n\n @staticmethod\n def stump(feature_name, labeled_featuresets):\n label = FreqDist(label for (featureset,label)\n in labeled_featuresets).max()\n\n # Find the best label for each value.\n freqs = defaultdict(FreqDist) # freq(label|value)\n for featureset, label in labeled_featuresets:\n feature_value = featureset.get(feature_name)\n freqs[feature_value].inc(label)\n\n decisions = dict((val, DecisionTreeClassifier(freqs[val].max()))\n for val in freqs)\n return DecisionTreeClassifier(label, feature_name, decisions)\n\n def refine(self, labeled_featuresets, entropy_cutoff, depth_cutoff,\n support_cutoff, binary=False, feature_values=None,\n verbose=False):\n if len(labeled_featuresets) <= support_cutoff: return\n if self._fname is None: return\n if depth_cutoff <= 0: return\n for fval in 
self._decisions:\n fval_featuresets = [(featureset,label) for (featureset,label)\n in labeled_featuresets\n if featureset.get(self._fname) == fval]\n\n label_freqs = FreqDist(label for (featureset,label)\n in fval_featuresets)\n if entropy(MLEProbDist(label_freqs)) > entropy_cutoff:\n self._decisions[fval] = DecisionTreeClassifier.train(\n fval_featuresets, entropy_cutoff, depth_cutoff,\n support_cutoff, binary, feature_values, verbose)\n if self._default is not None:\n default_featuresets = [(featureset, label) for (featureset, label)\n in labeled_featuresets\n if featureset.get(self._fname) not in\n self._decisions]\n label_freqs = FreqDist(label for (featureset,label)\n in default_featuresets)\n if entropy(MLEProbDist(label_freqs)) > entropy_cutoff:\n self._default = DecisionTreeClassifier.train(\n default_featuresets, entropy_cutoff, depth_cutoff,\n support_cutoff, binary, feature_values, verbose)\n\n @staticmethod\n def best_stump(feature_names, labeled_featuresets, verbose=False):\n best_stump = DecisionTreeClassifier.leaf(labeled_featuresets)\n best_error = best_stump.error(labeled_featuresets)\n for fname in feature_names:\n stump = DecisionTreeClassifier.stump(fname, labeled_featuresets)\n stump_error = stump.error(labeled_featuresets)\n if stump_error < best_error:\n best_error = stump_error\n best_stump = stump\n if verbose:\n print(('best stump for %6d toks uses %-20s err=%6.4f' %\n (len(labeled_featuresets), best_stump._fname, best_error)))\n return best_stump\n\n @staticmethod\n def binary_stump(feature_name, feature_value, labeled_featuresets):\n label = FreqDist(label for (featureset, label)\n in labeled_featuresets).max()\n\n # Find the best label for each value.\n pos_fdist = FreqDist()\n neg_fdist = FreqDist()\n for featureset, label in labeled_featuresets:\n if featureset.get(feature_name) == feature_value:\n pos_fdist.inc(label)\n else:\n neg_fdist.inc(label)\n\n\n decisions = {}\n default = label\n # But hopefully we have observations!\n if pos_fdist.N() > 0:\n decisions = {feature_value: DecisionTreeClassifier(pos_fdist.max())}\n if neg_fdist.N() > 0:\n default = DecisionTreeClassifier(neg_fdist.max())\n\n return DecisionTreeClassifier(label, feature_name, decisions, default)\n\n @staticmethod\n def best_binary_stump(feature_names, labeled_featuresets, feature_values,\n verbose=False):\n best_stump = DecisionTreeClassifier.leaf(labeled_featuresets)\n best_error = best_stump.error(labeled_featuresets)\n for fname in feature_names:\n for fval in feature_values[fname]:\n stump = DecisionTreeClassifier.binary_stump(\n fname, fval, labeled_featuresets)\n stump_error = stump.error(labeled_featuresets)\n if stump_error < best_error:\n best_error = stump_error\n best_stump = stump\n if best_stump._decisions:\n descr = '%s=%s' % (best_stump._fname,\n list(best_stump._decisions.keys())[0])\n else:\n descr = '(default)'\n if verbose:\n print(('best stump for %6d toks uses %-20s err=%6.4f' %\n (len(labeled_featuresets), descr, best_error)))\n return best_stump\n\n##//////////////////////////////////////////////////////\n## Demo\n##//////////////////////////////////////////////////////\n\ndef f(x):\n return DecisionTreeClassifier.train(x, binary=True, verbose=True)\n\ndef demo():\n from nltk.classify.util import names_demo, binary_names_demo_features\n classifier = names_demo(f, #DecisionTreeClassifier.train,\n binary_names_demo_features)\n print(classifier.pp(depth=7))\n print(classifier.pseudocode(depth=7))\n\nif __name__ == '__main__':\n demo()\n\n", "path": 
"nltk/classify/decisiontree.py"}]} | 3,966 | 474 |
gh_patches_debug_18667 | rasdani/github-patches | git_diff | Bitmessage__PyBitmessage-1413 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Duplicate connections to some onion peers
The description is based on messages from `[chan] bitmessage`, but I saw it myself a couple of weeks ago (I had thought it was related to my #1394, which turned out to be wrong).

Changes proposed today:
```patch
diff --git a/src/network/connectionpool.py b/src/network/connectionpool.py
index e599cdf..c5ba701 100644
--- a/src/network/connectionpool.py
+++ b/src/network/connectionpool.py
@@ -93,7 +93,7 @@ class BMConnectionPool(object):
del self.inboundConnections[connection.destination.host]
except KeyError:
pass
- connection.close()
+ connection.handle_close()
def getListeningIP(self):
if BMConfigParser().safeGet("bitmessagesettings", "onionhostname").endswith(".onion"):
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/network/connectionpool.py`
Content:
```
1 from ConfigParser import NoOptionError, NoSectionError
2 import errno
3 import socket
4 import time
5 import random
6 import re
7
8 from bmconfigparser import BMConfigParser
9 from debug import logger
10 import helper_bootstrap
11 import knownnodes
12 from network.proxy import Proxy
13 from network.tcp import TCPServer, Socks5BMConnection, Socks4aBMConnection, TCPConnection
14 from network.udp import UDPSocket
15 from network.connectionchooser import chooseConnection
16 import network.asyncore_pollchoose as asyncore
17 import protocol
18 from singleton import Singleton
19 import state
20 import helper_random
21
22
23 @Singleton
24 class BMConnectionPool(object):
25 def __init__(self):
26 asyncore.set_rates(
27 BMConfigParser().safeGetInt("bitmessagesettings", "maxdownloadrate"),
28 BMConfigParser().safeGetInt("bitmessagesettings", "maxuploadrate"))
29 self.outboundConnections = {}
30 self.inboundConnections = {}
31 self.listeningSockets = {}
32 self.udpSockets = {}
33 self.streams = []
34 self.lastSpawned = 0
35 self.spawnWait = 2
36 self.bootstrapped = False
37
38 def connectToStream(self, streamNumber):
39 self.streams.append(streamNumber)
40
41 def getConnectionByAddr(self, addr):
42 if addr in self.inboundConnections:
43 return self.inboundConnections[addr]
44 try:
45 if addr.host in self.inboundConnections:
46 return self.inboundConnections[addr.host]
47 except AttributeError:
48 pass
49 if addr in self.outboundConnections:
50 return self.outboundConnections[addr]
51 try:
52 if addr.host in self.udpSockets:
53 return self.udpSockets[addr.host]
54 except AttributeError:
55 pass
56 raise KeyError
57
58 def isAlreadyConnected(self, nodeid):
59 for i in self.inboundConnections.values() + self.outboundConnections.values():
60 try:
61 if nodeid == i.nodeid:
62 return True
63 except AttributeError:
64 pass
65 return False
66
67 def addConnection(self, connection):
68 if isinstance(connection, UDPSocket):
69 return
70 if connection.isOutbound:
71 self.outboundConnections[connection.destination] = connection
72 else:
73 if connection.destination.host in self.inboundConnections:
74 self.inboundConnections[connection.destination] = connection
75 else:
76 self.inboundConnections[connection.destination.host] = connection
77
78 def removeConnection(self, connection):
79 if isinstance(connection, UDPSocket):
80 del self.udpSockets[connection.listening.host]
81 elif isinstance(connection, TCPServer):
82 del self.listeningSockets[state.Peer(connection.destination.host, connection.destination.port)]
83 elif connection.isOutbound:
84 try:
85 del self.outboundConnections[connection.destination]
86 except KeyError:
87 pass
88 else:
89 try:
90 del self.inboundConnections[connection.destination]
91 except KeyError:
92 try:
93 del self.inboundConnections[connection.destination.host]
94 except KeyError:
95 pass
96 connection.close()
97
98 def getListeningIP(self):
99 if BMConfigParser().safeGet("bitmessagesettings", "onionhostname").endswith(".onion"):
100 host = BMConfigParser().safeGet("bitmessagesettings", "onionbindip")
101 else:
102 host = '127.0.0.1'
103 if BMConfigParser().safeGetBoolean("bitmessagesettings", "sockslisten") or \
104 BMConfigParser().get("bitmessagesettings", "socksproxytype") == "none":
105 # python doesn't like bind + INADDR_ANY?
106 #host = socket.INADDR_ANY
107 host = BMConfigParser().get("network", "bind")
108 return host
109
110 def startListening(self, bind=None):
111 if bind is None:
112 bind = self.getListeningIP()
113 port = BMConfigParser().safeGetInt("bitmessagesettings", "port")
114 # correct port even if it changed
115 ls = TCPServer(host=bind, port=port)
116 self.listeningSockets[ls.destination] = ls
117
118 def startUDPSocket(self, bind=None):
119 if bind is None:
120 host = self.getListeningIP()
121 udpSocket = UDPSocket(host=host, announcing=True)
122 else:
123 if bind is False:
124 udpSocket = UDPSocket(announcing=False)
125 else:
126 udpSocket = UDPSocket(host=bind, announcing=True)
127 self.udpSockets[udpSocket.listening.host] = udpSocket
128
129 def loop(self):
130 # defaults to empty loop if outbound connections are maxed
131 spawnConnections = False
132 acceptConnections = True
133 if BMConfigParser().safeGetBoolean('bitmessagesettings', 'dontconnect'):
134 acceptConnections = False
135 elif BMConfigParser().safeGetBoolean('bitmessagesettings', 'sendoutgoingconnections'):
136 spawnConnections = True
137 if BMConfigParser().get('bitmessagesettings', 'socksproxytype')[0:5] == 'SOCKS' and \
138 (not BMConfigParser().getboolean('bitmessagesettings', 'sockslisten') and \
139 ".onion" not in BMConfigParser().get('bitmessagesettings', 'onionhostname')):
140 acceptConnections = False
141
142 if spawnConnections:
143 if not knownnodes.knownNodesActual:
144 helper_bootstrap.dns()
145 if not self.bootstrapped:
146 self.bootstrapped = True
147 Proxy.proxy = (BMConfigParser().safeGet("bitmessagesettings", "sockshostname"),
148 BMConfigParser().safeGetInt("bitmessagesettings", "socksport"))
149 # TODO AUTH
150 # TODO reset based on GUI settings changes
151 try:
152 if not BMConfigParser().get("network", "onionsocksproxytype").startswith("SOCKS"):
153 raise NoOptionError
154 Proxy.onionproxy = (BMConfigParser().get("network", "onionsockshostname"),
155 BMConfigParser().getint("network", "onionsocksport"))
156 except (NoOptionError, NoSectionError):
157 Proxy.onionproxy = None
158 established = sum(1 for c in self.outboundConnections.values() if (c.connected and c.fullyEstablished))
159 pending = len(self.outboundConnections) - established
160 if established < BMConfigParser().safeGetInt("bitmessagesettings", "maxoutboundconnections"):
161 for i in range(state.maximumNumberOfHalfOpenConnections - pending):
162 try:
163 chosen = chooseConnection(helper_random.randomchoice(self.streams))
164 except ValueError:
165 continue
166 if chosen in self.outboundConnections:
167 continue
168 if chosen.host in self.inboundConnections:
169 continue
170 # don't connect to self
171 if chosen in state.ownAddresses:
172 continue
173
174 #for c in self.outboundConnections:
175 # if chosen == c.destination:
176 # continue
177 #for c in self.inboundConnections:
178 # if chosen.host == c.destination.host:
179 # continue
180 try:
181 if chosen.host.endswith(".onion") and Proxy.onionproxy is not None:
182 if BMConfigParser().get("network", "onionsocksproxytype") == "SOCKS5":
183 self.addConnection(Socks5BMConnection(chosen))
184 elif BMConfigParser().get("network", "onionsocksproxytype") == "SOCKS4a":
185 self.addConnection(Socks4aBMConnection(chosen))
186 elif BMConfigParser().safeGet("bitmessagesettings", "socksproxytype") == "SOCKS5":
187 self.addConnection(Socks5BMConnection(chosen))
188 elif BMConfigParser().safeGet("bitmessagesettings", "socksproxytype") == "SOCKS4a":
189 self.addConnection(Socks4aBMConnection(chosen))
190 else:
191 self.addConnection(TCPConnection(chosen))
192 except socket.error as e:
193 if e.errno == errno.ENETUNREACH:
194 continue
195 except (NoSectionError, NoOptionError):
196 # shouldn't happen
197 pass
198
199 self.lastSpawned = time.time()
200 else:
201 for i in (
202 self.inboundConnections.values() +
203 self.outboundConnections.values()
204 ):
205 i.set_state("close")
206 # FIXME: rating will be increased after next connection
207 i.handle_close()
208
209 if acceptConnections:
210 if not self.listeningSockets:
211 if BMConfigParser().safeGet("network", "bind") == '':
212 self.startListening()
213 else:
214 for bind in re.sub("[^\w.]+", " ", BMConfigParser().safeGet("network", "bind")).split():
215 self.startListening(bind)
216 logger.info('Listening for incoming connections.')
217 if not self.udpSockets:
218 if BMConfigParser().safeGet("network", "bind") == '':
219 self.startUDPSocket()
220 else:
221 for bind in re.sub("[^\w.]+", " ", BMConfigParser().safeGet("network", "bind")).split():
222 self.startUDPSocket(bind)
223 self.startUDPSocket(False)
224 logger.info('Starting UDP socket(s).')
225 else:
226 if self.listeningSockets:
227 for i in self.listeningSockets.values():
228 i.close_reason = "Stopping listening"
229 i.accepting = i.connecting = i.connected = False
230 logger.info('Stopped listening for incoming connections.')
231 if self.udpSockets:
232 for i in self.udpSockets.values():
233 i.close_reason = "Stopping UDP socket"
234 i.accepting = i.connecting = i.connected = False
235 logger.info('Stopped udp sockets.')
236
237 loopTime = float(self.spawnWait)
238 if self.lastSpawned < time.time() - self.spawnWait:
239 loopTime = 2.0
240 asyncore.loop(timeout=loopTime, count=1000)
241
242 reaper = []
243 for i in self.inboundConnections.values() + self.outboundConnections.values():
244 minTx = time.time() - 20
245 if i.fullyEstablished:
246 minTx -= 300 - 20
247 if i.lastTx < minTx:
248 if i.fullyEstablished:
249 i.append_write_buf(protocol.CreatePacket('ping'))
250 else:
251 i.close_reason = "Timeout (%is)" % (time.time() - i.lastTx)
252 i.set_state("close")
253 for i in self.inboundConnections.values() + self.outboundConnections.values() + self.listeningSockets.values() + self.udpSockets.values():
254 if not (i.accepting or i.connecting or i.connected):
255 reaper.append(i)
256 else:
257 try:
258 if i.state == "close":
259 reaper.append(i)
260 except AttributeError:
261 pass
262 for i in reaper:
263 self.removeConnection(i)
264
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/network/connectionpool.py b/src/network/connectionpool.py
--- a/src/network/connectionpool.py
+++ b/src/network/connectionpool.py
@@ -93,7 +93,7 @@
del self.inboundConnections[connection.destination.host]
except KeyError:
pass
- connection.close()
+ connection.handle_close()
def getListeningIP(self):
if BMConfigParser().safeGet("bitmessagesettings", "onionhostname").endswith(".onion"):
@@ -202,7 +202,6 @@
self.inboundConnections.values() +
self.outboundConnections.values()
):
- i.set_state("close")
# FIXME: rating will be increased after next connection
i.handle_close()
| {"golden_diff": "diff --git a/src/network/connectionpool.py b/src/network/connectionpool.py\n--- a/src/network/connectionpool.py\n+++ b/src/network/connectionpool.py\n@@ -93,7 +93,7 @@\n del self.inboundConnections[connection.destination.host]\n except KeyError:\n pass\n- connection.close()\n+ connection.handle_close()\n \n def getListeningIP(self):\n if BMConfigParser().safeGet(\"bitmessagesettings\", \"onionhostname\").endswith(\".onion\"):\n@@ -202,7 +202,6 @@\n self.inboundConnections.values() +\n self.outboundConnections.values()\n ):\n- i.set_state(\"close\")\n # FIXME: rating will be increased after next connection\n i.handle_close()\n", "issue": "Duplicate connections to some onion peers\nThe description is based on messages from `[chan] bitmessage` but I've seen it myself a couple of weeks ago (I thought it's related to my #1394 - wrongly).\r\n\r\n\r\n\r\nChanges proposed today:\r\n```patch\r\ndiff --git a/src/network/connectionpool.py b/src/network/connectionpool.py\r\nindex e599cdf..c5ba701 100644\r\n--- a/src/network/connectionpool.py\r\n+++ b/src/network/connectionpool.py\r\n@@ -93,7 +93,7 @@ class BMConnectionPool(object):\r\n del self.inboundConnections[connection.destination.host]\r\n except KeyError:\r\n pass\r\n- connection.close()\r\n+ connection.handle_close()\r\n \r\n def getListeningIP(self):\r\n if BMConfigParser().safeGet(\"bitmessagesettings\", \"onionhostname\").endswith(\".onion\"):\r\n\r\n```\n", "before_files": [{"content": "from ConfigParser import NoOptionError, NoSectionError\nimport errno\nimport socket\nimport time\nimport random\nimport re\n\nfrom bmconfigparser import BMConfigParser\nfrom debug import logger\nimport helper_bootstrap\nimport knownnodes\nfrom network.proxy import Proxy\nfrom network.tcp import TCPServer, Socks5BMConnection, Socks4aBMConnection, TCPConnection\nfrom network.udp import UDPSocket\nfrom network.connectionchooser import chooseConnection\nimport network.asyncore_pollchoose as asyncore\nimport protocol\nfrom singleton import Singleton\nimport state\nimport helper_random\n\n\n@Singleton\nclass BMConnectionPool(object):\n def __init__(self):\n asyncore.set_rates(\n BMConfigParser().safeGetInt(\"bitmessagesettings\", \"maxdownloadrate\"),\n BMConfigParser().safeGetInt(\"bitmessagesettings\", \"maxuploadrate\"))\n self.outboundConnections = {}\n self.inboundConnections = {}\n self.listeningSockets = {}\n self.udpSockets = {}\n self.streams = []\n self.lastSpawned = 0\n self.spawnWait = 2 \n self.bootstrapped = False\n\n def connectToStream(self, streamNumber):\n self.streams.append(streamNumber)\n\n def getConnectionByAddr(self, addr):\n if addr in self.inboundConnections:\n return self.inboundConnections[addr]\n try:\n if addr.host in self.inboundConnections:\n return self.inboundConnections[addr.host]\n except AttributeError:\n pass\n if addr in self.outboundConnections:\n return self.outboundConnections[addr]\n try:\n if addr.host in self.udpSockets:\n return self.udpSockets[addr.host]\n except AttributeError:\n pass\n raise KeyError\n\n def isAlreadyConnected(self, nodeid):\n for i in self.inboundConnections.values() + self.outboundConnections.values():\n try:\n if nodeid == i.nodeid:\n return True\n except AttributeError:\n pass\n return False\n\n def addConnection(self, connection):\n if isinstance(connection, UDPSocket):\n return\n if connection.isOutbound:\n self.outboundConnections[connection.destination] = connection\n else:\n if connection.destination.host in self.inboundConnections:\n 
self.inboundConnections[connection.destination] = connection\n else:\n self.inboundConnections[connection.destination.host] = connection\n\n def removeConnection(self, connection):\n if isinstance(connection, UDPSocket):\n del self.udpSockets[connection.listening.host]\n elif isinstance(connection, TCPServer):\n del self.listeningSockets[state.Peer(connection.destination.host, connection.destination.port)]\n elif connection.isOutbound:\n try:\n del self.outboundConnections[connection.destination]\n except KeyError:\n pass\n else:\n try:\n del self.inboundConnections[connection.destination]\n except KeyError:\n try:\n del self.inboundConnections[connection.destination.host]\n except KeyError:\n pass\n connection.close()\n\n def getListeningIP(self):\n if BMConfigParser().safeGet(\"bitmessagesettings\", \"onionhostname\").endswith(\".onion\"):\n host = BMConfigParser().safeGet(\"bitmessagesettings\", \"onionbindip\")\n else:\n host = '127.0.0.1'\n if BMConfigParser().safeGetBoolean(\"bitmessagesettings\", \"sockslisten\") or \\\n BMConfigParser().get(\"bitmessagesettings\", \"socksproxytype\") == \"none\":\n # python doesn't like bind + INADDR_ANY?\n #host = socket.INADDR_ANY\n host = BMConfigParser().get(\"network\", \"bind\")\n return host\n\n def startListening(self, bind=None):\n if bind is None:\n bind = self.getListeningIP()\n port = BMConfigParser().safeGetInt(\"bitmessagesettings\", \"port\")\n # correct port even if it changed\n ls = TCPServer(host=bind, port=port)\n self.listeningSockets[ls.destination] = ls\n\n def startUDPSocket(self, bind=None):\n if bind is None:\n host = self.getListeningIP()\n udpSocket = UDPSocket(host=host, announcing=True)\n else:\n if bind is False:\n udpSocket = UDPSocket(announcing=False)\n else:\n udpSocket = UDPSocket(host=bind, announcing=True)\n self.udpSockets[udpSocket.listening.host] = udpSocket\n\n def loop(self):\n # defaults to empty loop if outbound connections are maxed\n spawnConnections = False\n acceptConnections = True\n if BMConfigParser().safeGetBoolean('bitmessagesettings', 'dontconnect'):\n acceptConnections = False\n elif BMConfigParser().safeGetBoolean('bitmessagesettings', 'sendoutgoingconnections'):\n spawnConnections = True\n if BMConfigParser().get('bitmessagesettings', 'socksproxytype')[0:5] == 'SOCKS' and \\\n (not BMConfigParser().getboolean('bitmessagesettings', 'sockslisten') and \\\n \".onion\" not in BMConfigParser().get('bitmessagesettings', 'onionhostname')):\n acceptConnections = False\n\n if spawnConnections:\n if not knownnodes.knownNodesActual:\n helper_bootstrap.dns()\n if not self.bootstrapped:\n self.bootstrapped = True\n Proxy.proxy = (BMConfigParser().safeGet(\"bitmessagesettings\", \"sockshostname\"),\n BMConfigParser().safeGetInt(\"bitmessagesettings\", \"socksport\"))\n # TODO AUTH\n # TODO reset based on GUI settings changes\n try:\n if not BMConfigParser().get(\"network\", \"onionsocksproxytype\").startswith(\"SOCKS\"):\n raise NoOptionError\n Proxy.onionproxy = (BMConfigParser().get(\"network\", \"onionsockshostname\"),\n BMConfigParser().getint(\"network\", \"onionsocksport\"))\n except (NoOptionError, NoSectionError):\n Proxy.onionproxy = None\n established = sum(1 for c in self.outboundConnections.values() if (c.connected and c.fullyEstablished))\n pending = len(self.outboundConnections) - established\n if established < BMConfigParser().safeGetInt(\"bitmessagesettings\", \"maxoutboundconnections\"):\n for i in range(state.maximumNumberOfHalfOpenConnections - pending):\n try:\n chosen = 
chooseConnection(helper_random.randomchoice(self.streams))\n except ValueError:\n continue\n if chosen in self.outboundConnections:\n continue\n if chosen.host in self.inboundConnections:\n continue\n # don't connect to self\n if chosen in state.ownAddresses:\n continue\n \n #for c in self.outboundConnections:\n # if chosen == c.destination:\n # continue\n #for c in self.inboundConnections:\n # if chosen.host == c.destination.host:\n # continue\n try:\n if chosen.host.endswith(\".onion\") and Proxy.onionproxy is not None:\n if BMConfigParser().get(\"network\", \"onionsocksproxytype\") == \"SOCKS5\":\n self.addConnection(Socks5BMConnection(chosen))\n elif BMConfigParser().get(\"network\", \"onionsocksproxytype\") == \"SOCKS4a\":\n self.addConnection(Socks4aBMConnection(chosen))\n elif BMConfigParser().safeGet(\"bitmessagesettings\", \"socksproxytype\") == \"SOCKS5\":\n self.addConnection(Socks5BMConnection(chosen))\n elif BMConfigParser().safeGet(\"bitmessagesettings\", \"socksproxytype\") == \"SOCKS4a\":\n self.addConnection(Socks4aBMConnection(chosen))\n else:\n self.addConnection(TCPConnection(chosen))\n except socket.error as e:\n if e.errno == errno.ENETUNREACH:\n continue\n except (NoSectionError, NoOptionError):\n # shouldn't happen\n pass\n\n self.lastSpawned = time.time()\n else:\n for i in (\n self.inboundConnections.values() +\n self.outboundConnections.values()\n ):\n i.set_state(\"close\")\n # FIXME: rating will be increased after next connection\n i.handle_close()\n\n if acceptConnections:\n if not self.listeningSockets:\n if BMConfigParser().safeGet(\"network\", \"bind\") == '':\n self.startListening()\n else:\n for bind in re.sub(\"[^\\w.]+\", \" \", BMConfigParser().safeGet(\"network\", \"bind\")).split():\n self.startListening(bind)\n logger.info('Listening for incoming connections.')\n if not self.udpSockets:\n if BMConfigParser().safeGet(\"network\", \"bind\") == '':\n self.startUDPSocket()\n else:\n for bind in re.sub(\"[^\\w.]+\", \" \", BMConfigParser().safeGet(\"network\", \"bind\")).split():\n self.startUDPSocket(bind)\n self.startUDPSocket(False)\n logger.info('Starting UDP socket(s).')\n else:\n if self.listeningSockets:\n for i in self.listeningSockets.values():\n i.close_reason = \"Stopping listening\"\n i.accepting = i.connecting = i.connected = False\n logger.info('Stopped listening for incoming connections.')\n if self.udpSockets:\n for i in self.udpSockets.values():\n i.close_reason = \"Stopping UDP socket\"\n i.accepting = i.connecting = i.connected = False\n logger.info('Stopped udp sockets.')\n\n loopTime = float(self.spawnWait)\n if self.lastSpawned < time.time() - self.spawnWait:\n loopTime = 2.0\n asyncore.loop(timeout=loopTime, count=1000)\n\n reaper = []\n for i in self.inboundConnections.values() + self.outboundConnections.values():\n minTx = time.time() - 20\n if i.fullyEstablished:\n minTx -= 300 - 20\n if i.lastTx < minTx:\n if i.fullyEstablished:\n i.append_write_buf(protocol.CreatePacket('ping'))\n else:\n i.close_reason = \"Timeout (%is)\" % (time.time() - i.lastTx) \n i.set_state(\"close\")\n for i in self.inboundConnections.values() + self.outboundConnections.values() + self.listeningSockets.values() + self.udpSockets.values():\n if not (i.accepting or i.connecting or i.connected):\n reaper.append(i)\n else:\n try:\n if i.state == \"close\":\n reaper.append(i)\n except AttributeError:\n pass\n for i in reaper:\n self.removeConnection(i)\n", "path": "src/network/connectionpool.py"}], "after_files": [{"content": "from ConfigParser import 
NoOptionError, NoSectionError\nimport errno\nimport socket\nimport time\nimport random\nimport re\n\nfrom bmconfigparser import BMConfigParser\nfrom debug import logger\nimport helper_bootstrap\nimport knownnodes\nfrom network.proxy import Proxy\nfrom network.tcp import TCPServer, Socks5BMConnection, Socks4aBMConnection, TCPConnection\nfrom network.udp import UDPSocket\nfrom network.connectionchooser import chooseConnection\nimport network.asyncore_pollchoose as asyncore\nimport protocol\nfrom singleton import Singleton\nimport state\nimport helper_random\n\n\n@Singleton\nclass BMConnectionPool(object):\n def __init__(self):\n asyncore.set_rates(\n BMConfigParser().safeGetInt(\"bitmessagesettings\", \"maxdownloadrate\"),\n BMConfigParser().safeGetInt(\"bitmessagesettings\", \"maxuploadrate\"))\n self.outboundConnections = {}\n self.inboundConnections = {}\n self.listeningSockets = {}\n self.udpSockets = {}\n self.streams = []\n self.lastSpawned = 0\n self.spawnWait = 2 \n self.bootstrapped = False\n\n def connectToStream(self, streamNumber):\n self.streams.append(streamNumber)\n\n def getConnectionByAddr(self, addr):\n if addr in self.inboundConnections:\n return self.inboundConnections[addr]\n try:\n if addr.host in self.inboundConnections:\n return self.inboundConnections[addr.host]\n except AttributeError:\n pass\n if addr in self.outboundConnections:\n return self.outboundConnections[addr]\n try:\n if addr.host in self.udpSockets:\n return self.udpSockets[addr.host]\n except AttributeError:\n pass\n raise KeyError\n\n def isAlreadyConnected(self, nodeid):\n for i in self.inboundConnections.values() + self.outboundConnections.values():\n try:\n if nodeid == i.nodeid:\n return True\n except AttributeError:\n pass\n return False\n\n def addConnection(self, connection):\n if isinstance(connection, UDPSocket):\n return\n if connection.isOutbound:\n self.outboundConnections[connection.destination] = connection\n else:\n if connection.destination.host in self.inboundConnections:\n self.inboundConnections[connection.destination] = connection\n else:\n self.inboundConnections[connection.destination.host] = connection\n\n def removeConnection(self, connection):\n if isinstance(connection, UDPSocket):\n del self.udpSockets[connection.listening.host]\n elif isinstance(connection, TCPServer):\n del self.listeningSockets[state.Peer(connection.destination.host, connection.destination.port)]\n elif connection.isOutbound:\n try:\n del self.outboundConnections[connection.destination]\n except KeyError:\n pass\n else:\n try:\n del self.inboundConnections[connection.destination]\n except KeyError:\n try:\n del self.inboundConnections[connection.destination.host]\n except KeyError:\n pass\n connection.handle_close()\n\n def getListeningIP(self):\n if BMConfigParser().safeGet(\"bitmessagesettings\", \"onionhostname\").endswith(\".onion\"):\n host = BMConfigParser().safeGet(\"bitmessagesettings\", \"onionbindip\")\n else:\n host = '127.0.0.1'\n if BMConfigParser().safeGetBoolean(\"bitmessagesettings\", \"sockslisten\") or \\\n BMConfigParser().get(\"bitmessagesettings\", \"socksproxytype\") == \"none\":\n # python doesn't like bind + INADDR_ANY?\n #host = socket.INADDR_ANY\n host = BMConfigParser().get(\"network\", \"bind\")\n return host\n\n def startListening(self, bind=None):\n if bind is None:\n bind = self.getListeningIP()\n port = BMConfigParser().safeGetInt(\"bitmessagesettings\", \"port\")\n # correct port even if it changed\n ls = TCPServer(host=bind, port=port)\n 
self.listeningSockets[ls.destination] = ls\n\n def startUDPSocket(self, bind=None):\n if bind is None:\n host = self.getListeningIP()\n udpSocket = UDPSocket(host=host, announcing=True)\n else:\n if bind is False:\n udpSocket = UDPSocket(announcing=False)\n else:\n udpSocket = UDPSocket(host=bind, announcing=True)\n self.udpSockets[udpSocket.listening.host] = udpSocket\n\n def loop(self):\n # defaults to empty loop if outbound connections are maxed\n spawnConnections = False\n acceptConnections = True\n if BMConfigParser().safeGetBoolean('bitmessagesettings', 'dontconnect'):\n acceptConnections = False\n elif BMConfigParser().safeGetBoolean('bitmessagesettings', 'sendoutgoingconnections'):\n spawnConnections = True\n if BMConfigParser().get('bitmessagesettings', 'socksproxytype')[0:5] == 'SOCKS' and \\\n (not BMConfigParser().getboolean('bitmessagesettings', 'sockslisten') and \\\n \".onion\" not in BMConfigParser().get('bitmessagesettings', 'onionhostname')):\n acceptConnections = False\n\n if spawnConnections:\n if not knownnodes.knownNodesActual:\n helper_bootstrap.dns()\n if not self.bootstrapped:\n self.bootstrapped = True\n Proxy.proxy = (BMConfigParser().safeGet(\"bitmessagesettings\", \"sockshostname\"),\n BMConfigParser().safeGetInt(\"bitmessagesettings\", \"socksport\"))\n # TODO AUTH\n # TODO reset based on GUI settings changes\n try:\n if not BMConfigParser().get(\"network\", \"onionsocksproxytype\").startswith(\"SOCKS\"):\n raise NoOptionError\n Proxy.onionproxy = (BMConfigParser().get(\"network\", \"onionsockshostname\"),\n BMConfigParser().getint(\"network\", \"onionsocksport\"))\n except (NoOptionError, NoSectionError):\n Proxy.onionproxy = None\n established = sum(1 for c in self.outboundConnections.values() if (c.connected and c.fullyEstablished))\n pending = len(self.outboundConnections) - established\n if established < BMConfigParser().safeGetInt(\"bitmessagesettings\", \"maxoutboundconnections\"):\n for i in range(state.maximumNumberOfHalfOpenConnections - pending):\n try:\n chosen = chooseConnection(helper_random.randomchoice(self.streams))\n except ValueError:\n continue\n if chosen in self.outboundConnections:\n continue\n if chosen.host in self.inboundConnections:\n continue\n # don't connect to self\n if chosen in state.ownAddresses:\n continue\n \n #for c in self.outboundConnections:\n # if chosen == c.destination:\n # continue\n #for c in self.inboundConnections:\n # if chosen.host == c.destination.host:\n # continue\n try:\n if chosen.host.endswith(\".onion\") and Proxy.onionproxy is not None:\n if BMConfigParser().get(\"network\", \"onionsocksproxytype\") == \"SOCKS5\":\n self.addConnection(Socks5BMConnection(chosen))\n elif BMConfigParser().get(\"network\", \"onionsocksproxytype\") == \"SOCKS4a\":\n self.addConnection(Socks4aBMConnection(chosen))\n elif BMConfigParser().safeGet(\"bitmessagesettings\", \"socksproxytype\") == \"SOCKS5\":\n self.addConnection(Socks5BMConnection(chosen))\n elif BMConfigParser().safeGet(\"bitmessagesettings\", \"socksproxytype\") == \"SOCKS4a\":\n self.addConnection(Socks4aBMConnection(chosen))\n else:\n self.addConnection(TCPConnection(chosen))\n except socket.error as e:\n if e.errno == errno.ENETUNREACH:\n continue\n except (NoSectionError, NoOptionError):\n # shouldn't happen\n pass\n\n self.lastSpawned = time.time()\n else:\n for i in (\n self.inboundConnections.values() +\n self.outboundConnections.values()\n ):\n # FIXME: rating will be increased after next connection\n i.handle_close()\n\n if acceptConnections:\n if not 
self.listeningSockets:\n if BMConfigParser().safeGet(\"network\", \"bind\") == '':\n self.startListening()\n else:\n for bind in re.sub(\"[^\\w.]+\", \" \", BMConfigParser().safeGet(\"network\", \"bind\")).split():\n self.startListening(bind)\n logger.info('Listening for incoming connections.')\n if not self.udpSockets:\n if BMConfigParser().safeGet(\"network\", \"bind\") == '':\n self.startUDPSocket()\n else:\n for bind in re.sub(\"[^\\w.]+\", \" \", BMConfigParser().safeGet(\"network\", \"bind\")).split():\n self.startUDPSocket(bind)\n self.startUDPSocket(False)\n logger.info('Starting UDP socket(s).')\n else:\n if self.listeningSockets:\n for i in self.listeningSockets.values():\n i.close_reason = \"Stopping listening\"\n i.accepting = i.connecting = i.connected = False\n logger.info('Stopped listening for incoming connections.')\n if self.udpSockets:\n for i in self.udpSockets.values():\n i.close_reason = \"Stopping UDP socket\"\n i.accepting = i.connecting = i.connected = False\n logger.info('Stopped udp sockets.')\n\n loopTime = float(self.spawnWait)\n if self.lastSpawned < time.time() - self.spawnWait:\n loopTime = 2.0\n asyncore.loop(timeout=loopTime, count=1000)\n\n reaper = []\n for i in self.inboundConnections.values() + self.outboundConnections.values():\n minTx = time.time() - 20\n if i.fullyEstablished:\n minTx -= 300 - 20\n if i.lastTx < minTx:\n if i.fullyEstablished:\n i.append_write_buf(protocol.CreatePacket('ping'))\n else:\n i.close_reason = \"Timeout (%is)\" % (time.time() - i.lastTx) \n i.set_state(\"close\")\n for i in self.inboundConnections.values() + self.outboundConnections.values() + self.listeningSockets.values() + self.udpSockets.values():\n if not (i.accepting or i.connecting or i.connected):\n reaper.append(i)\n else:\n try:\n if i.state == \"close\":\n reaper.append(i)\n except AttributeError:\n pass\n for i in reaper:\n self.removeConnection(i)\n", "path": "src/network/connectionpool.py"}]} | 3,513 | 161 |
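
A brief note on the PyBitmessage record above: the core of the golden diff swaps a raw `connection.close()` for `connection.handle_close()` when a connection is removed from the pool. The excerpt does not show PyBitmessage's `handle_close()` implementation, but in asyncore-style dispatchers (which `asyncore_pollchoose` mirrors) the usual convention is that `close()` only tears down the socket, while an overridden `handle_close()` performs the application-level cleanup exactly once and then delegates to `close()`. The class below is a generic sketch of that convention, not PyBitmessage's actual code; the `pool.discard` bookkeeping is an invented stand-in for whatever per-peer state the real handler maintains.

```python
class PooledConnection(object):
    """Generic sketch of the asyncore-style close()/handle_close() split."""

    def __init__(self, pool, destination):
        self.pool = pool
        self.destination = destination
        self.connected = True

    def close(self):
        # Raw socket teardown only; no pool bookkeeping happens here.
        self.connected = False

    def handle_close(self):
        # Cleanup belongs in the event handler, which then delegates to
        # close(); bypassing it can leave stale per-peer state behind.
        self.pool.discard(self.destination)
        self.close()


pool = {("exampleexampleexam.onion", 8444)}
conn = PooledConnection(pool, ("exampleexampleexam.onion", 8444))
conn.handle_close()
assert conn.destination not in pool and not conn.connected
```

If removal code calls `close()` directly, any bookkeeping implemented in `handle_close()` never runs, which would be consistent with the duplicate connections to onion peers described in the issue.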
gh_patches_debug_48679 | rasdani/github-patches | git_diff | ethereum__web3.py-2659 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
protobuf dependency compatibility
* Python: 3.5
* OS: osx
* `import web3` output
```
ContextualVersionConflict
```
### What was wrong?
The [protobuf](https://github.com/ethereum/web3.py/pull/1493) compatibility pin needs updating. I needed to downgrade protobuf to get it working. The dependency is currently pinned to versions below 4 (<4), but protobuf's latest release is 4.21.6.
### How can it be fixed?
The newest version of protobuf (https://pypi.org/project/protobuf/) should be made compatible.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 from setuptools import (
3 find_packages,
4 setup,
5 )
6
7 extras_require = {
8 "tester": [
9 "eth-tester[py-evm]==v0.6.0-beta.6",
10 "py-geth>=3.9.1,<4",
11 ],
12 "linter": [
13 "flake8==3.8.3",
14 "isort>=4.2.15,<4.3.5",
15 "mypy==0.910",
16 "types-setuptools>=57.4.4,<58",
17 "types-requests>=2.26.1,<3",
18 "types-protobuf==3.19.13",
19 ],
20 "docs": [
21 "mock",
22 "sphinx-better-theme>=0.1.4",
23 "click>=5.1",
24 "configparser==3.5.0",
25 "contextlib2>=0.5.4",
26 "py-geth>=3.9.1,<4",
27 "py-solc>=0.4.0",
28 "pytest>=4.4.0,<5.0.0",
29 "sphinx>=3.0,<4",
30 "sphinx_rtd_theme>=0.1.9",
31 "toposort>=1.4",
32 "towncrier==18.5.0",
33 "urllib3",
34 "wheel",
35 "Jinja2<=3.0.3", # Jinja v3.1.0 dropped support for python 3.6
36 ],
37 "dev": [
38 "bumpversion",
39 "flaky>=3.7.0,<4",
40 "hypothesis>=3.31.2,<6",
41 "pytest>=4.4.0,<5.0.0",
42 "pytest-asyncio>=0.10.0,<0.11",
43 "pytest-mock>=1.10,<2",
44 "pytest-pythonpath>=0.3",
45 "pytest-watch>=4.2,<5",
46 "pytest-xdist>=1.29,<2",
47 "setuptools>=38.6.0",
48 "tox>=1.8.0",
49 "tqdm>4.32,<5",
50 "twine>=1.13,<2",
51 "pluggy==0.13.1",
52 "when-changed>=0.3.0,<0.4",
53 ],
54 }
55
56 extras_require["dev"] = (
57 extras_require["tester"]
58 + extras_require["linter"]
59 + extras_require["docs"]
60 + extras_require["dev"]
61 )
62
63 with open("./README.md") as readme:
64 long_description = readme.read()
65
66 setup(
67 name="web3",
68 # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.
69 version="5.31.0",
70 description="""Web3.py""",
71 long_description_content_type="text/markdown",
72 long_description=long_description,
73 author="Piper Merriam",
74 author_email="[email protected]",
75 url="https://github.com/ethereum/web3.py",
76 include_package_data=True,
77 install_requires=[
78 "aiohttp>=3.7.4.post0,<4",
79 "eth-abi>=2.0.0b6,<3.0.0",
80 "eth-account>=0.5.9,<0.6.0",
81 "eth-hash[pycryptodome]>=0.2.0,<1.0.0",
82 # eth-account allows too broad of an eth-rlp dependency.
83 # This eth-rlp pin can be removed once it gets tightened up in eth-account
84 "eth-rlp<0.3",
85 "eth-typing>=2.0.0,<3.0.0",
86 "eth-utils>=1.9.5,<2.0.0",
87 "hexbytes>=0.1.0,<1.0.0",
88 "ipfshttpclient==0.8.0a2",
89 "jsonschema>=3.2.0,<5",
90 "lru-dict>=1.1.6,<2.0.0",
91 "protobuf>=3.10.0,<4",
92 "pywin32>=223;platform_system=='Windows'",
93 "requests>=2.16.0,<3.0.0",
94 # remove typing_extensions after python_requires>=3.8, see web3._utils.compat
95 "typing-extensions>=3.7.4.1,<5;python_version<'3.8'",
96 "websockets>=9.1,<10",
97 ],
98 python_requires=">=3.6,<4",
99 extras_require=extras_require,
100 py_modules=["web3", "ens", "ethpm"],
101 entry_points={"pytest11": ["pytest_ethereum = web3.tools.pytest_ethereum.plugins"]},
102 license="MIT",
103 zip_safe=False,
104 keywords="ethereum",
105 packages=find_packages(exclude=["tests", "tests.*"]),
106 package_data={"web3": ["py.typed"]},
107 classifiers=[
108 "Development Status :: 5 - Production/Stable",
109 "Intended Audience :: Developers",
110 "License :: OSI Approved :: MIT License",
111 "Natural Language :: English",
112 "Programming Language :: Python :: 3",
113 "Programming Language :: Python :: 3.6",
114 "Programming Language :: Python :: 3.7",
115 "Programming Language :: Python :: 3.8",
116 "Programming Language :: Python :: 3.9",
117 ],
118 )
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -88,7 +88,7 @@
"ipfshttpclient==0.8.0a2",
"jsonschema>=3.2.0,<5",
"lru-dict>=1.1.6,<2.0.0",
- "protobuf>=3.10.0,<4",
+ "protobuf==3.19.4",
"pywin32>=223;platform_system=='Windows'",
"requests>=2.16.0,<3.0.0",
# remove typing_extensions after python_requires>=3.8, see web3._utils.compat
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -88,7 +88,7 @@\n \"ipfshttpclient==0.8.0a2\",\n \"jsonschema>=3.2.0,<5\",\n \"lru-dict>=1.1.6,<2.0.0\",\n- \"protobuf>=3.10.0,<4\",\n+ \"protobuf==3.19.4\",\n \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0,<3.0.0\",\n # remove typing_extensions after python_requires>=3.8, see web3._utils.compat\n", "issue": "protobuf dependency compatibility\n* Python: 3.5\r\n* OS: osx\r\n* `import web3` output\r\n\r\n```\r\nContextualVersionConflict\r\n```\r\n\r\n### What was wrong?\r\n\r\n[protobuf](https://github.com/ethereum/web3.py/pull/1493) compatibility needs updating. Needed to downgrade protobuf to get it working. Version currently needs to be >4 but protobuf's latest version is 4.21.6\r\n\r\n### How can it be fixed?\r\n\r\nThe newest version of protobuf should be compatible https://pypi.org/project/protobuf/\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n \"tester\": [\n \"eth-tester[py-evm]==v0.6.0-beta.6\",\n \"py-geth>=3.9.1,<4\",\n ],\n \"linter\": [\n \"flake8==3.8.3\",\n \"isort>=4.2.15,<4.3.5\",\n \"mypy==0.910\",\n \"types-setuptools>=57.4.4,<58\",\n \"types-requests>=2.26.1,<3\",\n \"types-protobuf==3.19.13\",\n ],\n \"docs\": [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n \"contextlib2>=0.5.4\",\n \"py-geth>=3.9.1,<4\",\n \"py-solc>=0.4.0\",\n \"pytest>=4.4.0,<5.0.0\",\n \"sphinx>=3.0,<4\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"towncrier==18.5.0\",\n \"urllib3\",\n \"wheel\",\n \"Jinja2<=3.0.3\", # Jinja v3.1.0 dropped support for python 3.6\n ],\n \"dev\": [\n \"bumpversion\",\n \"flaky>=3.7.0,<4\",\n \"hypothesis>=3.31.2,<6\",\n \"pytest>=4.4.0,<5.0.0\",\n \"pytest-asyncio>=0.10.0,<0.11\",\n \"pytest-mock>=1.10,<2\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch>=4.2,<5\",\n \"pytest-xdist>=1.29,<2\",\n \"setuptools>=38.6.0\",\n \"tox>=1.8.0\",\n \"tqdm>4.32,<5\",\n \"twine>=1.13,<2\",\n \"pluggy==0.13.1\",\n \"when-changed>=0.3.0,<0.4\",\n ],\n}\n\nextras_require[\"dev\"] = (\n extras_require[\"tester\"]\n + extras_require[\"linter\"]\n + extras_require[\"docs\"]\n + extras_require[\"dev\"]\n)\n\nwith open(\"./README.md\") as readme:\n long_description = readme.read()\n\nsetup(\n name=\"web3\",\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version=\"5.31.0\",\n description=\"\"\"Web3.py\"\"\",\n long_description_content_type=\"text/markdown\",\n long_description=long_description,\n author=\"Piper Merriam\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ethereum/web3.py\",\n include_package_data=True,\n install_requires=[\n \"aiohttp>=3.7.4.post0,<4\",\n \"eth-abi>=2.0.0b6,<3.0.0\",\n \"eth-account>=0.5.9,<0.6.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n # eth-account allows too broad of an eth-rlp dependency.\n # This eth-rlp pin can be removed once it gets tightened up in eth-account\n \"eth-rlp<0.3\",\n \"eth-typing>=2.0.0,<3.0.0\",\n \"eth-utils>=1.9.5,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"ipfshttpclient==0.8.0a2\",\n \"jsonschema>=3.2.0,<5\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"protobuf>=3.10.0,<4\",\n \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0,<3.0.0\",\n # remove typing_extensions after python_requires>=3.8, see web3._utils.compat\n \"typing-extensions>=3.7.4.1,<5;python_version<'3.8'\",\n \"websockets>=9.1,<10\",\n ],\n python_requires=\">=3.6,<4\",\n extras_require=extras_require,\n py_modules=[\"web3\", \"ens\", \"ethpm\"],\n entry_points={\"pytest11\": [\"pytest_ethereum = web3.tools.pytest_ethereum.plugins\"]},\n license=\"MIT\",\n zip_safe=False,\n keywords=\"ethereum\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"web3\": [\"py.typed\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n \"tester\": [\n \"eth-tester[py-evm]==v0.6.0-beta.6\",\n \"py-geth>=3.9.1,<4\",\n ],\n \"linter\": [\n \"flake8==3.8.3\",\n \"isort>=4.2.15,<4.3.5\",\n \"mypy==0.910\",\n \"types-setuptools>=57.4.4,<58\",\n \"types-requests>=2.26.1,<3\",\n \"types-protobuf==3.19.13\",\n ],\n \"docs\": [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n \"contextlib2>=0.5.4\",\n \"py-geth>=3.9.1,<4\",\n \"py-solc>=0.4.0\",\n \"pytest>=4.4.0,<5.0.0\",\n \"sphinx>=3.0,<4\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"towncrier==18.5.0\",\n \"urllib3\",\n \"wheel\",\n \"Jinja2<=3.0.3\", # Jinja v3.1.0 dropped support for python 3.6\n ],\n \"dev\": [\n \"bumpversion\",\n \"flaky>=3.7.0,<4\",\n \"hypothesis>=3.31.2,<6\",\n \"pytest>=4.4.0,<5.0.0\",\n \"pytest-asyncio>=0.10.0,<0.11\",\n \"pytest-mock>=1.10,<2\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch>=4.2,<5\",\n \"pytest-xdist>=1.29,<2\",\n \"setuptools>=38.6.0\",\n \"tox>=1.8.0\",\n \"tqdm>4.32,<5\",\n \"twine>=1.13,<2\",\n \"pluggy==0.13.1\",\n \"when-changed>=0.3.0,<0.4\",\n ],\n}\n\nextras_require[\"dev\"] = (\n extras_require[\"tester\"]\n + extras_require[\"linter\"]\n + extras_require[\"docs\"]\n + extras_require[\"dev\"]\n)\n\nwith open(\"./README.md\") as readme:\n long_description = readme.read()\n\nsetup(\n name=\"web3\",\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version=\"5.31.0\",\n description=\"\"\"Web3.py\"\"\",\n long_description_content_type=\"text/markdown\",\n long_description=long_description,\n author=\"Piper Merriam\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ethereum/web3.py\",\n include_package_data=True,\n install_requires=[\n \"aiohttp>=3.7.4.post0,<4\",\n \"eth-abi>=2.0.0b6,<3.0.0\",\n \"eth-account>=0.5.9,<0.6.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n # eth-account allows too broad of an eth-rlp dependency.\n # This eth-rlp pin can be removed once it gets tightened up in eth-account\n \"eth-rlp<0.3\",\n \"eth-typing>=2.0.0,<3.0.0\",\n \"eth-utils>=1.9.5,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"ipfshttpclient==0.8.0a2\",\n \"jsonschema>=3.2.0,<5\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"protobuf==3.19.4\",\n \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0,<3.0.0\",\n # remove typing_extensions after python_requires>=3.8, see web3._utils.compat\n \"typing-extensions>=3.7.4.1,<5;python_version<'3.8'\",\n \"websockets>=9.1,<10\",\n ],\n python_requires=\">=3.6,<4\",\n extras_require=extras_require,\n py_modules=[\"web3\", \"ens\", \"ethpm\"],\n entry_points={\"pytest11\": [\"pytest_ethereum = web3.tools.pytest_ethereum.plugins\"]},\n license=\"MIT\",\n zip_safe=False,\n keywords=\"ethereum\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"web3\": [\"py.typed\"]},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n)\n", "path": "setup.py"}]} | 1,857 | 158 |
gh_patches_debug_37097 | rasdani/github-patches | git_diff | AUTOMATIC1111__stable-diffusion-webui-12975 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature Request]: Where is the save style button?
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do?

Is it possible to bring back the old save-style implementation as well?
Not being able to save the currently typed prompt is very troublesome.
Why do we have to open the edit screen and copy/paste the prompt?
### Proposed workflow
Restore old implementation of save styles button
### Additional information
_No response_
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `modules/ui_prompt_styles.py`
Content:
```
1 import gradio as gr
2
3 from modules import shared, ui_common, ui_components, styles
4
5 styles_edit_symbol = '\U0001f58c\uFE0F' # 🖌️
6 styles_materialize_symbol = '\U0001f4cb' # 📋
7
8
9 def select_style(name):
10 style = shared.prompt_styles.styles.get(name)
11 existing = style is not None
12 empty = not name
13
14 prompt = style.prompt if style else gr.update()
15 negative_prompt = style.negative_prompt if style else gr.update()
16
17 return prompt, negative_prompt, gr.update(visible=existing), gr.update(visible=not empty)
18
19
20 def save_style(name, prompt, negative_prompt):
21 if not name:
22 return gr.update(visible=False)
23
24 style = styles.PromptStyle(name, prompt, negative_prompt)
25 shared.prompt_styles.styles[style.name] = style
26 shared.prompt_styles.save_styles(shared.styles_filename)
27
28 return gr.update(visible=True)
29
30
31 def delete_style(name):
32 if name == "":
33 return
34
35 shared.prompt_styles.styles.pop(name, None)
36 shared.prompt_styles.save_styles(shared.styles_filename)
37
38 return '', '', ''
39
40
41 def materialize_styles(prompt, negative_prompt, styles):
42 prompt = shared.prompt_styles.apply_styles_to_prompt(prompt, styles)
43 negative_prompt = shared.prompt_styles.apply_negative_styles_to_prompt(negative_prompt, styles)
44
45 return [gr.Textbox.update(value=prompt), gr.Textbox.update(value=negative_prompt), gr.Dropdown.update(value=[])]
46
47
48 def refresh_styles():
49 return gr.update(choices=list(shared.prompt_styles.styles)), gr.update(choices=list(shared.prompt_styles.styles))
50
51
52 class UiPromptStyles:
53 def __init__(self, tabname, main_ui_prompt, main_ui_negative_prompt):
54 self.tabname = tabname
55
56 with gr.Row(elem_id=f"{tabname}_styles_row"):
57 self.dropdown = gr.Dropdown(label="Styles", show_label=False, elem_id=f"{tabname}_styles", choices=list(shared.prompt_styles.styles), value=[], multiselect=True, tooltip="Styles")
58 edit_button = ui_components.ToolButton(value=styles_edit_symbol, elem_id=f"{tabname}_styles_edit_button", tooltip="Edit styles")
59
60 with gr.Box(elem_id=f"{tabname}_styles_dialog", elem_classes="popup-dialog") as styles_dialog:
61 with gr.Row():
62 self.selection = gr.Dropdown(label="Styles", elem_id=f"{tabname}_styles_edit_select", choices=list(shared.prompt_styles.styles), value=[], allow_custom_value=True, info="Styles allow you to add custom text to prompt. Use the {prompt} token in style text, and it will be replaced with user's prompt when applying style. Otherwise, style's text will be added to the end of the prompt.")
63 ui_common.create_refresh_button([self.dropdown, self.selection], shared.prompt_styles.reload, lambda: {"choices": list(shared.prompt_styles.styles)}, f"refresh_{tabname}_styles")
64 self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f"{tabname}_style_apply", tooltip="Apply all selected styles from the style selction dropdown in main UI to the prompt.")
65
66 with gr.Row():
67 self.prompt = gr.Textbox(label="Prompt", show_label=True, elem_id=f"{tabname}_edit_style_prompt", lines=3)
68
69 with gr.Row():
70 self.neg_prompt = gr.Textbox(label="Negative prompt", show_label=True, elem_id=f"{tabname}_edit_style_neg_prompt", lines=3)
71
72 with gr.Row():
73 self.save = gr.Button('Save', variant='primary', elem_id=f'{tabname}_edit_style_save', visible=False)
74 self.delete = gr.Button('Delete', variant='primary', elem_id=f'{tabname}_edit_style_delete', visible=False)
75 self.close = gr.Button('Close', variant='secondary', elem_id=f'{tabname}_edit_style_close')
76
77 self.selection.change(
78 fn=select_style,
79 inputs=[self.selection],
80 outputs=[self.prompt, self.neg_prompt, self.delete, self.save],
81 show_progress=False,
82 )
83
84 self.save.click(
85 fn=save_style,
86 inputs=[self.selection, self.prompt, self.neg_prompt],
87 outputs=[self.delete],
88 show_progress=False,
89 ).then(refresh_styles, outputs=[self.dropdown, self.selection], show_progress=False)
90
91 self.delete.click(
92 fn=delete_style,
93 _js='function(name){ if(name == "") return ""; return confirm("Delete style " + name + "?") ? name : ""; }',
94 inputs=[self.selection],
95 outputs=[self.selection, self.prompt, self.neg_prompt],
96 show_progress=False,
97 ).then(refresh_styles, outputs=[self.dropdown, self.selection], show_progress=False)
98
99 self.materialize.click(
100 fn=materialize_styles,
101 inputs=[main_ui_prompt, main_ui_negative_prompt, self.dropdown],
102 outputs=[main_ui_prompt, main_ui_negative_prompt, self.dropdown],
103 show_progress=False,
104 ).then(fn=None, _js="function(){update_"+tabname+"_tokens(); closePopup();}", show_progress=False)
105
106 ui_common.setup_dialog(button_show=edit_button, dialog=styles_dialog, button_close=self.close)
107
108
109
110
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/modules/ui_prompt_styles.py b/modules/ui_prompt_styles.py
--- a/modules/ui_prompt_styles.py
+++ b/modules/ui_prompt_styles.py
@@ -4,6 +4,7 @@
styles_edit_symbol = '\U0001f58c\uFE0F' # 🖌️
styles_materialize_symbol = '\U0001f4cb' # 📋
+styles_copy_symbol = '\U0001f4dd' # 📝
def select_style(name):
@@ -62,6 +63,7 @@
self.selection = gr.Dropdown(label="Styles", elem_id=f"{tabname}_styles_edit_select", choices=list(shared.prompt_styles.styles), value=[], allow_custom_value=True, info="Styles allow you to add custom text to prompt. Use the {prompt} token in style text, and it will be replaced with user's prompt when applying style. Otherwise, style's text will be added to the end of the prompt.")
ui_common.create_refresh_button([self.dropdown, self.selection], shared.prompt_styles.reload, lambda: {"choices": list(shared.prompt_styles.styles)}, f"refresh_{tabname}_styles")
self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f"{tabname}_style_apply", tooltip="Apply all selected styles from the style selction dropdown in main UI to the prompt.")
+ self.copy = ui_components.ToolButton(value=styles_copy_symbol, elem_id=f"{tabname}_style_copy", tooltip="Copy main UI prompt to style.")
with gr.Row():
self.prompt = gr.Textbox(label="Prompt", show_label=True, elem_id=f"{tabname}_edit_style_prompt", lines=3)
@@ -103,6 +105,13 @@
show_progress=False,
).then(fn=None, _js="function(){update_"+tabname+"_tokens(); closePopup();}", show_progress=False)
+ self.copy.click(
+ fn=lambda p, n: (p, n),
+ inputs=[main_ui_prompt, main_ui_negative_prompt],
+ outputs=[self.prompt, self.neg_prompt],
+ show_progress=False,
+ )
+
ui_common.setup_dialog(button_show=edit_button, dialog=styles_dialog, button_close=self.close)
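
The diff above adds a copy tool button whose click handler simply forwards the main-UI prompt and negative prompt into the style editor's textboxes. A minimal standalone sketch of the same Gradio pattern follows, assuming a standard Gradio 3.x installation; the component names are illustrative assumptions, not the webui's actual objects:

```python
# Minimal sketch: a button click that copies two textbox values into two other textboxes,
# mirroring the `fn=lambda p, n: (p, n)` wiring used in the patch above.
import gradio as gr

with gr.Blocks() as demo:
    main_prompt = gr.Textbox(label="Prompt")
    main_negative = gr.Textbox(label="Negative prompt")
    style_prompt = gr.Textbox(label="Style prompt")
    style_negative = gr.Textbox(label="Style negative prompt")
    copy_btn = gr.Button("Copy to style")

    # The identity function forwards the two input values straight to the two outputs.
    copy_btn.click(
        fn=lambda p, n: (p, n),
        inputs=[main_prompt, main_negative],
        outputs=[style_prompt, style_negative],
    )

if __name__ == "__main__":
    demo.launch()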
| {"golden_diff": "diff --git a/modules/ui_prompt_styles.py b/modules/ui_prompt_styles.py\n--- a/modules/ui_prompt_styles.py\n+++ b/modules/ui_prompt_styles.py\n@@ -4,6 +4,7 @@\n \r\n styles_edit_symbol = '\\U0001f58c\\uFE0F' # \ud83d\udd8c\ufe0f\r\n styles_materialize_symbol = '\\U0001f4cb' # \ud83d\udccb\r\n+styles_copy_symbol = '\\U0001f4dd' # \ud83d\udcdd\r\n \r\n \r\n def select_style(name):\r\n@@ -62,6 +63,7 @@\n self.selection = gr.Dropdown(label=\"Styles\", elem_id=f\"{tabname}_styles_edit_select\", choices=list(shared.prompt_styles.styles), value=[], allow_custom_value=True, info=\"Styles allow you to add custom text to prompt. Use the {prompt} token in style text, and it will be replaced with user's prompt when applying style. Otherwise, style's text will be added to the end of the prompt.\")\r\n ui_common.create_refresh_button([self.dropdown, self.selection], shared.prompt_styles.reload, lambda: {\"choices\": list(shared.prompt_styles.styles)}, f\"refresh_{tabname}_styles\")\r\n self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f\"{tabname}_style_apply\", tooltip=\"Apply all selected styles from the style selction dropdown in main UI to the prompt.\")\r\n+ self.copy = ui_components.ToolButton(value=styles_copy_symbol, elem_id=f\"{tabname}_style_copy\", tooltip=\"Copy main UI prompt to style.\")\r\n \r\n with gr.Row():\r\n self.prompt = gr.Textbox(label=\"Prompt\", show_label=True, elem_id=f\"{tabname}_edit_style_prompt\", lines=3)\r\n@@ -103,6 +105,13 @@\n show_progress=False,\r\n ).then(fn=None, _js=\"function(){update_\"+tabname+\"_tokens(); closePopup();}\", show_progress=False)\r\n \r\n+ self.copy.click(\r\n+ fn=lambda p, n: (p, n),\r\n+ inputs=[main_ui_prompt, main_ui_negative_prompt],\r\n+ outputs=[self.prompt, self.neg_prompt],\r\n+ show_progress=False,\r\n+ )\r\n+\r\n ui_common.setup_dialog(button_show=edit_button, dialog=styles_dialog, button_close=self.close)\n", "issue": "[Feature Request]: Where is the save style button?\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What would your feature do ?\n\nIs it possible to make the old implementation of save style as well?\r\nNot being able to save the currently typed prompt is very troublesome.\r\nWhy do we have to open the edit screen and copy/paste the prompt?\n\n### Proposed workflow\n\nRestore old implementation of save styles button\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "import gradio as gr\r\n\r\nfrom modules import shared, ui_common, ui_components, styles\r\n\r\nstyles_edit_symbol = '\\U0001f58c\\uFE0F' # \ud83d\udd8c\ufe0f\r\nstyles_materialize_symbol = '\\U0001f4cb' # \ud83d\udccb\r\n\r\n\r\ndef select_style(name):\r\n style = shared.prompt_styles.styles.get(name)\r\n existing = style is not None\r\n empty = not name\r\n\r\n prompt = style.prompt if style else gr.update()\r\n negative_prompt = style.negative_prompt if style else gr.update()\r\n\r\n return prompt, negative_prompt, gr.update(visible=existing), gr.update(visible=not empty)\r\n\r\n\r\ndef save_style(name, prompt, negative_prompt):\r\n if not name:\r\n return gr.update(visible=False)\r\n\r\n style = styles.PromptStyle(name, prompt, negative_prompt)\r\n shared.prompt_styles.styles[style.name] = style\r\n shared.prompt_styles.save_styles(shared.styles_filename)\r\n\r\n return gr.update(visible=True)\r\n\r\n\r\ndef delete_style(name):\r\n if name == \"\":\r\n return\r\n\r\n 
shared.prompt_styles.styles.pop(name, None)\r\n shared.prompt_styles.save_styles(shared.styles_filename)\r\n\r\n return '', '', ''\r\n\r\n\r\ndef materialize_styles(prompt, negative_prompt, styles):\r\n prompt = shared.prompt_styles.apply_styles_to_prompt(prompt, styles)\r\n negative_prompt = shared.prompt_styles.apply_negative_styles_to_prompt(negative_prompt, styles)\r\n\r\n return [gr.Textbox.update(value=prompt), gr.Textbox.update(value=negative_prompt), gr.Dropdown.update(value=[])]\r\n\r\n\r\ndef refresh_styles():\r\n return gr.update(choices=list(shared.prompt_styles.styles)), gr.update(choices=list(shared.prompt_styles.styles))\r\n\r\n\r\nclass UiPromptStyles:\r\n def __init__(self, tabname, main_ui_prompt, main_ui_negative_prompt):\r\n self.tabname = tabname\r\n\r\n with gr.Row(elem_id=f\"{tabname}_styles_row\"):\r\n self.dropdown = gr.Dropdown(label=\"Styles\", show_label=False, elem_id=f\"{tabname}_styles\", choices=list(shared.prompt_styles.styles), value=[], multiselect=True, tooltip=\"Styles\")\r\n edit_button = ui_components.ToolButton(value=styles_edit_symbol, elem_id=f\"{tabname}_styles_edit_button\", tooltip=\"Edit styles\")\r\n\r\n with gr.Box(elem_id=f\"{tabname}_styles_dialog\", elem_classes=\"popup-dialog\") as styles_dialog:\r\n with gr.Row():\r\n self.selection = gr.Dropdown(label=\"Styles\", elem_id=f\"{tabname}_styles_edit_select\", choices=list(shared.prompt_styles.styles), value=[], allow_custom_value=True, info=\"Styles allow you to add custom text to prompt. Use the {prompt} token in style text, and it will be replaced with user's prompt when applying style. Otherwise, style's text will be added to the end of the prompt.\")\r\n ui_common.create_refresh_button([self.dropdown, self.selection], shared.prompt_styles.reload, lambda: {\"choices\": list(shared.prompt_styles.styles)}, f\"refresh_{tabname}_styles\")\r\n self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f\"{tabname}_style_apply\", tooltip=\"Apply all selected styles from the style selction dropdown in main UI to the prompt.\")\r\n\r\n with gr.Row():\r\n self.prompt = gr.Textbox(label=\"Prompt\", show_label=True, elem_id=f\"{tabname}_edit_style_prompt\", lines=3)\r\n\r\n with gr.Row():\r\n self.neg_prompt = gr.Textbox(label=\"Negative prompt\", show_label=True, elem_id=f\"{tabname}_edit_style_neg_prompt\", lines=3)\r\n\r\n with gr.Row():\r\n self.save = gr.Button('Save', variant='primary', elem_id=f'{tabname}_edit_style_save', visible=False)\r\n self.delete = gr.Button('Delete', variant='primary', elem_id=f'{tabname}_edit_style_delete', visible=False)\r\n self.close = gr.Button('Close', variant='secondary', elem_id=f'{tabname}_edit_style_close')\r\n\r\n self.selection.change(\r\n fn=select_style,\r\n inputs=[self.selection],\r\n outputs=[self.prompt, self.neg_prompt, self.delete, self.save],\r\n show_progress=False,\r\n )\r\n\r\n self.save.click(\r\n fn=save_style,\r\n inputs=[self.selection, self.prompt, self.neg_prompt],\r\n outputs=[self.delete],\r\n show_progress=False,\r\n ).then(refresh_styles, outputs=[self.dropdown, self.selection], show_progress=False)\r\n\r\n self.delete.click(\r\n fn=delete_style,\r\n _js='function(name){ if(name == \"\") return \"\"; return confirm(\"Delete style \" + name + \"?\") ? 
name : \"\"; }',\r\n inputs=[self.selection],\r\n outputs=[self.selection, self.prompt, self.neg_prompt],\r\n show_progress=False,\r\n ).then(refresh_styles, outputs=[self.dropdown, self.selection], show_progress=False)\r\n\r\n self.materialize.click(\r\n fn=materialize_styles,\r\n inputs=[main_ui_prompt, main_ui_negative_prompt, self.dropdown],\r\n outputs=[main_ui_prompt, main_ui_negative_prompt, self.dropdown],\r\n show_progress=False,\r\n ).then(fn=None, _js=\"function(){update_\"+tabname+\"_tokens(); closePopup();}\", show_progress=False)\r\n\r\n ui_common.setup_dialog(button_show=edit_button, dialog=styles_dialog, button_close=self.close)\r\n\r\n\r\n\r\n\r\n", "path": "modules/ui_prompt_styles.py"}], "after_files": [{"content": "import gradio as gr\r\n\r\nfrom modules import shared, ui_common, ui_components, styles\r\n\r\nstyles_edit_symbol = '\\U0001f58c\\uFE0F' # \ud83d\udd8c\ufe0f\r\nstyles_materialize_symbol = '\\U0001f4cb' # \ud83d\udccb\r\nstyles_copy_symbol = '\\U0001f4dd' # \ud83d\udcdd\r\n\r\n\r\ndef select_style(name):\r\n style = shared.prompt_styles.styles.get(name)\r\n existing = style is not None\r\n empty = not name\r\n\r\n prompt = style.prompt if style else gr.update()\r\n negative_prompt = style.negative_prompt if style else gr.update()\r\n\r\n return prompt, negative_prompt, gr.update(visible=existing), gr.update(visible=not empty)\r\n\r\n\r\ndef save_style(name, prompt, negative_prompt):\r\n if not name:\r\n return gr.update(visible=False)\r\n\r\n style = styles.PromptStyle(name, prompt, negative_prompt)\r\n shared.prompt_styles.styles[style.name] = style\r\n shared.prompt_styles.save_styles(shared.styles_filename)\r\n\r\n return gr.update(visible=True)\r\n\r\n\r\ndef delete_style(name):\r\n if name == \"\":\r\n return\r\n\r\n shared.prompt_styles.styles.pop(name, None)\r\n shared.prompt_styles.save_styles(shared.styles_filename)\r\n\r\n return '', '', ''\r\n\r\n\r\ndef materialize_styles(prompt, negative_prompt, styles):\r\n prompt = shared.prompt_styles.apply_styles_to_prompt(prompt, styles)\r\n negative_prompt = shared.prompt_styles.apply_negative_styles_to_prompt(negative_prompt, styles)\r\n\r\n return [gr.Textbox.update(value=prompt), gr.Textbox.update(value=negative_prompt), gr.Dropdown.update(value=[])]\r\n\r\n\r\ndef refresh_styles():\r\n return gr.update(choices=list(shared.prompt_styles.styles)), gr.update(choices=list(shared.prompt_styles.styles))\r\n\r\n\r\nclass UiPromptStyles:\r\n def __init__(self, tabname, main_ui_prompt, main_ui_negative_prompt):\r\n self.tabname = tabname\r\n\r\n with gr.Row(elem_id=f\"{tabname}_styles_row\"):\r\n self.dropdown = gr.Dropdown(label=\"Styles\", show_label=False, elem_id=f\"{tabname}_styles\", choices=list(shared.prompt_styles.styles), value=[], multiselect=True, tooltip=\"Styles\")\r\n edit_button = ui_components.ToolButton(value=styles_edit_symbol, elem_id=f\"{tabname}_styles_edit_button\", tooltip=\"Edit styles\")\r\n\r\n with gr.Box(elem_id=f\"{tabname}_styles_dialog\", elem_classes=\"popup-dialog\") as styles_dialog:\r\n with gr.Row():\r\n self.selection = gr.Dropdown(label=\"Styles\", elem_id=f\"{tabname}_styles_edit_select\", choices=list(shared.prompt_styles.styles), value=[], allow_custom_value=True, info=\"Styles allow you to add custom text to prompt. Use the {prompt} token in style text, and it will be replaced with user's prompt when applying style. 
Otherwise, style's text will be added to the end of the prompt.\")\r\n ui_common.create_refresh_button([self.dropdown, self.selection], shared.prompt_styles.reload, lambda: {\"choices\": list(shared.prompt_styles.styles)}, f\"refresh_{tabname}_styles\")\r\n self.materialize = ui_components.ToolButton(value=styles_materialize_symbol, elem_id=f\"{tabname}_style_apply\", tooltip=\"Apply all selected styles from the style selction dropdown in main UI to the prompt.\")\r\n self.copy = ui_components.ToolButton(value=styles_copy_symbol, elem_id=f\"{tabname}_style_copy\", tooltip=\"Copy main UI prompt to style.\")\r\n\r\n with gr.Row():\r\n self.prompt = gr.Textbox(label=\"Prompt\", show_label=True, elem_id=f\"{tabname}_edit_style_prompt\", lines=3)\r\n\r\n with gr.Row():\r\n self.neg_prompt = gr.Textbox(label=\"Negative prompt\", show_label=True, elem_id=f\"{tabname}_edit_style_neg_prompt\", lines=3)\r\n\r\n with gr.Row():\r\n self.save = gr.Button('Save', variant='primary', elem_id=f'{tabname}_edit_style_save', visible=False)\r\n self.delete = gr.Button('Delete', variant='primary', elem_id=f'{tabname}_edit_style_delete', visible=False)\r\n self.close = gr.Button('Close', variant='secondary', elem_id=f'{tabname}_edit_style_close')\r\n\r\n self.selection.change(\r\n fn=select_style,\r\n inputs=[self.selection],\r\n outputs=[self.prompt, self.neg_prompt, self.delete, self.save],\r\n show_progress=False,\r\n )\r\n\r\n self.save.click(\r\n fn=save_style,\r\n inputs=[self.selection, self.prompt, self.neg_prompt],\r\n outputs=[self.delete],\r\n show_progress=False,\r\n ).then(refresh_styles, outputs=[self.dropdown, self.selection], show_progress=False)\r\n\r\n self.delete.click(\r\n fn=delete_style,\r\n _js='function(name){ if(name == \"\") return \"\"; return confirm(\"Delete style \" + name + \"?\") ? name : \"\"; }',\r\n inputs=[self.selection],\r\n outputs=[self.selection, self.prompt, self.neg_prompt],\r\n show_progress=False,\r\n ).then(refresh_styles, outputs=[self.dropdown, self.selection], show_progress=False)\r\n\r\n self.materialize.click(\r\n fn=materialize_styles,\r\n inputs=[main_ui_prompt, main_ui_negative_prompt, self.dropdown],\r\n outputs=[main_ui_prompt, main_ui_negative_prompt, self.dropdown],\r\n show_progress=False,\r\n ).then(fn=None, _js=\"function(){update_\"+tabname+\"_tokens(); closePopup();}\", show_progress=False)\r\n\r\n self.copy.click(\r\n fn=lambda p, n: (p, n),\r\n inputs=[main_ui_prompt, main_ui_negative_prompt],\r\n outputs=[self.prompt, self.neg_prompt],\r\n show_progress=False,\r\n )\r\n\r\n ui_common.setup_dialog(button_show=edit_button, dialog=styles_dialog, button_close=self.close)\r\n\r\n\r\n\r\n\r\n", "path": "modules/ui_prompt_styles.py"}]} | 1,710 | 490 |
gh_patches_debug_11197 | rasdani/github-patches | git_diff | ESMCI__cime-2860 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
floating point mpiprocs when running ./case.setup with python3
I noticed that when running with python3, mpiprocs is set to be a float, i.e.,
$ python3 ./case.setup # will create the following in .case.run:
#PBS -l select=5:ncpus=36:mpiprocs=36.0:ompthreads=1
$ python2 ./case.setup # will create the following in .case.run:
#PBS -l select=5:ncpus=36:mpiprocs=36:ompthreads=1
NOTE: You'll need to rm .case.run in between ./case.setup executions to see the difference.

I haven't looked into this in depth, but I bet it has to do with the "true division" behavior that comes with Python 3.
--- END ISSUE ---
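
The "true division" hunch in the issue above is easy to reproduce: in Python 3, `/` always returns a float, so a task count that flows through it renders as 36.0 in the generated PBS line, while floor division `//` keeps it an integer. A small self-contained illustration (the values are assumed for demonstration only):

```python
# Python 2: 36 / 1 == 36 (int).  Python 3: 36 / 1 == 36.0 (float).
max_tasks_per_node = 36
max_thread_count = 1

tasks_per_node = max_tasks_per_node / max_thread_count   # true division -> 36.0 on Python 3
print("#PBS -l select=5:ncpus=36:mpiprocs={}:ompthreads=1".format(tasks_per_node))
# -> ...mpiprocs=36.0...

tasks_per_node = max_tasks_per_node // max_thread_count  # floor division -> 36 on Python 2 and 3
print("#PBS -l select=5:ncpus=36:mpiprocs={}:ompthreads=1".format(tasks_per_node))
# -> ...mpiprocs=36...
```

That formatting difference is exactly what shows up as mpiprocs=36.0 versus mpiprocs=36 in the generated .case.run.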
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/lib/CIME/XML/env_mach_pes.py`
Content:
```
1 """
2 Interface to the env_mach_pes.xml file. This class inherits from EntryID
3 """
4 from CIME.XML.standard_module_setup import *
5 from CIME.XML.env_base import EnvBase
6 import math
7
8 logger = logging.getLogger(__name__)
9
10 class EnvMachPes(EnvBase):
11
12 def __init__(self, case_root=None, infile="env_mach_pes.xml", components=None):
13 """
14 initialize an object interface to file env_mach_pes.xml in the case directory
15 """
16 self._components = components
17 schema = os.path.join(get_cime_root(), "config", "xml_schemas", "env_mach_pes.xsd")
18 EnvBase.__init__(self, case_root, infile, schema=schema)
19
20 def add_comment(self, comment):
21 if comment is not None:
22 node = self.make_child("comment", text=comment)
23 # make_child adds to the end of the file but we want it to follow the header
24 # so we need to remove it and add it in the correct position
25 self.remove_child(node)
26 self.add_child(node, position=1)
27
28 def get_value(self, vid, attribute=None, resolved=True, subgroup=None, max_mpitasks_per_node=None): # pylint: disable=arguments-differ
29 # Special variable NINST_MAX is used to determine the number of
30 # drivers in multi-driver mode.
31 if vid == "NINST_MAX":
32 value = 1
33 for comp in self._components:
34 if comp != "CPL":
35 value = max(value, self.get_value("NINST_{}".format(comp)))
36 return value
37
38 value = EnvBase.get_value(self, vid, attribute, resolved, subgroup)
39
40 if "NTASKS" in vid or "ROOTPE" in vid:
41 if max_mpitasks_per_node is None:
42 max_mpitasks_per_node = self.get_value("MAX_MPITASKS_PER_NODE")
43 if value is not None and value < 0:
44 value = -1*value*max_mpitasks_per_node
45
46 return value
47
48 def set_value(self, vid, value, subgroup=None, ignore_type=False):
49 """
50 Set the value of an entry-id field to value
51 Returns the value or None if not found
52 subgroup is ignored in the general routine and applied in specific methods
53 """
54 if vid == "MULTI_DRIVER" and value:
55 ninst_max = self.get_value("NINST_MAX")
56 for comp in self._components:
57 if comp == "CPL":
58 continue
59 ninst = self.get_value("NINST_{}".format(comp))
60 expect(ninst == ninst_max,
61 "All components must have the same NINST value in multi_driver mode. NINST_{}={} shoud be {}".format(comp,ninst,ninst_max))
62 if "NTASKS" in vid or "NTHRDS" in vid:
63 expect(value != 0, "Cannot set NTASKS or NTHRDS to 0")
64
65
66 return EnvBase.set_value(self, vid, value, subgroup=subgroup, ignore_type=ignore_type)
67
68
69 def get_max_thread_count(self, comp_classes):
70 ''' Find the maximum number of openmp threads for any component in the case '''
71 max_threads = 1
72 for comp in comp_classes:
73 threads = self.get_value("NTHRDS",attribute={"compclass":comp})
74 expect(threads is not None, "Error no thread count found for component class {}".format(comp))
75 if threads > max_threads:
76 max_threads = threads
77 return max_threads
78
79 def get_total_tasks(self, comp_classes):
80 total_tasks = 0
81 maxinst = 1
82 for comp in comp_classes:
83 ntasks = self.get_value("NTASKS", attribute={"compclass":comp})
84 rootpe = self.get_value("ROOTPE", attribute={"compclass":comp})
85 pstrid = self.get_value("PSTRID", attribute={"compclass":comp})
86 if comp != "CPL":
87 ninst = self.get_value("NINST", attribute={"compclass":comp})
88 maxinst = max(maxinst, ninst)
89 tt = rootpe + (ntasks - 1) * pstrid + 1
90 total_tasks = max(tt, total_tasks)
91 if self.get_value("MULTI_DRIVER"):
92 total_tasks *= maxinst
93 return total_tasks
94
95 def get_tasks_per_node(self, total_tasks, max_thread_count):
96 expect(total_tasks > 0,"totaltasks > 0 expected, totaltasks = {}".format(total_tasks))
97 tasks_per_node = min(self.get_value("MAX_TASKS_PER_NODE")/ max_thread_count,
98 self.get_value("MAX_MPITASKS_PER_NODE"), total_tasks)
99 return tasks_per_node if tasks_per_node > 0 else 1
100
101 def get_total_nodes(self, total_tasks, max_thread_count):
102 """
103 Return (num_active_nodes, num_spare_nodes)
104 """
105 tasks_per_node = self.get_tasks_per_node(total_tasks, max_thread_count)
106 num_nodes = int(math.ceil(float(total_tasks) / tasks_per_node))
107 return num_nodes, self.get_spare_nodes(num_nodes)
108
109 def get_spare_nodes(self, num_nodes):
110 force_spare_nodes = self.get_value("FORCE_SPARE_NODES")
111 if force_spare_nodes != -999:
112 return force_spare_nodes
113
114 if self.get_value("ALLOCATE_SPARE_NODES"):
115 ten_pct = int(math.ceil(float(num_nodes) * 0.1))
116 if ten_pct < 1:
117 return 1 # Always provide at lease one spare node
118 elif ten_pct > 10:
119 return 10 # Never provide more than 10 spare nodes
120 else:
121 return ten_pct
122 else:
123 return 0
124
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scripts/lib/CIME/XML/env_mach_pes.py b/scripts/lib/CIME/XML/env_mach_pes.py
--- a/scripts/lib/CIME/XML/env_mach_pes.py
+++ b/scripts/lib/CIME/XML/env_mach_pes.py
@@ -94,7 +94,7 @@
def get_tasks_per_node(self, total_tasks, max_thread_count):
expect(total_tasks > 0,"totaltasks > 0 expected, totaltasks = {}".format(total_tasks))
- tasks_per_node = min(self.get_value("MAX_TASKS_PER_NODE")/ max_thread_count,
+ tasks_per_node = min(self.get_value("MAX_TASKS_PER_NODE")// max_thread_count,
self.get_value("MAX_MPITASKS_PER_NODE"), total_tasks)
return tasks_per_node if tasks_per_node > 0 else 1
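
With `//` in place, the computation stays an integer whenever its inputs are integers. A rough standalone re-creation of the fixed arithmetic follows; the machine limits below are assumed example values, not taken from a real configuration:

```python
# Sketch of the corrected computation, with example machine limits as defaults.
def get_tasks_per_node(total_tasks, max_thread_count,
                       max_tasks_per_node=36, max_mpitasks_per_node=36):
    assert total_tasks > 0, "totaltasks > 0 expected"
    # Floor division keeps the result an int, so downstream templating
    # renders "mpiprocs=36" rather than "mpiprocs=36.0".
    tasks_per_node = min(max_tasks_per_node // max_thread_count,
                         max_mpitasks_per_node, total_tasks)
    return tasks_per_node if tasks_per_node > 0 else 1

print(get_tasks_per_node(total_tasks=180, max_thread_count=1))  # -> 36
```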
| {"golden_diff": "diff --git a/scripts/lib/CIME/XML/env_mach_pes.py b/scripts/lib/CIME/XML/env_mach_pes.py\n--- a/scripts/lib/CIME/XML/env_mach_pes.py\n+++ b/scripts/lib/CIME/XML/env_mach_pes.py\n@@ -94,7 +94,7 @@\n \n def get_tasks_per_node(self, total_tasks, max_thread_count):\n expect(total_tasks > 0,\"totaltasks > 0 expected, totaltasks = {}\".format(total_tasks))\n- tasks_per_node = min(self.get_value(\"MAX_TASKS_PER_NODE\")/ max_thread_count,\n+ tasks_per_node = min(self.get_value(\"MAX_TASKS_PER_NODE\")// max_thread_count,\n self.get_value(\"MAX_MPITASKS_PER_NODE\"), total_tasks)\n return tasks_per_node if tasks_per_node > 0 else 1\n", "issue": "floating point mpiprocs when running ./case.setup with python3\nI noticed that when running with python3, mpiprocs is set to be a float, i.e.,\r\n\r\n$ python3 ./case.setup # will create the following in .case.run:\r\n#PBS -l select=5:ncpus=36:mpiprocs=36.0:ompthreads=1\r\n\r\n$ python2 ./case.setup # will create the following .case.run:\r\n#PBS -l select=5:ncpus=36:mpiprocs=36:ompthreads=1\r\n\r\nNOTE: You'll need to rm .case.run, in between ./case.setup executions to see the difference.\r\n\r\nI haven't looked this into depth, but I bet it has to do with \"true division\" that comes with python3.\n", "before_files": [{"content": "\"\"\"\nInterface to the env_mach_pes.xml file. This class inherits from EntryID\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.XML.env_base import EnvBase\nimport math\n\nlogger = logging.getLogger(__name__)\n\nclass EnvMachPes(EnvBase):\n\n def __init__(self, case_root=None, infile=\"env_mach_pes.xml\", components=None):\n \"\"\"\n initialize an object interface to file env_mach_pes.xml in the case directory\n \"\"\"\n self._components = components\n schema = os.path.join(get_cime_root(), \"config\", \"xml_schemas\", \"env_mach_pes.xsd\")\n EnvBase.__init__(self, case_root, infile, schema=schema)\n\n def add_comment(self, comment):\n if comment is not None:\n node = self.make_child(\"comment\", text=comment)\n # make_child adds to the end of the file but we want it to follow the header\n # so we need to remove it and add it in the correct position\n self.remove_child(node)\n self.add_child(node, position=1)\n\n def get_value(self, vid, attribute=None, resolved=True, subgroup=None, max_mpitasks_per_node=None): # pylint: disable=arguments-differ\n # Special variable NINST_MAX is used to determine the number of\n # drivers in multi-driver mode.\n if vid == \"NINST_MAX\":\n value = 1\n for comp in self._components:\n if comp != \"CPL\":\n value = max(value, self.get_value(\"NINST_{}\".format(comp)))\n return value\n\n value = EnvBase.get_value(self, vid, attribute, resolved, subgroup)\n\n if \"NTASKS\" in vid or \"ROOTPE\" in vid:\n if max_mpitasks_per_node is None:\n max_mpitasks_per_node = self.get_value(\"MAX_MPITASKS_PER_NODE\")\n if value is not None and value < 0:\n value = -1*value*max_mpitasks_per_node\n\n return value\n\n def set_value(self, vid, value, subgroup=None, ignore_type=False):\n \"\"\"\n Set the value of an entry-id field to value\n Returns the value or None if not found\n subgroup is ignored in the general routine and applied in specific methods\n \"\"\"\n if vid == \"MULTI_DRIVER\" and value:\n ninst_max = self.get_value(\"NINST_MAX\")\n for comp in self._components:\n if comp == \"CPL\":\n continue\n ninst = self.get_value(\"NINST_{}\".format(comp))\n expect(ninst == ninst_max,\n \"All components must have the same NINST value in multi_driver mode. 
NINST_{}={} shoud be {}\".format(comp,ninst,ninst_max))\n if \"NTASKS\" in vid or \"NTHRDS\" in vid:\n expect(value != 0, \"Cannot set NTASKS or NTHRDS to 0\")\n\n\n return EnvBase.set_value(self, vid, value, subgroup=subgroup, ignore_type=ignore_type)\n\n\n def get_max_thread_count(self, comp_classes):\n ''' Find the maximum number of openmp threads for any component in the case '''\n max_threads = 1\n for comp in comp_classes:\n threads = self.get_value(\"NTHRDS\",attribute={\"compclass\":comp})\n expect(threads is not None, \"Error no thread count found for component class {}\".format(comp))\n if threads > max_threads:\n max_threads = threads\n return max_threads\n\n def get_total_tasks(self, comp_classes):\n total_tasks = 0\n maxinst = 1\n for comp in comp_classes:\n ntasks = self.get_value(\"NTASKS\", attribute={\"compclass\":comp})\n rootpe = self.get_value(\"ROOTPE\", attribute={\"compclass\":comp})\n pstrid = self.get_value(\"PSTRID\", attribute={\"compclass\":comp})\n if comp != \"CPL\":\n ninst = self.get_value(\"NINST\", attribute={\"compclass\":comp})\n maxinst = max(maxinst, ninst)\n tt = rootpe + (ntasks - 1) * pstrid + 1\n total_tasks = max(tt, total_tasks)\n if self.get_value(\"MULTI_DRIVER\"):\n total_tasks *= maxinst\n return total_tasks\n\n def get_tasks_per_node(self, total_tasks, max_thread_count):\n expect(total_tasks > 0,\"totaltasks > 0 expected, totaltasks = {}\".format(total_tasks))\n tasks_per_node = min(self.get_value(\"MAX_TASKS_PER_NODE\")/ max_thread_count,\n self.get_value(\"MAX_MPITASKS_PER_NODE\"), total_tasks)\n return tasks_per_node if tasks_per_node > 0 else 1\n\n def get_total_nodes(self, total_tasks, max_thread_count):\n \"\"\"\n Return (num_active_nodes, num_spare_nodes)\n \"\"\"\n tasks_per_node = self.get_tasks_per_node(total_tasks, max_thread_count)\n num_nodes = int(math.ceil(float(total_tasks) / tasks_per_node))\n return num_nodes, self.get_spare_nodes(num_nodes)\n\n def get_spare_nodes(self, num_nodes):\n force_spare_nodes = self.get_value(\"FORCE_SPARE_NODES\")\n if force_spare_nodes != -999:\n return force_spare_nodes\n\n if self.get_value(\"ALLOCATE_SPARE_NODES\"):\n ten_pct = int(math.ceil(float(num_nodes) * 0.1))\n if ten_pct < 1:\n return 1 # Always provide at lease one spare node\n elif ten_pct > 10:\n return 10 # Never provide more than 10 spare nodes\n else:\n return ten_pct\n else:\n return 0\n", "path": "scripts/lib/CIME/XML/env_mach_pes.py"}], "after_files": [{"content": "\"\"\"\nInterface to the env_mach_pes.xml file. 
This class inherits from EntryID\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.XML.env_base import EnvBase\nimport math\n\nlogger = logging.getLogger(__name__)\n\nclass EnvMachPes(EnvBase):\n\n def __init__(self, case_root=None, infile=\"env_mach_pes.xml\", components=None):\n \"\"\"\n initialize an object interface to file env_mach_pes.xml in the case directory\n \"\"\"\n self._components = components\n schema = os.path.join(get_cime_root(), \"config\", \"xml_schemas\", \"env_mach_pes.xsd\")\n EnvBase.__init__(self, case_root, infile, schema=schema)\n\n def add_comment(self, comment):\n if comment is not None:\n node = self.make_child(\"comment\", text=comment)\n # make_child adds to the end of the file but we want it to follow the header\n # so we need to remove it and add it in the correct position\n self.remove_child(node)\n self.add_child(node, position=1)\n\n def get_value(self, vid, attribute=None, resolved=True, subgroup=None, max_mpitasks_per_node=None): # pylint: disable=arguments-differ\n # Special variable NINST_MAX is used to determine the number of\n # drivers in multi-driver mode.\n if vid == \"NINST_MAX\":\n value = 1\n for comp in self._components:\n if comp != \"CPL\":\n value = max(value, self.get_value(\"NINST_{}\".format(comp)))\n return value\n\n value = EnvBase.get_value(self, vid, attribute, resolved, subgroup)\n\n if \"NTASKS\" in vid or \"ROOTPE\" in vid:\n if max_mpitasks_per_node is None:\n max_mpitasks_per_node = self.get_value(\"MAX_MPITASKS_PER_NODE\")\n if value is not None and value < 0:\n value = -1*value*max_mpitasks_per_node\n\n return value\n\n def set_value(self, vid, value, subgroup=None, ignore_type=False):\n \"\"\"\n Set the value of an entry-id field to value\n Returns the value or None if not found\n subgroup is ignored in the general routine and applied in specific methods\n \"\"\"\n if vid == \"MULTI_DRIVER\" and value:\n ninst_max = self.get_value(\"NINST_MAX\")\n for comp in self._components:\n if comp == \"CPL\":\n continue\n ninst = self.get_value(\"NINST_{}\".format(comp))\n expect(ninst == ninst_max,\n \"All components must have the same NINST value in multi_driver mode. 
NINST_{}={} shoud be {}\".format(comp,ninst,ninst_max))\n if \"NTASKS\" in vid or \"NTHRDS\" in vid:\n expect(value != 0, \"Cannot set NTASKS or NTHRDS to 0\")\n\n\n return EnvBase.set_value(self, vid, value, subgroup=subgroup, ignore_type=ignore_type)\n\n\n def get_max_thread_count(self, comp_classes):\n ''' Find the maximum number of openmp threads for any component in the case '''\n max_threads = 1\n for comp in comp_classes:\n threads = self.get_value(\"NTHRDS\",attribute={\"compclass\":comp})\n expect(threads is not None, \"Error no thread count found for component class {}\".format(comp))\n if threads > max_threads:\n max_threads = threads\n return max_threads\n\n def get_total_tasks(self, comp_classes):\n total_tasks = 0\n maxinst = 1\n for comp in comp_classes:\n ntasks = self.get_value(\"NTASKS\", attribute={\"compclass\":comp})\n rootpe = self.get_value(\"ROOTPE\", attribute={\"compclass\":comp})\n pstrid = self.get_value(\"PSTRID\", attribute={\"compclass\":comp})\n if comp != \"CPL\":\n ninst = self.get_value(\"NINST\", attribute={\"compclass\":comp})\n maxinst = max(maxinst, ninst)\n tt = rootpe + (ntasks - 1) * pstrid + 1\n total_tasks = max(tt, total_tasks)\n if self.get_value(\"MULTI_DRIVER\"):\n total_tasks *= maxinst\n return total_tasks\n\n def get_tasks_per_node(self, total_tasks, max_thread_count):\n expect(total_tasks > 0,\"totaltasks > 0 expected, totaltasks = {}\".format(total_tasks))\n tasks_per_node = min(self.get_value(\"MAX_TASKS_PER_NODE\")// max_thread_count,\n self.get_value(\"MAX_MPITASKS_PER_NODE\"), total_tasks)\n return tasks_per_node if tasks_per_node > 0 else 1\n\n def get_total_nodes(self, total_tasks, max_thread_count):\n \"\"\"\n Return (num_active_nodes, num_spare_nodes)\n \"\"\"\n tasks_per_node = self.get_tasks_per_node(total_tasks, max_thread_count)\n num_nodes = int(math.ceil(float(total_tasks) / tasks_per_node))\n return num_nodes, self.get_spare_nodes(num_nodes)\n\n def get_spare_nodes(self, num_nodes):\n force_spare_nodes = self.get_value(\"FORCE_SPARE_NODES\")\n if force_spare_nodes != -999:\n return force_spare_nodes\n\n if self.get_value(\"ALLOCATE_SPARE_NODES\"):\n ten_pct = int(math.ceil(float(num_nodes) * 0.1))\n if ten_pct < 1:\n return 1 # Always provide at lease one spare node\n elif ten_pct > 10:\n return 10 # Never provide more than 10 spare nodes\n else:\n return ten_pct\n else:\n return 0\n", "path": "scripts/lib/CIME/XML/env_mach_pes.py"}]} | 1,955 | 179 |
gh_patches_debug_4066 | rasdani/github-patches | git_diff | sublimelsp__LSP-1243 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Server is being shut down when the server sends an empty stderr line
When the server emits stderr output that is an empty string (or becomes an empty string after `rstrip`), LSP closes the transports and thus shuts down the server.
Issue found when analyzing https://github.com/sublimelsp/LSP-angular/issues/1
--- END ISSUE ---
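
One background detail that helps when reading the stderr handling below: `readline()` on a pipe returns empty bytes only at EOF, whereas a genuinely blank stderr line arrives as `b"\n"`, and after `decode().rstrip()` both collapse to `''`, so any emptiness check has to look at the raw bytes. A small standalone illustration (not the plugin's code):

```python
import io

# readline() on a pipe returns b"" only at EOF; a blank stderr line arrives as b"\n".
stderr = io.BytesIO(b"warning: something happened\n\nmore output\n")

while True:
    raw = stderr.readline()
    if not raw:                 # b"" means true EOF, stop reading
        print("EOF reached")
        break
    message = raw.decode("utf-8", "replace").rstrip()
    # After rstrip() a blank line is also "", so an emptiness check here
    # cannot distinguish it from EOF; that decision must be made on `raw`.
    print("got line: {!r}".format(message))
```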
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/core/transports.py`
Content:
```
1 from .logging import exception_log, debug
2 from .types import ClientConfig
3 from .typing import Dict, Any, Optional, IO, Protocol
4 from abc import ABCMeta, abstractmethod
5 from contextlib import closing
6 from queue import Queue
7 import json
8 import os
9 import shutil
10 import socket
11 import sublime
12 import subprocess
13 import threading
14 import time
15 import weakref
16
17
18 TCP_CONNECT_TIMEOUT = 5
19
20
21 class Transport(metaclass=ABCMeta):
22
23 @abstractmethod
24 def send(self, payload: Dict[str, Any]) -> None:
25 pass
26
27 @abstractmethod
28 def close(self) -> None:
29 pass
30
31
32 class TransportCallbacks(Protocol):
33
34 def on_transport_close(self, exit_code: int, exception: Optional[Exception]) -> None:
35 ...
36
37 def on_payload(self, payload: Dict[str, Any]) -> None:
38 ...
39
40 def on_stderr_message(self, message: str) -> None:
41 ...
42
43
44 class JsonRpcTransport(Transport):
45
46 def __init__(self, name: str, process: subprocess.Popen, socket: Optional[socket.socket], reader: IO[bytes],
47 writer: IO[bytes], stderr: Optional[IO[bytes]], callback_object: TransportCallbacks) -> None:
48 self._closed = False
49 self._process = process
50 self._socket = socket
51 self._reader = reader
52 self._writer = writer
53 self._stderr = stderr
54 self._reader_thread = threading.Thread(target=self._read_loop, name='{}-reader'.format(name))
55 self._writer_thread = threading.Thread(target=self._write_loop, name='{}-writer'.format(name))
56 self._stderr_thread = threading.Thread(target=self._stderr_loop, name='{}-stderr'.format(name))
57 self._callback_object = weakref.ref(callback_object)
58 self._send_queue = Queue(0) # type: Queue[Optional[Dict[str, Any]]]
59 self._reader_thread.start()
60 self._writer_thread.start()
61 self._stderr_thread.start()
62
63 def send(self, payload: Dict[str, Any]) -> None:
64 self._send_queue.put_nowait(payload)
65
66 def close(self) -> None:
67 if not self._closed:
68 self._send_queue.put_nowait(None)
69 if self._socket:
70 self._socket.close()
71 self._closed = True
72
73 def _join_thread(self, t: threading.Thread) -> None:
74 if t.ident == threading.current_thread().ident:
75 return
76 try:
77 t.join(2)
78 except TimeoutError as ex:
79 exception_log("failed to join {} thread".format(t.name), ex)
80
81 def __del__(self) -> None:
82 self.close()
83 self._join_thread(self._writer_thread)
84 self._join_thread(self._reader_thread)
85 self._join_thread(self._stderr_thread)
86
87 def _read_loop(self) -> None:
88 try:
89 while self._reader:
90 line = self._reader.readline()
91 if not line:
92 break
93 try:
94 num_bytes = _content_length(line)
95 except ValueError:
96 continue
97 if num_bytes is None:
98 continue
99 while line and line.strip():
100 line = self._reader.readline()
101 if not line:
102 continue
103 body = self._reader.read(num_bytes)
104 callback_object = self._callback_object()
105 if callback_object:
106 try:
107 callback_object.on_payload(_decode(body))
108 except Exception as ex:
109 exception_log("Error handling payload", ex)
110 else:
111 break
112 except (AttributeError, BrokenPipeError):
113 pass
114 except Exception as ex:
115 exception_log("Unexpected exception", ex)
116 self._send_queue.put_nowait(None)
117
118 def _end(self, exception: Optional[Exception]) -> None:
119 exit_code = 0
120 if not exception:
121 try:
122 # Allow the process to stop itself.
123 exit_code = self._process.wait(1)
124 except (AttributeError, ProcessLookupError, subprocess.TimeoutExpired):
125 pass
126 if self._process:
127 try:
128 # The process didn't stop itself. Terminate!
129 self._process.kill()
130 # still wait for the process to die, or zombie processes might be the result
131 # Ignore the exit code in this case, it's going to be something non-zero because we sent SIGKILL.
132 self._process.wait()
133 except (AttributeError, ProcessLookupError):
134 pass
135 except Exception as ex:
136 exception = ex # TODO: Old captured exception is overwritten
137 callback_object = self._callback_object()
138 if callback_object:
139 callback_object.on_transport_close(exit_code, exception)
140
141 def _write_loop(self) -> None:
142 exception = None # type: Optional[Exception]
143 try:
144 while self._writer:
145 d = self._send_queue.get()
146 if d is None:
147 break
148 body = _encode(d)
149 self._writer.writelines(("Content-Length: {}\r\n\r\n".format(len(body)).encode('ascii'), body))
150 self._writer.flush()
151 except (BrokenPipeError, AttributeError):
152 pass
153 except Exception as ex:
154 exception = ex
155 self._end(exception)
156
157 def _stderr_loop(self) -> None:
158 try:
159 while self._stderr:
160 if self._closed:
161 break
162 message = self._stderr.readline().decode('utf-8', 'replace').rstrip()
163 callback_object = self._callback_object()
164 if callback_object:
165 callback_object.on_stderr_message(message)
166 else:
167 break
168 except (BrokenPipeError, AttributeError):
169 pass
170 except Exception as ex:
171 exception_log('unexpected exception type in stderr loop', ex)
172 self._send_queue.put_nowait(None)
173
174
175 def create_transport(config: ClientConfig, cwd: Optional[str], window: sublime.Window,
176 callback_object: TransportCallbacks, variables: Dict[str, str]) -> JsonRpcTransport:
177 tcp_port = None # type: Optional[int]
178 if config.tcp_port is not None:
179 tcp_port = _find_free_port() if config.tcp_port == 0 else config.tcp_port
180 if tcp_port is not None:
181 variables["port"] = str(tcp_port)
182 args = sublime.expand_variables(config.binary_args, variables)
183 args = [os.path.expanduser(arg) for arg in args]
184 if tcp_port is not None:
185 # DEPRECATED -- replace {port} with $port or ${port} in your client config
186 args = [a.replace('{port}', str(tcp_port)) for a in args]
187 env = os.environ.copy()
188 for var, value in config.env.items():
189 env[var] = sublime.expand_variables(value, variables)
190 if tcp_port is not None:
191 stdout = subprocess.DEVNULL
192 stdin = subprocess.DEVNULL
193 else:
194 stdout = subprocess.PIPE
195 stdin = subprocess.PIPE
196 if sublime.platform() == "windows":
197 startupinfo = subprocess.STARTUPINFO() # type: ignore
198 startupinfo.dwFlags |= subprocess.SW_HIDE | subprocess.STARTF_USESHOWWINDOW # type: ignore
199 executable_arg = args[0]
200 fname, ext = os.path.splitext(executable_arg)
201 if len(ext) < 1:
202 path_to_executable = shutil.which(executable_arg)
203 # what extensions should we append so CreateProcess can find it?
204 # node has .cmd
205 # dart has .bat
206 # python has .exe wrappers - not needed
207 for extension in ['.cmd', '.bat']:
208 if path_to_executable and path_to_executable.lower().endswith(extension):
209 args[0] = executable_arg + extension
210 break
211 else:
212 startupinfo = None
213 debug("starting {} in {}".format(args, cwd if cwd else os.getcwd()))
214 process = subprocess.Popen(
215 args=args,
216 stdin=stdin,
217 stdout=stdout,
218 stderr=subprocess.PIPE,
219 startupinfo=startupinfo,
220 env=env,
221 cwd=cwd)
222 _subprocesses.add(process)
223 sock = None # type: Optional[socket.socket]
224 if tcp_port:
225 sock = _connect_tcp(tcp_port)
226 if sock is None:
227 raise RuntimeError("Failed to connect on port {}".format(config.tcp_port))
228 reader = sock.makefile('rwb') # type: IO[bytes]
229 writer = reader
230 else:
231 reader = process.stdout # type: ignore
232 writer = process.stdin # type: ignore
233 return JsonRpcTransport(config.name, process, sock, reader, writer, process.stderr, callback_object)
234
235
236 _subprocesses = weakref.WeakSet() # type: weakref.WeakSet[subprocess.Popen]
237
238
239 def kill_all_subprocesses() -> None:
240 global _subprocesses
241 subprocesses = list(_subprocesses)
242 for p in subprocesses:
243 try:
244 p.kill()
245 except Exception:
246 pass
247 for p in subprocesses:
248 try:
249 p.wait()
250 except Exception:
251 pass
252
253
254 def _connect_tcp(port: int) -> Optional[socket.socket]:
255 start_time = time.time()
256 while time.time() - start_time < TCP_CONNECT_TIMEOUT:
257 try:
258 return socket.create_connection(('localhost', port))
259 except ConnectionRefusedError:
260 pass
261 return None
262
263
264 def _find_free_port() -> int:
265 with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
266 s.bind(('', 0))
267 s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
268 return s.getsockname()[1]
269
270
271 def _encode(d: Dict[str, Any]) -> bytes:
272 return json.dumps(d, sort_keys=False, check_circular=False, separators=(',', ':')).encode('utf-8')
273
274
275 def _decode(message: bytes) -> Dict[str, Any]:
276 return json.loads(message.decode('utf-8'))
277
278
279 def _content_length(line: bytes) -> Optional[int]:
280 if line.startswith(b'Content-Length: '):
281 _, value = line.split(b'Content-Length: ')
282 value = value.strip()
283 try:
284 return int(value)
285 except ValueError as ex:
286 raise ValueError("Invalid Content-Length header: {}".format(value.decode('ascii'))) from ex
287 return None
288
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/plugin/core/transports.py b/plugin/core/transports.py
--- a/plugin/core/transports.py
+++ b/plugin/core/transports.py
@@ -137,6 +137,7 @@
callback_object = self._callback_object()
if callback_object:
callback_object.on_transport_close(exit_code, exception)
+ self.close()
def _write_loop(self) -> None:
exception = None # type: Optional[Exception]
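
The added `self.close()` call relies on the `close()` shown in the file listing above, which enqueues a `None` sentinel for the writer thread and sets the `_closed` flag consulted by the stderr loop. A stripped-down sketch of that sentinel-plus-flag shutdown pattern, using generic names rather than the plugin's classes:

```python
import queue
import threading

class TransportLike:
    """Generic sentinel + flag shutdown, loosely modelled on the close() shown above."""

    def __init__(self):
        self._closed = False
        self._send_queue = queue.Queue()
        self._writer_thread = threading.Thread(target=self._write_loop)
        self._writer_thread.start()

    def send(self, payload):
        self._send_queue.put_nowait(payload)

    def close(self):
        if not self._closed:
            self._closed = True
            # The None sentinel wakes the writer loop so it can exit promptly.
            self._send_queue.put_nowait(None)

    def _write_loop(self):
        while True:
            item = self._send_queue.get()
            if item is None:
                break
            print("would send:", item)

t = TransportLike()
t.send({"jsonrpc": "2.0", "method": "initialized"})
t.close()               # always reached, mirroring the added self.close() call in _end()
t._writer_thread.join()
```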
| {"golden_diff": "diff --git a/plugin/core/transports.py b/plugin/core/transports.py\n--- a/plugin/core/transports.py\n+++ b/plugin/core/transports.py\n@@ -137,6 +137,7 @@\n callback_object = self._callback_object()\n if callback_object:\n callback_object.on_transport_close(exit_code, exception)\n+ self.close()\n \n def _write_loop(self) -> None:\n exception = None # type: Optional[Exception]\n", "issue": "Server is being shutdown on server sending empty stderr line\nWhen server triggers stderr output that is an empty string (or becomes an empty string after `rstrip` then LSP closes the transports and thus the server.\r\n\r\nIssue found when analyzing https://github.com/sublimelsp/LSP-angular/issues/1\n", "before_files": [{"content": "from .logging import exception_log, debug\nfrom .types import ClientConfig\nfrom .typing import Dict, Any, Optional, IO, Protocol\nfrom abc import ABCMeta, abstractmethod\nfrom contextlib import closing\nfrom queue import Queue\nimport json\nimport os\nimport shutil\nimport socket\nimport sublime\nimport subprocess\nimport threading\nimport time\nimport weakref\n\n\nTCP_CONNECT_TIMEOUT = 5\n\n\nclass Transport(metaclass=ABCMeta):\n\n @abstractmethod\n def send(self, payload: Dict[str, Any]) -> None:\n pass\n\n @abstractmethod\n def close(self) -> None:\n pass\n\n\nclass TransportCallbacks(Protocol):\n\n def on_transport_close(self, exit_code: int, exception: Optional[Exception]) -> None:\n ...\n\n def on_payload(self, payload: Dict[str, Any]) -> None:\n ...\n\n def on_stderr_message(self, message: str) -> None:\n ...\n\n\nclass JsonRpcTransport(Transport):\n\n def __init__(self, name: str, process: subprocess.Popen, socket: Optional[socket.socket], reader: IO[bytes],\n writer: IO[bytes], stderr: Optional[IO[bytes]], callback_object: TransportCallbacks) -> None:\n self._closed = False\n self._process = process\n self._socket = socket\n self._reader = reader\n self._writer = writer\n self._stderr = stderr\n self._reader_thread = threading.Thread(target=self._read_loop, name='{}-reader'.format(name))\n self._writer_thread = threading.Thread(target=self._write_loop, name='{}-writer'.format(name))\n self._stderr_thread = threading.Thread(target=self._stderr_loop, name='{}-stderr'.format(name))\n self._callback_object = weakref.ref(callback_object)\n self._send_queue = Queue(0) # type: Queue[Optional[Dict[str, Any]]]\n self._reader_thread.start()\n self._writer_thread.start()\n self._stderr_thread.start()\n\n def send(self, payload: Dict[str, Any]) -> None:\n self._send_queue.put_nowait(payload)\n\n def close(self) -> None:\n if not self._closed:\n self._send_queue.put_nowait(None)\n if self._socket:\n self._socket.close()\n self._closed = True\n\n def _join_thread(self, t: threading.Thread) -> None:\n if t.ident == threading.current_thread().ident:\n return\n try:\n t.join(2)\n except TimeoutError as ex:\n exception_log(\"failed to join {} thread\".format(t.name), ex)\n\n def __del__(self) -> None:\n self.close()\n self._join_thread(self._writer_thread)\n self._join_thread(self._reader_thread)\n self._join_thread(self._stderr_thread)\n\n def _read_loop(self) -> None:\n try:\n while self._reader:\n line = self._reader.readline()\n if not line:\n break\n try:\n num_bytes = _content_length(line)\n except ValueError:\n continue\n if num_bytes is None:\n continue\n while line and line.strip():\n line = self._reader.readline()\n if not line:\n continue\n body = self._reader.read(num_bytes)\n callback_object = self._callback_object()\n if callback_object:\n try:\n 
callback_object.on_payload(_decode(body))\n except Exception as ex:\n exception_log(\"Error handling payload\", ex)\n else:\n break\n except (AttributeError, BrokenPipeError):\n pass\n except Exception as ex:\n exception_log(\"Unexpected exception\", ex)\n self._send_queue.put_nowait(None)\n\n def _end(self, exception: Optional[Exception]) -> None:\n exit_code = 0\n if not exception:\n try:\n # Allow the process to stop itself.\n exit_code = self._process.wait(1)\n except (AttributeError, ProcessLookupError, subprocess.TimeoutExpired):\n pass\n if self._process:\n try:\n # The process didn't stop itself. Terminate!\n self._process.kill()\n # still wait for the process to die, or zombie processes might be the result\n # Ignore the exit code in this case, it's going to be something non-zero because we sent SIGKILL.\n self._process.wait()\n except (AttributeError, ProcessLookupError):\n pass\n except Exception as ex:\n exception = ex # TODO: Old captured exception is overwritten\n callback_object = self._callback_object()\n if callback_object:\n callback_object.on_transport_close(exit_code, exception)\n\n def _write_loop(self) -> None:\n exception = None # type: Optional[Exception]\n try:\n while self._writer:\n d = self._send_queue.get()\n if d is None:\n break\n body = _encode(d)\n self._writer.writelines((\"Content-Length: {}\\r\\n\\r\\n\".format(len(body)).encode('ascii'), body))\n self._writer.flush()\n except (BrokenPipeError, AttributeError):\n pass\n except Exception as ex:\n exception = ex\n self._end(exception)\n\n def _stderr_loop(self) -> None:\n try:\n while self._stderr:\n if self._closed:\n break\n message = self._stderr.readline().decode('utf-8', 'replace').rstrip()\n callback_object = self._callback_object()\n if callback_object:\n callback_object.on_stderr_message(message)\n else:\n break\n except (BrokenPipeError, AttributeError):\n pass\n except Exception as ex:\n exception_log('unexpected exception type in stderr loop', ex)\n self._send_queue.put_nowait(None)\n\n\ndef create_transport(config: ClientConfig, cwd: Optional[str], window: sublime.Window,\n callback_object: TransportCallbacks, variables: Dict[str, str]) -> JsonRpcTransport:\n tcp_port = None # type: Optional[int]\n if config.tcp_port is not None:\n tcp_port = _find_free_port() if config.tcp_port == 0 else config.tcp_port\n if tcp_port is not None:\n variables[\"port\"] = str(tcp_port)\n args = sublime.expand_variables(config.binary_args, variables)\n args = [os.path.expanduser(arg) for arg in args]\n if tcp_port is not None:\n # DEPRECATED -- replace {port} with $port or ${port} in your client config\n args = [a.replace('{port}', str(tcp_port)) for a in args]\n env = os.environ.copy()\n for var, value in config.env.items():\n env[var] = sublime.expand_variables(value, variables)\n if tcp_port is not None:\n stdout = subprocess.DEVNULL\n stdin = subprocess.DEVNULL\n else:\n stdout = subprocess.PIPE\n stdin = subprocess.PIPE\n if sublime.platform() == \"windows\":\n startupinfo = subprocess.STARTUPINFO() # type: ignore\n startupinfo.dwFlags |= subprocess.SW_HIDE | subprocess.STARTF_USESHOWWINDOW # type: ignore\n executable_arg = args[0]\n fname, ext = os.path.splitext(executable_arg)\n if len(ext) < 1:\n path_to_executable = shutil.which(executable_arg)\n # what extensions should we append so CreateProcess can find it?\n # node has .cmd\n # dart has .bat\n # python has .exe wrappers - not needed\n for extension in ['.cmd', '.bat']:\n if path_to_executable and path_to_executable.lower().endswith(extension):\n 
args[0] = executable_arg + extension\n break\n else:\n startupinfo = None\n debug(\"starting {} in {}\".format(args, cwd if cwd else os.getcwd()))\n process = subprocess.Popen(\n args=args,\n stdin=stdin,\n stdout=stdout,\n stderr=subprocess.PIPE,\n startupinfo=startupinfo,\n env=env,\n cwd=cwd)\n _subprocesses.add(process)\n sock = None # type: Optional[socket.socket]\n if tcp_port:\n sock = _connect_tcp(tcp_port)\n if sock is None:\n raise RuntimeError(\"Failed to connect on port {}\".format(config.tcp_port))\n reader = sock.makefile('rwb') # type: IO[bytes]\n writer = reader\n else:\n reader = process.stdout # type: ignore\n writer = process.stdin # type: ignore\n return JsonRpcTransport(config.name, process, sock, reader, writer, process.stderr, callback_object)\n\n\n_subprocesses = weakref.WeakSet() # type: weakref.WeakSet[subprocess.Popen]\n\n\ndef kill_all_subprocesses() -> None:\n global _subprocesses\n subprocesses = list(_subprocesses)\n for p in subprocesses:\n try:\n p.kill()\n except Exception:\n pass\n for p in subprocesses:\n try:\n p.wait()\n except Exception:\n pass\n\n\ndef _connect_tcp(port: int) -> Optional[socket.socket]:\n start_time = time.time()\n while time.time() - start_time < TCP_CONNECT_TIMEOUT:\n try:\n return socket.create_connection(('localhost', port))\n except ConnectionRefusedError:\n pass\n return None\n\n\ndef _find_free_port() -> int:\n with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:\n s.bind(('', 0))\n s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n return s.getsockname()[1]\n\n\ndef _encode(d: Dict[str, Any]) -> bytes:\n return json.dumps(d, sort_keys=False, check_circular=False, separators=(',', ':')).encode('utf-8')\n\n\ndef _decode(message: bytes) -> Dict[str, Any]:\n return json.loads(message.decode('utf-8'))\n\n\ndef _content_length(line: bytes) -> Optional[int]:\n if line.startswith(b'Content-Length: '):\n _, value = line.split(b'Content-Length: ')\n value = value.strip()\n try:\n return int(value)\n except ValueError as ex:\n raise ValueError(\"Invalid Content-Length header: {}\".format(value.decode('ascii'))) from ex\n return None\n", "path": "plugin/core/transports.py"}], "after_files": [{"content": "from .logging import exception_log, debug\nfrom .types import ClientConfig\nfrom .typing import Dict, Any, Optional, IO, Protocol\nfrom abc import ABCMeta, abstractmethod\nfrom contextlib import closing\nfrom queue import Queue\nimport json\nimport os\nimport shutil\nimport socket\nimport sublime\nimport subprocess\nimport threading\nimport time\nimport weakref\n\n\nTCP_CONNECT_TIMEOUT = 5\n\n\nclass Transport(metaclass=ABCMeta):\n\n @abstractmethod\n def send(self, payload: Dict[str, Any]) -> None:\n pass\n\n @abstractmethod\n def close(self) -> None:\n pass\n\n\nclass TransportCallbacks(Protocol):\n\n def on_transport_close(self, exit_code: int, exception: Optional[Exception]) -> None:\n ...\n\n def on_payload(self, payload: Dict[str, Any]) -> None:\n ...\n\n def on_stderr_message(self, message: str) -> None:\n ...\n\n\nclass JsonRpcTransport(Transport):\n\n def __init__(self, name: str, process: subprocess.Popen, socket: Optional[socket.socket], reader: IO[bytes],\n writer: IO[bytes], stderr: Optional[IO[bytes]], callback_object: TransportCallbacks) -> None:\n self._closed = False\n self._process = process\n self._socket = socket\n self._reader = reader\n self._writer = writer\n self._stderr = stderr\n self._reader_thread = threading.Thread(target=self._read_loop, name='{}-reader'.format(name))\n 
self._writer_thread = threading.Thread(target=self._write_loop, name='{}-writer'.format(name))\n self._stderr_thread = threading.Thread(target=self._stderr_loop, name='{}-stderr'.format(name))\n self._callback_object = weakref.ref(callback_object)\n self._send_queue = Queue(0) # type: Queue[Optional[Dict[str, Any]]]\n self._reader_thread.start()\n self._writer_thread.start()\n self._stderr_thread.start()\n\n def send(self, payload: Dict[str, Any]) -> None:\n self._send_queue.put_nowait(payload)\n\n def close(self) -> None:\n if not self._closed:\n self._send_queue.put_nowait(None)\n if self._socket:\n self._socket.close()\n self._closed = True\n\n def _join_thread(self, t: threading.Thread) -> None:\n if t.ident == threading.current_thread().ident:\n return\n try:\n t.join(2)\n except TimeoutError as ex:\n exception_log(\"failed to join {} thread\".format(t.name), ex)\n\n def __del__(self) -> None:\n self.close()\n self._join_thread(self._writer_thread)\n self._join_thread(self._reader_thread)\n self._join_thread(self._stderr_thread)\n\n def _read_loop(self) -> None:\n try:\n while self._reader:\n line = self._reader.readline()\n if not line:\n break\n try:\n num_bytes = _content_length(line)\n except ValueError:\n continue\n if num_bytes is None:\n continue\n while line and line.strip():\n line = self._reader.readline()\n if not line:\n continue\n body = self._reader.read(num_bytes)\n callback_object = self._callback_object()\n if callback_object:\n try:\n callback_object.on_payload(_decode(body))\n except Exception as ex:\n exception_log(\"Error handling payload\", ex)\n else:\n break\n except (AttributeError, BrokenPipeError):\n pass\n except Exception as ex:\n exception_log(\"Unexpected exception\", ex)\n self._send_queue.put_nowait(None)\n\n def _end(self, exception: Optional[Exception]) -> None:\n exit_code = 0\n if not exception:\n try:\n # Allow the process to stop itself.\n exit_code = self._process.wait(1)\n except (AttributeError, ProcessLookupError, subprocess.TimeoutExpired):\n pass\n if self._process:\n try:\n # The process didn't stop itself. 
Terminate!\n self._process.kill()\n # still wait for the process to die, or zombie processes might be the result\n # Ignore the exit code in this case, it's going to be something non-zero because we sent SIGKILL.\n self._process.wait()\n except (AttributeError, ProcessLookupError):\n pass\n except Exception as ex:\n exception = ex # TODO: Old captured exception is overwritten\n callback_object = self._callback_object()\n if callback_object:\n callback_object.on_transport_close(exit_code, exception)\n self.close()\n\n def _write_loop(self) -> None:\n exception = None # type: Optional[Exception]\n try:\n while self._writer:\n d = self._send_queue.get()\n if d is None:\n break\n body = _encode(d)\n self._writer.writelines((\"Content-Length: {}\\r\\n\\r\\n\".format(len(body)).encode('ascii'), body))\n self._writer.flush()\n except (BrokenPipeError, AttributeError):\n pass\n except Exception as ex:\n exception = ex\n self._end(exception)\n\n def _stderr_loop(self) -> None:\n try:\n while self._stderr:\n if self._closed:\n break\n message = self._stderr.readline().decode('utf-8', 'replace').rstrip()\n callback_object = self._callback_object()\n if callback_object:\n callback_object.on_stderr_message(message)\n else:\n break\n except (BrokenPipeError, AttributeError):\n pass\n except Exception as ex:\n exception_log('unexpected exception type in stderr loop', ex)\n self._send_queue.put_nowait(None)\n\n\ndef create_transport(config: ClientConfig, cwd: Optional[str], window: sublime.Window,\n callback_object: TransportCallbacks, variables: Dict[str, str]) -> JsonRpcTransport:\n tcp_port = None # type: Optional[int]\n if config.tcp_port is not None:\n tcp_port = _find_free_port() if config.tcp_port == 0 else config.tcp_port\n if tcp_port is not None:\n variables[\"port\"] = str(tcp_port)\n args = sublime.expand_variables(config.binary_args, variables)\n args = [os.path.expanduser(arg) for arg in args]\n if tcp_port is not None:\n # DEPRECATED -- replace {port} with $port or ${port} in your client config\n args = [a.replace('{port}', str(tcp_port)) for a in args]\n env = os.environ.copy()\n for var, value in config.env.items():\n env[var] = sublime.expand_variables(value, variables)\n if tcp_port is not None:\n stdout = subprocess.DEVNULL\n stdin = subprocess.DEVNULL\n else:\n stdout = subprocess.PIPE\n stdin = subprocess.PIPE\n if sublime.platform() == \"windows\":\n startupinfo = subprocess.STARTUPINFO() # type: ignore\n startupinfo.dwFlags |= subprocess.SW_HIDE | subprocess.STARTF_USESHOWWINDOW # type: ignore\n executable_arg = args[0]\n fname, ext = os.path.splitext(executable_arg)\n if len(ext) < 1:\n path_to_executable = shutil.which(executable_arg)\n # what extensions should we append so CreateProcess can find it?\n # node has .cmd\n # dart has .bat\n # python has .exe wrappers - not needed\n for extension in ['.cmd', '.bat']:\n if path_to_executable and path_to_executable.lower().endswith(extension):\n args[0] = executable_arg + extension\n break\n else:\n startupinfo = None\n debug(\"starting {} in {}\".format(args, cwd if cwd else os.getcwd()))\n process = subprocess.Popen(\n args=args,\n stdin=stdin,\n stdout=stdout,\n stderr=subprocess.PIPE,\n startupinfo=startupinfo,\n env=env,\n cwd=cwd)\n _subprocesses.add(process)\n sock = None # type: Optional[socket.socket]\n if tcp_port:\n sock = _connect_tcp(tcp_port)\n if sock is None:\n raise RuntimeError(\"Failed to connect on port {}\".format(config.tcp_port))\n reader = sock.makefile('rwb') # type: IO[bytes]\n writer = reader\n else:\n reader 
= process.stdout # type: ignore\n writer = process.stdin # type: ignore\n return JsonRpcTransport(config.name, process, sock, reader, writer, process.stderr, callback_object)\n\n\n_subprocesses = weakref.WeakSet() # type: weakref.WeakSet[subprocess.Popen]\n\n\ndef kill_all_subprocesses() -> None:\n global _subprocesses\n subprocesses = list(_subprocesses)\n for p in subprocesses:\n try:\n p.kill()\n except Exception:\n pass\n for p in subprocesses:\n try:\n p.wait()\n except Exception:\n pass\n\n\ndef _connect_tcp(port: int) -> Optional[socket.socket]:\n start_time = time.time()\n while time.time() - start_time < TCP_CONNECT_TIMEOUT:\n try:\n return socket.create_connection(('localhost', port))\n except ConnectionRefusedError:\n pass\n return None\n\n\ndef _find_free_port() -> int:\n with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:\n s.bind(('', 0))\n s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n return s.getsockname()[1]\n\n\ndef _encode(d: Dict[str, Any]) -> bytes:\n return json.dumps(d, sort_keys=False, check_circular=False, separators=(',', ':')).encode('utf-8')\n\n\ndef _decode(message: bytes) -> Dict[str, Any]:\n return json.loads(message.decode('utf-8'))\n\n\ndef _content_length(line: bytes) -> Optional[int]:\n if line.startswith(b'Content-Length: '):\n _, value = line.split(b'Content-Length: ')\n value = value.strip()\n try:\n return int(value)\n except ValueError as ex:\n raise ValueError(\"Invalid Content-Length header: {}\".format(value.decode('ascii'))) from ex\n return None\n", "path": "plugin/core/transports.py"}]} | 3,283 | 99 |
gh_patches_debug_379 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-3650 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Non-executable files with shebangs in the repository
**Describe your environment**
(Nothing relevant to describe)
**Steps to reproduce**
```
$ rg -l '^#!' | xargs ls -l
-rwxr-xr-x. 1 ben ben 1420 Jul 5 2023 docs/examples/django/manage.py
-rw-r--r--. 1 ben ben 1300 Jul 5 2023 docs/examples/opencensus-exporter-tracer/collector.py
-rwxr-xr-x. 1 ben ben 1485 Jul 5 2023 docs/examples/opentracing/main.py
-rwxr-xr-x. 1 ben ben 853 Jul 13 2023 scripts/build.sh
-rwxr-xr-x. 1 ben ben 1163 Jan 22 10:06 scripts/coverage.sh
-rwxr-xr-x. 1 ben ben 20741 Jul 13 2023 scripts/eachdist.py
-rwxr-xr-x. 1 ben ben 215 Jul 5 2023 scripts/generate_website_docs.sh
-rwxr-xr-x. 1 ben ben 2377 Jan 22 10:06 scripts/proto_codegen.sh
-rwxr-xr-x. 1 ben ben 1928 Jan 22 10:06 scripts/semconv/generate.sh
-rwxr-xr-x. 1 ben ben 945 Jul 5 2023 scripts/tracecontext-integration-test.sh
-rw-r--r--. 1 ben ben 2519 Jan 22 11:43 tests/w3c_tracecontext_validation_server.py
```
Note that two files have shebang lines (`#!`) but do not have the executable bit set, which makes the shebang lines useless.
**What is the expected behavior?**
Files should either be non-executable and have no shebang line, or be executable and have a shebang line.
**What is the actual behavior?**
The following files are not executable and have useless shebang lines:
- `docs/examples/opencensus-exporter-tracer/collector.py`
- `tests/w3c_tracecontext_validation_server.py`
**Additional context**
This is a trivial thing, but I would like to fix it in a PR – either by setting the executable bit on these two files, or by removing the useless shebang lines. Both files are “script-like,” i.e. they have `if __name__ == "__main__"` or have useful side effects. Which approach would you prefer?
--- END ISSUE ---
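As a rough illustration, the mismatch described in the issue can be detected with a short script; the helper name and the use of `os.access` below are assumptions, not code from the repository:
```python
import os

def shebang_without_exec_bit(paths):
    """Yield paths whose first bytes are '#!' but whose executable bit is unset."""
    for path in paths:
        try:
            with open(path, "rb") as handle:
                has_shebang = handle.read(2) == b"#!"
        except OSError:
            continue
        if has_shebang and not os.access(path, os.X_OK):
            yield path

# e.g. print(list(shebang_without_exec_bit(["docs/examples/opencensus-exporter-tracer/collector.py"])))
```
Either setting the executable bit (`chmod +x`) on such files or dropping their shebang lines would make the two signals consistent.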
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/examples/opencensus-exporter-tracer/collector.py`
Content:
```
1 #!/usr/bin/env python3
2 #
3 # Copyright The OpenTelemetry Authors
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from opentelemetry import trace
18 from opentelemetry.exporter.opencensus.trace_exporter import (
19 OpenCensusSpanExporter,
20 )
21 from opentelemetry.sdk.trace import TracerProvider
22 from opentelemetry.sdk.trace.export import BatchSpanProcessor
23
24 exporter = OpenCensusSpanExporter(endpoint="localhost:55678")
25
26 trace.set_tracer_provider(TracerProvider())
27 tracer = trace.get_tracer(__name__)
28 span_processor = BatchSpanProcessor(exporter)
29
30 trace.get_tracer_provider().add_span_processor(span_processor)
31 with tracer.start_as_current_span("foo"):
32 with tracer.start_as_current_span("bar"):
33 with tracer.start_as_current_span("baz"):
34 print("Hello world from OpenTelemetry Python!")
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/examples/opencensus-exporter-tracer/collector.py b/docs/examples/opencensus-exporter-tracer/collector.py
--- a/docs/examples/opencensus-exporter-tracer/collector.py
+++ b/docs/examples/opencensus-exporter-tracer/collector.py
@@ -1,5 +1,3 @@
-#!/usr/bin/env python3
-#
# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
| {"golden_diff": "diff --git a/docs/examples/opencensus-exporter-tracer/collector.py b/docs/examples/opencensus-exporter-tracer/collector.py\n--- a/docs/examples/opencensus-exporter-tracer/collector.py\n+++ b/docs/examples/opencensus-exporter-tracer/collector.py\n@@ -1,5 +1,3 @@\n-#!/usr/bin/env python3\n-#\n # Copyright The OpenTelemetry Authors\n #\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n", "issue": "Non-executable files with shebangs in the repository\n**Describe your environment**\r\n\r\n(Nothing relevant to describe)\r\n\r\n**Steps to reproduce**\r\n\r\n```\r\n$ rg -l '^#!' | xargs ls -l\r\n-rwxr-xr-x. 1 ben ben 1420 Jul 5 2023 docs/examples/django/manage.py\r\n-rw-r--r--. 1 ben ben 1300 Jul 5 2023 docs/examples/opencensus-exporter-tracer/collector.py\r\n-rwxr-xr-x. 1 ben ben 1485 Jul 5 2023 docs/examples/opentracing/main.py\r\n-rwxr-xr-x. 1 ben ben 853 Jul 13 2023 scripts/build.sh\r\n-rwxr-xr-x. 1 ben ben 1163 Jan 22 10:06 scripts/coverage.sh\r\n-rwxr-xr-x. 1 ben ben 20741 Jul 13 2023 scripts/eachdist.py\r\n-rwxr-xr-x. 1 ben ben 215 Jul 5 2023 scripts/generate_website_docs.sh\r\n-rwxr-xr-x. 1 ben ben 2377 Jan 22 10:06 scripts/proto_codegen.sh\r\n-rwxr-xr-x. 1 ben ben 1928 Jan 22 10:06 scripts/semconv/generate.sh\r\n-rwxr-xr-x. 1 ben ben 945 Jul 5 2023 scripts/tracecontext-integration-test.sh\r\n-rw-r--r--. 1 ben ben 2519 Jan 22 11:43 tests/w3c_tracecontext_validation_server.py\r\n```\r\n\r\nNote that two files have shebang lines (`#!`) but do not have the executable bit set, which makes the shebang lines useless.\r\n\r\n**What is the expected behavior?**\r\n\r\nFiles should either be non-executable and have no shebang line, or be executable and have a shebang line.\r\n\r\n**What is the actual behavior?**\r\n\r\nThe following files are not executable and have useless shebang lines:\r\n\r\n- `docs/examples/opencensus-exporter-tracer/collector.py`\r\n- `tests/w3c_tracecontext_validation_server.py`\r\n\r\n**Additional context**\r\n\r\nThis is a trivial thing, but I would like to fix it in a PR \u2013 either by setting the executable bit on these two files, or by removing the useless shebang lines. Both files are \u201cscript-like,\u201d i.e. they have `if __name__ == \"__main__\"` or have useful side effects. 
Which approach would you prefer?\n", "before_files": [{"content": "#!/usr/bin/env python3\n#\n# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom opentelemetry import trace\nfrom opentelemetry.exporter.opencensus.trace_exporter import (\n OpenCensusSpanExporter,\n)\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor\n\nexporter = OpenCensusSpanExporter(endpoint=\"localhost:55678\")\n\ntrace.set_tracer_provider(TracerProvider())\ntracer = trace.get_tracer(__name__)\nspan_processor = BatchSpanProcessor(exporter)\n\ntrace.get_tracer_provider().add_span_processor(span_processor)\nwith tracer.start_as_current_span(\"foo\"):\n with tracer.start_as_current_span(\"bar\"):\n with tracer.start_as_current_span(\"baz\"):\n print(\"Hello world from OpenTelemetry Python!\")\n", "path": "docs/examples/opencensus-exporter-tracer/collector.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom opentelemetry import trace\nfrom opentelemetry.exporter.opencensus.trace_exporter import (\n OpenCensusSpanExporter,\n)\nfrom opentelemetry.sdk.trace import TracerProvider\nfrom opentelemetry.sdk.trace.export import BatchSpanProcessor\n\nexporter = OpenCensusSpanExporter(endpoint=\"localhost:55678\")\n\ntrace.set_tracer_provider(TracerProvider())\ntracer = trace.get_tracer(__name__)\nspan_processor = BatchSpanProcessor(exporter)\n\ntrace.get_tracer_provider().add_span_processor(span_processor)\nwith tracer.start_as_current_span(\"foo\"):\n with tracer.start_as_current_span(\"bar\"):\n with tracer.start_as_current_span(\"baz\"):\n print(\"Hello world from OpenTelemetry Python!\")\n", "path": "docs/examples/opencensus-exporter-tracer/collector.py"}]} | 1,229 | 106 |
gh_patches_debug_33894 | rasdani/github-patches | git_diff | google__TensorNetwork-377 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enable contractor contraction on disconnected graphs
When we use a tensor network to represent a tensor in quantum physics, there is no rule that the network must be connected. For example, we can represent an exterior product with two disconnected nodes.
--- END ISSUE ---
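As a minimal sketch of the behaviour being requested (the numpy backend and the example arrays are assumptions), two nodes that share no edge form a disconnected graph, and contracting them is simply their outer product:
```python
import numpy as np
from tensornetwork import Node, contractors

a = Node(np.array([1.0, 2.0]))        # one dangling edge, dimension 2
b = Node(np.array([3.0, 4.0, 5.0]))   # one dangling edge, dimension 3

# With disconnected graphs supported, this should return a node whose tensor
# equals np.outer(a.tensor, b.tensor) instead of failing the connectivity check.
result = contractors.greedy([a, b], output_edge_order=a.edges + b.edges)
```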
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensornetwork/contractors/opt_einsum_paths/path_contractors.py`
Content:
```
1 # pylint: disable=cyclic-import
2 # Copyright 2019 The TensorNetwork Authors
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Contractors based on `opt_einsum`'s path algorithms."""
16
17 import functools
18 import opt_einsum
19 # pylint: disable=line-too-long
20 from tensornetwork.network_operations import check_connected, get_all_edges, get_subgraph_dangling
21 # pylint: disable=line-too-long
22 from tensornetwork.network_components import get_all_nondangling, contract_parallel
23 from tensornetwork.network_components import Edge, BaseNode
24 from tensornetwork.contractors.opt_einsum_paths import utils
25 from typing import Any, Optional, Sequence, Iterable
26
27 #TODO (martin): add return types of functions back once TensorNetwork is gone
28 # remove _base_network
29 # _base_nodes -> base
30
31
32 def base(nodes: Iterable[BaseNode],
33 algorithm: utils.Algorithm,
34 output_edge_order: Optional[Sequence[Edge]] = None,
35 ignore_edge_order: bool = False) -> BaseNode:
36 """Base method for all `opt_einsum` contractors.
37
38 Args:
39 nodes: A collection of connected nodes.
40 algorithm: `opt_einsum` contraction method to use.
41 output_edge_order: An optional list of edges. Edges of the
42 final node in `nodes_set`
43 are reordered into `output_edge_order`;
44 if final node has more than one edge,
45 `output_edge_order` must be pronvided.
46 ignore_edge_order: An option to ignore the output edge
47 order.
48
49 Returns:
50 Final node after full contraction.
51 """
52 nodes_set = set(nodes)
53 check_connected(nodes_set)
54 edges = get_all_edges(nodes_set)
55 #output edge order has to be determinded before any contraction
56 #(edges are refreshed after contractions)
57
58 if not ignore_edge_order:
59 if output_edge_order is None:
60 output_edge_order = list(get_subgraph_dangling(nodes))
61 if len(output_edge_order) > 1:
62 raise ValueError("The final node after contraction has more than "
63 "one remaining edge. In this case `output_edge_order` "
64 "has to be provided.")
65
66 if set(output_edge_order) != get_subgraph_dangling(nodes):
67 raise ValueError(
68 "output edges are not equal to the remaining "
69 "non-contracted edges of the final node."
70 )
71
72 for edge in edges:
73 if not edge.is_disabled: #if its disabled we already contracted it
74 if edge.is_trace():
75 nodes_set.remove(edge.node1)
76 nodes_set.add(contract_parallel(edge))
77
78 if len(nodes_set) == 1:
79 # There's nothing to contract.
80 if ignore_edge_order:
81 return list(nodes_set)[0]
82 return list(nodes_set)[0].reorder_edges(output_edge_order)
83
84 # Then apply `opt_einsum`'s algorithm
85 path, nodes = utils.get_path(nodes_set, algorithm)
86 for a, b in path:
87 new_node = nodes[a] @ nodes[b]
88 nodes.append(new_node)
89 nodes = utils.multi_remove(nodes, [a, b])
90
91 # if the final node has more than one edge,
92 # output_edge_order has to be specified
93 final_node = nodes[0] # nodes were connected, we checked this
94 if not ignore_edge_order:
95 final_node.reorder_edges(output_edge_order)
96 return final_node
97
98
99 def optimal(
100 nodes: Iterable[BaseNode],
101 output_edge_order: Optional[Sequence[Edge]] = None,
102 memory_limit: Optional[int] = None,
103 ignore_edge_order: bool = False) -> BaseNode:
104 """Optimal contraction order via `opt_einsum`.
105
106 This method will find the truly optimal contraction order via
107 `opt_einsum`'s depth first search algorithm. Since this search is
108 exhaustive, if your network is large (n>10), then the search may
109 take longer than just contracting in a suboptimal way.
110
111 Args:
112 nodes: an iterable of Nodes
113 output_edge_order: An optional list of edges.
114 Edges of the final node in `nodes_set`
115 are reordered into `output_edge_order`;
116 if final node has more than one edge,
117 `output_edge_order` must be provided.
118 memory_limit: Maximum number of elements in an array during contractions.
119 ignore_edge_order: An option to ignore the output edge order.
120
121 Returns:
122 The final node after full contraction.
123 """
124 alg = functools.partial(opt_einsum.paths.optimal, memory_limit=memory_limit)
125 return base(nodes, alg, output_edge_order, ignore_edge_order)
126
127
128 def branch(nodes: Iterable[BaseNode],
129 output_edge_order: Optional[Sequence[Edge]] = None,
130 memory_limit: Optional[int] = None,
131 nbranch: Optional[int] = None,
132 ignore_edge_order: bool = False) -> BaseNode:
133 """Branch contraction path via `opt_einsum`.
134
135 This method uses the DFS approach of `optimal` while sorting potential
136 contractions based on a heuristic cost, in order to reduce time spent
137 in exploring paths which are unlikely to be optimal.
138 For more details:
139 https://optimized-einsum.readthedocs.io/en/latest/branching_path.html
140
141 Args:
142 nodes: an iterable of Nodes
143 output_edge_order: An optional list of edges.
144 Edges of the final node in `nodes_set`
145 are reordered into `output_edge_order`;
146 if final node has more than one edge,
147 `output_edge_order` must be provided.
148 memory_limit: Maximum number of elements in an array during contractions.
149 nbranch: Number of best contractions to explore.
150 If None it explores all inner products starting with those that
151 have the best cost heuristic.
152 ignore_edge_order: An option to ignore the output edge order.
153
154 Returns:
155 The final node after full contraction.
156 """
157 alg = functools.partial(
158 opt_einsum.paths.branch, memory_limit=memory_limit, nbranch=nbranch)
159 return base(nodes, alg, output_edge_order, ignore_edge_order)
160
161
162 def greedy(
163 nodes: Iterable[BaseNode],
164 output_edge_order: Optional[Sequence[Edge]] = None,
165 memory_limit: Optional[int] = None,
166 ignore_edge_order: bool = False) -> BaseNode:
167 """Greedy contraction path via `opt_einsum`.
168
169 This provides a more efficient strategy than `optimal` for finding
170 contraction paths in large networks. First contracts pairs of tensors
171 by finding the pair with the lowest cost at each step. Then it performs
172 the outer products.
173 For more details:
174 https://optimized-einsum.readthedocs.io/en/latest/greedy_path.html
175
176 Args:
177 nodes: an iterable of Nodes
178 output_edge_order: An optional list of edges.
179 Edges of the final node in `nodes_set`
180 are reordered into `output_edge_order`;
181 if final node has more than one edge,
182 `output_edge_order` must be provided.
183 memory_limit: Maximum number of elements in an array during contractions.
184 ignore_edge_order: An option to ignore the output edge order.
185
186 Returns:
187 The final node after full contraction.
188 """
189 alg = functools.partial(opt_einsum.paths.greedy, memory_limit=memory_limit)
190 return base(nodes, alg, output_edge_order, ignore_edge_order)
191
192
193 # pylint: disable=too-many-return-statements
194 def auto(
195 nodes: BaseNode,
196 output_edge_order: Optional[Sequence[Edge]] = None,
197 memory_limit: Optional[int] = None,
198 ignore_edge_order: bool = False) -> BaseNode:
199 """Chooses one of the above algorithms according to network size.
200
201 Default behavior is based on `opt_einsum`'s `auto` contractor.
202
203 Args:
204 nodes: A collection of connected nodes.
205 output_edge_order: An optional list of edges.
206 Edges of the final node in `nodes_set`
207 are reordered into `output_edge_order`;
208 if final node has more than one edge,
209 `output_edge_order` must be provided.
210 memory_limit: Maximum number of elements in an array during contractions.
211 ignore_edge_order: An option to ignore the output edge order.
212
213 Returns:
214 Final node after full contraction.
215 """
216
217 n = len(list(nodes)) #pytype thing
218 _nodes = nodes
219 if n <= 0:
220 raise ValueError("Cannot contract empty tensor network.")
221 if n == 1:
222 if not ignore_edge_order:
223 if output_edge_order is None:
224 output_edge_order = list(
225 (get_all_edges(_nodes) - get_all_nondangling(_nodes)))
226 if len(output_edge_order) > 1:
227 raise ValueError("The final node after contraction has more than "
228 "one dangling edge. In this case `output_edge_order` "
229 "has to be provided.")
230
231 edges = get_all_nondangling(_nodes)
232 if edges:
233 final_node = contract_parallel(edges.pop())
234 else:
235 final_node = list(_nodes)[0]
236 final_node.reorder_edges(output_edge_order)
237 if not ignore_edge_order:
238 final_node.reorder_edges(output_edge_order)
239 return final_node
240
241 if n < 5:
242 return optimal(nodes, output_edge_order, memory_limit, ignore_edge_order)
243 if n < 7:
244 return branch(nodes, output_edge_order, memory_limit, ignore_edge_order)
245 if n < 9:
246 return branch(nodes, output_edge_order, memory_limit, nbranch=2, ignore_edge_order=ignore_edge_order)
247 if n < 15:
248 return branch(nodes, output_edge_order, nbranch=1, ignore_edge_order=ignore_edge_order)
249 return greedy(nodes, output_edge_order, memory_limit, ignore_edge_order)
250
251
252 def custom(
253 nodes: Iterable[BaseNode],
254 optimizer: Any,
255 output_edge_order: Sequence[Edge] = None,
256 memory_limit: Optional[int] = None,
257 ignore_edge_order: bool = False) -> BaseNode:
258 """Uses a custom path optimizer created by the user to calculate paths.
259
260 The custom path optimizer should inherit `opt_einsum`'s `PathOptimizer`.
261 For more details:
262 https://optimized-einsum.readthedocs.io/en/latest/custom_paths.html
263
264 Args:
265 nodes: an iterable of Nodes
266 output_edge_order: An optional list of edges.
267 Edges of the final node in `nodes_set`
268 are reordered into `output_edge_order`;
269 if final node has more than one edge,
270 output_edge_order` must be provided.
271 optimizer: A custom `opt_einsum.PathOptimizer` object.
272 memory_limit: Maximum number of elements in an array during contractions.
273 ignore_edge_order: An option to ignore the output edge order.
274
275 Returns:
276 Final node after full contraction.
277 """
278 alg = functools.partial(optimizer, memory_limit=memory_limit)
279 return base(nodes, alg, output_edge_order, ignore_edge_order)
280
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tensornetwork/contractors/opt_einsum_paths/path_contractors.py b/tensornetwork/contractors/opt_einsum_paths/path_contractors.py
--- a/tensornetwork/contractors/opt_einsum_paths/path_contractors.py
+++ b/tensornetwork/contractors/opt_einsum_paths/path_contractors.py
@@ -19,7 +19,7 @@
# pylint: disable=line-too-long
from tensornetwork.network_operations import check_connected, get_all_edges, get_subgraph_dangling
# pylint: disable=line-too-long
-from tensornetwork.network_components import get_all_nondangling, contract_parallel
+from tensornetwork.network_components import get_all_nondangling, contract_parallel, contract_between
from tensornetwork.network_components import Edge, BaseNode
from tensornetwork.contractors.opt_einsum_paths import utils
from typing import Any, Optional, Sequence, Iterable
@@ -50,7 +50,6 @@
Final node after full contraction.
"""
nodes_set = set(nodes)
- check_connected(nodes_set)
edges = get_all_edges(nodes_set)
#output edge order has to be determinded before any contraction
#(edges are refreshed after contractions)
@@ -84,7 +83,7 @@
# Then apply `opt_einsum`'s algorithm
path, nodes = utils.get_path(nodes_set, algorithm)
for a, b in path:
- new_node = nodes[a] @ nodes[b]
+ new_node = contract_between(nodes[a], nodes[b], allow_outer_product=True)
nodes.append(new_node)
nodes = utils.multi_remove(nodes, [a, b])
@@ -192,7 +191,7 @@
# pylint: disable=too-many-return-statements
def auto(
- nodes: BaseNode,
+ nodes: Iterable[BaseNode],
output_edge_order: Optional[Sequence[Edge]] = None,
memory_limit: Optional[int] = None,
ignore_edge_order: bool = False) -> BaseNode:
| {"golden_diff": "diff --git a/tensornetwork/contractors/opt_einsum_paths/path_contractors.py b/tensornetwork/contractors/opt_einsum_paths/path_contractors.py\n--- a/tensornetwork/contractors/opt_einsum_paths/path_contractors.py\n+++ b/tensornetwork/contractors/opt_einsum_paths/path_contractors.py\n@@ -19,7 +19,7 @@\n # pylint: disable=line-too-long\n from tensornetwork.network_operations import check_connected, get_all_edges, get_subgraph_dangling\n # pylint: disable=line-too-long\n-from tensornetwork.network_components import get_all_nondangling, contract_parallel\n+from tensornetwork.network_components import get_all_nondangling, contract_parallel, contract_between\n from tensornetwork.network_components import Edge, BaseNode\n from tensornetwork.contractors.opt_einsum_paths import utils\n from typing import Any, Optional, Sequence, Iterable\n@@ -50,7 +50,6 @@\n Final node after full contraction.\n \"\"\"\n nodes_set = set(nodes)\n- check_connected(nodes_set)\n edges = get_all_edges(nodes_set)\n #output edge order has to be determinded before any contraction\n #(edges are refreshed after contractions)\n@@ -84,7 +83,7 @@\n # Then apply `opt_einsum`'s algorithm\n path, nodes = utils.get_path(nodes_set, algorithm)\n for a, b in path:\n- new_node = nodes[a] @ nodes[b]\n+ new_node = contract_between(nodes[a], nodes[b], allow_outer_product=True)\n nodes.append(new_node)\n nodes = utils.multi_remove(nodes, [a, b])\n \n@@ -192,7 +191,7 @@\n \n # pylint: disable=too-many-return-statements\n def auto(\n- nodes: BaseNode,\n+ nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n", "issue": "Enable contractor contraction on disconnected graphs\nWhen we are using tensor network to represent a tensor in the quantum physics, there is no rule that the network must be connected. 
for example, we can represent an exterior product from two non-connected nodes.\n", "before_files": [{"content": "# pylint: disable=cyclic-import\n# Copyright 2019 The TensorNetwork Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Contractors based on `opt_einsum`'s path algorithms.\"\"\"\n\nimport functools\nimport opt_einsum\n# pylint: disable=line-too-long\nfrom tensornetwork.network_operations import check_connected, get_all_edges, get_subgraph_dangling\n# pylint: disable=line-too-long\nfrom tensornetwork.network_components import get_all_nondangling, contract_parallel\nfrom tensornetwork.network_components import Edge, BaseNode\nfrom tensornetwork.contractors.opt_einsum_paths import utils\nfrom typing import Any, Optional, Sequence, Iterable\n\n#TODO (martin): add return types of functions back once TensorNetwork is gone\n# remove _base_network\n# _base_nodes -> base\n\n\ndef base(nodes: Iterable[BaseNode],\n algorithm: utils.Algorithm,\n output_edge_order: Optional[Sequence[Edge]] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Base method for all `opt_einsum` contractors.\n\n Args:\n nodes: A collection of connected nodes.\n algorithm: `opt_einsum` contraction method to use.\n output_edge_order: An optional list of edges. Edges of the\n final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be pronvided.\n ignore_edge_order: An option to ignore the output edge\n order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n nodes_set = set(nodes)\n check_connected(nodes_set)\n edges = get_all_edges(nodes_set)\n #output edge order has to be determinded before any contraction\n #(edges are refreshed after contractions)\n\n if not ignore_edge_order:\n if output_edge_order is None:\n output_edge_order = list(get_subgraph_dangling(nodes))\n if len(output_edge_order) > 1:\n raise ValueError(\"The final node after contraction has more than \"\n \"one remaining edge. 
In this case `output_edge_order` \"\n \"has to be provided.\")\n\n if set(output_edge_order) != get_subgraph_dangling(nodes):\n raise ValueError(\n \"output edges are not equal to the remaining \"\n \"non-contracted edges of the final node.\"\n )\n\n for edge in edges:\n if not edge.is_disabled: #if its disabled we already contracted it\n if edge.is_trace():\n nodes_set.remove(edge.node1)\n nodes_set.add(contract_parallel(edge))\n\n if len(nodes_set) == 1:\n # There's nothing to contract.\n if ignore_edge_order:\n return list(nodes_set)[0]\n return list(nodes_set)[0].reorder_edges(output_edge_order)\n\n # Then apply `opt_einsum`'s algorithm\n path, nodes = utils.get_path(nodes_set, algorithm)\n for a, b in path:\n new_node = nodes[a] @ nodes[b]\n nodes.append(new_node)\n nodes = utils.multi_remove(nodes, [a, b])\n\n # if the final node has more than one edge,\n # output_edge_order has to be specified\n final_node = nodes[0] # nodes were connected, we checked this\n if not ignore_edge_order:\n final_node.reorder_edges(output_edge_order)\n return final_node\n\n\ndef optimal(\n nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Optimal contraction order via `opt_einsum`.\n\n This method will find the truly optimal contraction order via\n `opt_einsum`'s depth first search algorithm. Since this search is\n exhaustive, if your network is large (n>10), then the search may\n take longer than just contracting in a suboptimal way.\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(opt_einsum.paths.optimal, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\ndef branch(nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n nbranch: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Branch contraction path via `opt_einsum`.\n\n This method uses the DFS approach of `optimal` while sorting potential\n contractions based on a heuristic cost, in order to reduce time spent\n in exploring paths which are unlikely to be optimal.\n For more details:\n https://optimized-einsum.readthedocs.io/en/latest/branching_path.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n nbranch: Number of best contractions to explore.\n If None it explores all inner products starting with those that\n have the best cost heuristic.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(\n opt_einsum.paths.branch, memory_limit=memory_limit, nbranch=nbranch)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\ndef greedy(\n nodes: Iterable[BaseNode],\n 
output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Greedy contraction path via `opt_einsum`.\n\n This provides a more efficient strategy than `optimal` for finding\n contraction paths in large networks. First contracts pairs of tensors\n by finding the pair with the lowest cost at each step. Then it performs\n the outer products.\n For more details:\n https://optimized-einsum.readthedocs.io/en/latest/greedy_path.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(opt_einsum.paths.greedy, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\n# pylint: disable=too-many-return-statements\ndef auto(\n nodes: BaseNode,\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Chooses one of the above algorithms according to network size.\n\n Default behavior is based on `opt_einsum`'s `auto` contractor.\n\n Args:\n nodes: A collection of connected nodes.\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n\n n = len(list(nodes)) #pytype thing\n _nodes = nodes\n if n <= 0:\n raise ValueError(\"Cannot contract empty tensor network.\")\n if n == 1:\n if not ignore_edge_order:\n if output_edge_order is None:\n output_edge_order = list(\n (get_all_edges(_nodes) - get_all_nondangling(_nodes)))\n if len(output_edge_order) > 1:\n raise ValueError(\"The final node after contraction has more than \"\n \"one dangling edge. 
In this case `output_edge_order` \"\n \"has to be provided.\")\n\n edges = get_all_nondangling(_nodes)\n if edges:\n final_node = contract_parallel(edges.pop())\n else:\n final_node = list(_nodes)[0]\n final_node.reorder_edges(output_edge_order)\n if not ignore_edge_order:\n final_node.reorder_edges(output_edge_order)\n return final_node\n\n if n < 5:\n return optimal(nodes, output_edge_order, memory_limit, ignore_edge_order)\n if n < 7:\n return branch(nodes, output_edge_order, memory_limit, ignore_edge_order)\n if n < 9:\n return branch(nodes, output_edge_order, memory_limit, nbranch=2, ignore_edge_order=ignore_edge_order)\n if n < 15:\n return branch(nodes, output_edge_order, nbranch=1, ignore_edge_order=ignore_edge_order)\n return greedy(nodes, output_edge_order, memory_limit, ignore_edge_order)\n\n\ndef custom(\n nodes: Iterable[BaseNode],\n optimizer: Any,\n output_edge_order: Sequence[Edge] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Uses a custom path optimizer created by the user to calculate paths.\n\n The custom path optimizer should inherit `opt_einsum`'s `PathOptimizer`.\n For more details:\n https://optimized-einsum.readthedocs.io/en/latest/custom_paths.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n output_edge_order` must be provided.\n optimizer: A custom `opt_einsum.PathOptimizer` object.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n alg = functools.partial(optimizer, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n", "path": "tensornetwork/contractors/opt_einsum_paths/path_contractors.py"}], "after_files": [{"content": "# pylint: disable=cyclic-import\n# Copyright 2019 The TensorNetwork Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Contractors based on `opt_einsum`'s path algorithms.\"\"\"\n\nimport functools\nimport opt_einsum\n# pylint: disable=line-too-long\nfrom tensornetwork.network_operations import check_connected, get_all_edges, get_subgraph_dangling\n# pylint: disable=line-too-long\nfrom tensornetwork.network_components import get_all_nondangling, contract_parallel, contract_between\nfrom tensornetwork.network_components import Edge, BaseNode\nfrom tensornetwork.contractors.opt_einsum_paths import utils\nfrom typing import Any, Optional, Sequence, Iterable\n\n#TODO (martin): add return types of functions back once TensorNetwork is gone\n# remove _base_network\n# _base_nodes -> base\n\n\ndef base(nodes: Iterable[BaseNode],\n algorithm: utils.Algorithm,\n output_edge_order: Optional[Sequence[Edge]] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Base method for all `opt_einsum` contractors.\n\n Args:\n nodes: A 
collection of connected nodes.\n algorithm: `opt_einsum` contraction method to use.\n output_edge_order: An optional list of edges. Edges of the\n final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be pronvided.\n ignore_edge_order: An option to ignore the output edge\n order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n nodes_set = set(nodes)\n edges = get_all_edges(nodes_set)\n #output edge order has to be determinded before any contraction\n #(edges are refreshed after contractions)\n\n if not ignore_edge_order:\n if output_edge_order is None:\n output_edge_order = list(get_subgraph_dangling(nodes))\n if len(output_edge_order) > 1:\n raise ValueError(\"The final node after contraction has more than \"\n \"one remaining edge. In this case `output_edge_order` \"\n \"has to be provided.\")\n\n if set(output_edge_order) != get_subgraph_dangling(nodes):\n raise ValueError(\n \"output edges are not equal to the remaining \"\n \"non-contracted edges of the final node.\"\n )\n\n for edge in edges:\n if not edge.is_disabled: #if its disabled we already contracted it\n if edge.is_trace():\n nodes_set.remove(edge.node1)\n nodes_set.add(contract_parallel(edge))\n\n if len(nodes_set) == 1:\n # There's nothing to contract.\n if ignore_edge_order:\n return list(nodes_set)[0]\n return list(nodes_set)[0].reorder_edges(output_edge_order)\n\n # Then apply `opt_einsum`'s algorithm\n path, nodes = utils.get_path(nodes_set, algorithm)\n for a, b in path:\n new_node = contract_between(nodes[a], nodes[b], allow_outer_product=True)\n nodes.append(new_node)\n nodes = utils.multi_remove(nodes, [a, b])\n\n # if the final node has more than one edge,\n # output_edge_order has to be specified\n final_node = nodes[0] # nodes were connected, we checked this\n if not ignore_edge_order:\n final_node.reorder_edges(output_edge_order)\n return final_node\n\n\ndef optimal(\n nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Optimal contraction order via `opt_einsum`.\n\n This method will find the truly optimal contraction order via\n `opt_einsum`'s depth first search algorithm. 
Since this search is\n exhaustive, if your network is large (n>10), then the search may\n take longer than just contracting in a suboptimal way.\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(opt_einsum.paths.optimal, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\ndef branch(nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n nbranch: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Branch contraction path via `opt_einsum`.\n\n This method uses the DFS approach of `optimal` while sorting potential\n contractions based on a heuristic cost, in order to reduce time spent\n in exploring paths which are unlikely to be optimal.\n For more details:\n https://optimized-einsum.readthedocs.io/en/latest/branching_path.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n nbranch: Number of best contractions to explore.\n If None it explores all inner products starting with those that\n have the best cost heuristic.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(\n opt_einsum.paths.branch, memory_limit=memory_limit, nbranch=nbranch)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\ndef greedy(\n nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Greedy contraction path via `opt_einsum`.\n\n This provides a more efficient strategy than `optimal` for finding\n contraction paths in large networks. First contracts pairs of tensors\n by finding the pair with the lowest cost at each step. 
Then it performs\n the outer products.\n For more details:\n https://optimized-einsum.readthedocs.io/en/latest/greedy_path.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(opt_einsum.paths.greedy, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\n# pylint: disable=too-many-return-statements\ndef auto(\n nodes: Iterable[BaseNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Chooses one of the above algorithms according to network size.\n\n Default behavior is based on `opt_einsum`'s `auto` contractor.\n\n Args:\n nodes: A collection of connected nodes.\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n\n n = len(list(nodes)) #pytype thing\n _nodes = nodes\n if n <= 0:\n raise ValueError(\"Cannot contract empty tensor network.\")\n if n == 1:\n if not ignore_edge_order:\n if output_edge_order is None:\n output_edge_order = list(\n (get_all_edges(_nodes) - get_all_nondangling(_nodes)))\n if len(output_edge_order) > 1:\n raise ValueError(\"The final node after contraction has more than \"\n \"one dangling edge. 
In this case `output_edge_order` \"\n \"has to be provided.\")\n\n edges = get_all_nondangling(_nodes)\n if edges:\n final_node = contract_parallel(edges.pop())\n else:\n final_node = list(_nodes)[0]\n final_node.reorder_edges(output_edge_order)\n if not ignore_edge_order:\n final_node.reorder_edges(output_edge_order)\n return final_node\n\n if n < 5:\n return optimal(nodes, output_edge_order, memory_limit, ignore_edge_order)\n if n < 7:\n return branch(nodes, output_edge_order, memory_limit, ignore_edge_order)\n if n < 9:\n return branch(nodes, output_edge_order, memory_limit, nbranch=2, ignore_edge_order=ignore_edge_order)\n if n < 15:\n return branch(nodes, output_edge_order, nbranch=1, ignore_edge_order=ignore_edge_order)\n return greedy(nodes, output_edge_order, memory_limit, ignore_edge_order)\n\n\ndef custom(\n nodes: Iterable[BaseNode],\n optimizer: Any,\n output_edge_order: Sequence[Edge] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> BaseNode:\n \"\"\"Uses a custom path optimizer created by the user to calculate paths.\n\n The custom path optimizer should inherit `opt_einsum`'s `PathOptimizer`.\n For more details:\n https://optimized-einsum.readthedocs.io/en/latest/custom_paths.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n output_edge_order` must be provided.\n optimizer: A custom `opt_einsum.PathOptimizer` object.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n alg = functools.partial(optimizer, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n", "path": "tensornetwork/contractors/opt_einsum_paths/path_contractors.py"}]} | 3,543 | 444 |
gh_patches_debug_12207 | rasdani/github-patches | git_diff | ESMCI__cime-249 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Misleading error message in case_setup.py
I suggest changing "%s NINST value %d greater than %s NTASKS %d"
(e.g. ERROR: ATM NINST value 1 greater than ATM NTASKS 0)
to
"NINST_%s value %d greater than NTASKS_%s %d"
(e.g. ERROR: NINST_ATM value 1 greater than NTASKS_ATM 0)
to reflect the real variable names, which can be queried or changed with xmlquery/xmlchange.
--- END ISSUE ---
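A minimal, self-contained sketch of the proposed message format (the component name and the 1/0 values are taken from the example in the issue; this only illustrates the wording, not the applied patch):
```
# Proposed wording: report the real XML variable names (NINST_<comp>, NTASKS_<comp>)
# so they can be acted on directly with xmlquery/xmlchange.
comp, ninst, ntasks = "ATM", 1, 0
msg = "NINST_%s value %d greater than NTASKS_%s %d" % (comp, ninst, comp, ntasks)
print(msg)  # -> NINST_ATM value 1 greater than NTASKS_ATM 0
```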
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `utils/python/CIME/case_setup.py`
Content:
```
1 """
2 Library for case.setup.
3 """
4
5 from CIME.XML.standard_module_setup import *
6
7 from CIME.check_lockedfiles import check_lockedfiles
8 from CIME.preview_namelists import preview_namelists
9 from CIME.XML.env_mach_pes import EnvMachPes
10 from CIME.XML.component import Component
11 from CIME.XML.compilers import Compilers
12 from CIME.utils import expect, run_cmd, append_status
13
14 import shutil, time, glob
15
16 logger = logging.getLogger(__name__)
17
18 ###############################################################################
19 def _check_pelayouts_require_rebuild(case, models):
20 ###############################################################################
21 """
22 Create if we require a rebuild, expects cwd is caseroot
23 """
24 locked_pes = "LockedFiles/env_mach_pes.xml"
25 if os.path.exists(locked_pes):
26 # Look to see if $comp_PE_CHANGE_REQUIRES_REBUILD is defined
27 # for any component
28 env_mach_pes_locked = EnvMachPes(infile=locked_pes)
29 for comp in models:
30 if case.get_value("%s_PE_CHANGE_REQUIRES_REBUILD" % comp):
31 # Changing these values in env_mach_pes.xml will force
32 # you to clean the corresponding component
33 old_tasks = env_mach_pes_locked.get_value("NTASKS_%s" % comp)
34 old_threads = env_mach_pes_locked.get_value("NTHRDS_%s" % comp)
35 old_inst = env_mach_pes_locked.get_value("NINST_%s" % comp)
36
37 new_tasks = case.get_value("NTASKS_%s" % comp)
38 new_threads = case.get_value("NTHRDS_%s" % comp)
39 new_inst = case.get_value("NINST_%s" % comp)
40
41 if old_tasks != new_tasks or old_threads != new_threads or old_inst != new_inst:
42 logger.warn("%s pe change requires clean build" % comp)
43 cleanflag = comp.lower()
44 run_cmd("./case.build --clean %s" % cleanflag)
45
46 os.remove(locked_pes)
47
48 ###############################################################################
49 def _build_usernl_files(case, model, comp):
50 ###############################################################################
51 """
52 Create user_nl_xxx files, expects cwd is caseroot
53 """
54 model = model.upper()
55 model_file = case.get_value("CONFIG_%s_FILE" % model)
56 model_dir = os.path.dirname(model_file)
57
58 expect(os.path.isdir(model_dir),
59 "cannot find cime_config directory %s for component %s" % (model_dir, comp))
60
61 if comp == "cpl":
62 if not os.path.exists("user_nl_cpl"):
63 shutil.copy(os.path.join(model_dir, "user_nl_cpl"), ".")
64 else:
65 ninst = case.get_value("NINST_%s" % model)
66 nlfile = "user_nl_%s" % comp
67 model_nl = os.path.join(model_dir, nlfile)
68 if os.path.exists(model_nl):
69 if ninst > 1:
70 for inst_counter in xrange(1, ninst+1):
71 case_nlfile = "%s_%04d" % (nlfile, inst_counter)
72 if not os.path.exists(case_nlfile):
73 shutil.copy(model_nl, case_nlfile)
74 else:
75 if not os.path.exists(nlfile):
76 shutil.copy(model_nl, nlfile)
77
78 ###############################################################################
79 def case_setup(case, clean=False, test_mode=False, reset=False):
80 ###############################################################################
81 caseroot = case.get_value("CASEROOT")
82 os.chdir(caseroot)
83 msg = "case.setup starting"
84 append_status(msg, caseroot=caseroot, sfile="CaseStatus")
85
86 cimeroot = os.environ["CIMEROOT"]
87
88 # Check that $DIN_LOC_ROOT exists - and abort if not a namelist compare tests
89 din_loc_root = case.get_value("DIN_LOC_ROOT")
90 testcase = case.get_value("TESTCASE")
91 expect(not (not os.path.isdir(din_loc_root) and testcase != "SBN"),
92 "inputdata root is not a directory: \"$din_loc_root\" ")
93
94 # Check that userdefine settings are specified before expanding variable
95 for vid, value in case:
96 expect(not (type(value) is str and "USERDEFINED_required_build" in value),
97 "Parameter '%s' must be defined" % vid)
98
99 # Create batch script
100 if reset or clean:
101 # Clean batch script
102
103 backup_dir = "PESetupHist/b.%s" % time.strftime("%y%m%d-%H%M%S")
104 if not os.path.isdir(backup_dir):
105 os.makedirs(backup_dir)
106
107 # back up relevant files
108 for fileglob in ["case.run", "env_build.xml", "env_mach_pes.xml", "Macros*"]:
109 for filename in glob.glob(fileglob):
110 shutil.copy(filename, backup_dir)
111 if os.path.exists("case.run"):
112 os.remove("case.run")
113
114 # only do the following if are NOT in testmode
115 if not test_mode:
116 # rebuild the models (even on restart)
117 case.set_value("BUILD_COMPLETE", False)
118
119 # backup and then clean test script
120 if os.path.exists("case.test"):
121 shutil.copy("case.test", backup_dir)
122 os.remove("case.test")
123 logger.info("Successfully cleaned test script case.test")
124
125 if os.path.exists("case.testdriver"):
126 shutil.copy("case.testdriver", backup_dir)
127 os.remove("case.testdriver")
128 logger.info("Successfully cleaned test script case.testdriver")
129
130 logger.info("Successfully cleaned batch script case.run")
131
132 logger.info("Successfully cleaned batch script case.run")
133 logger.info("Some files have been saved to %s" % backup_dir)
134
135 msg = "case.setup clean complete"
136 append_status(msg, caseroot=caseroot, sfile="CaseStatus")
137
138 if not clean:
139 drv_comp = Component()
140 models = drv_comp.get_valid_model_components()
141 models.remove("DRV")
142
143 mach = case.get_value("MACH")
144 expect(mach is not None, "xml variable MACH is not set")
145
146 # Create Macros file only if it does not exist
147 if not os.path.exists("Macros"):
148 logger.debug("Creating Macros file for %s" % mach)
149 compilers = Compilers(compiler=case.get_value("COMPILER"), machine=mach, os_=case.get_value("OS"), mpilib=case.get_value("MPILIB"))
150 compilers.write_macros_file()
151 else:
152 logger.debug("Macros script already created ...skipping")
153
154 # Set tasks to 1 if mpi-serial library
155 if case.get_value("MPILIB") == "mpi-serial":
156 for vid, value in case:
157 if vid.startswith("NTASKS_") and value != 1:
158 case.set_value(vid, 1)
159
160 # Check ninst.
161 # In CIME there can be multiple instances of each component model (an ensemble) NINST is the instance of that component.
162 for comp in models:
163 ninst = case.get_value("NINST_%s" % comp)
164 ntasks = case.get_value("NTASKS_%s" % comp)
165 if ninst > ntasks:
166 if ntasks == 1:
167 case.set_value("NTASKS_%s" % comp, ninst)
168 else:
169 expect(False, "%s NINST value %d greater than %s NTASKS %d" % (comp, ninst, comp, ntasks))
170
171 expect(not (case.get_value("BUILD_THREADED") and case.get_value("COMPILER") == "nag"),
172 "it is not possible to run with OpenMP if using the NAG Fortran compiler")
173
174 if os.path.exists("case.run"):
175 logger.info("Machine/Decomp/Pes configuration has already been done ...skipping")
176 else:
177 _check_pelayouts_require_rebuild(case, models)
178
179 if os.path.exists("LockedFiles/env_build.xml"):
180 os.remove("LockedFiles/env_build.xml")
181
182 case.flush()
183 check_lockedfiles()
184
185 pestot = int(run_cmd("Tools/taskmaker.pl -sumonly"))
186 case.set_value("TOTALPES", pestot)
187
188 # Compute cost based on PE count
189 pval = 1
190 pcnt = 0
191 while pval < pestot:
192 pval *= 2
193 pcnt += 6 # (scaling like sqrt(6/10))
194 pcost = 3 - pcnt / 10 # (3 is 64 with 6)
195
196 # Compute cost based on DEBUG
197 dcost = 3 if case.get_value("DEBUG") else 0
198
199 # Compute cost based on run length
200 # For simplicity, we use a heuristic just based on STOP_OPTION (not considering
201 # STOP_N), and only deal with options longer than ndays
202 lcost = 0
203 if "nmonth" in case.get_value("STOP_OPTION"):
204 # N months costs 30x as much as N days; since cost is based on log-base-2, add 5
205 lcost = 5
206 elif "nyear" in case.get_value("STOP_OPTION"):
207 # N years costs 365x as much as N days; since cost is based on log-base-2, add 9
208 lcost = 9
209
210 estcost = pcost + dcost + lcost
211 for cost in ["CCSM_CCOST", "CCSM_GCOST", "CCSM_TCOST", "CCSM_CCOST"]:
212 estcost += case.get_value(cost)
213
214 case.set_value("CCSM_PCOST", pcost)
215 case.set_value("CCSM_ESTCOST", estcost)
216
217 # create batch file
218 logger.info("Creating batch script case.run")
219
220 # Use BatchFactory to get the appropriate instance of a BatchMaker,
221 # use it to create our batch scripts
222 env_batch = case._get_env("batch")
223 for job in env_batch.get_jobs():
224 input_batch_script = os.path.join(case.get_value("MACHDIR"), env_batch.get_value('template', subgroup=job))
225 if job == "case.test" and testcase is not None and not test_mode:
226 logger.info("Writing %s script" % job)
227 testscript = os.path.join(cimeroot, "scripts", "Testing", "Testcases", "%s_script" % testcase)
228 # Short term fix to be removed when csh tests are removed
229 if not os.path.exists(testscript):
230 env_batch.make_batch_script(input_batch_script, job, case)
231 elif job != "case.test":
232 logger.info("Writing %s script" % job)
233 env_batch.make_batch_script(input_batch_script, job, case)
234
235 # Make a copy of env_mach_pes.xml in order to be able
236 # to check that it does not change once case.setup is invoked
237 logger.info("Locking file env_mach_pes.xml")
238 case.flush()
239 shutil.copy("env_mach_pes.xml", "LockedFiles")
240
241 # Create user_nl files for the required number of instances
242 if not os.path.exists("user_nl_cpl"):
243 logger.info("Creating user_nl_xxx files for components and cpl")
244 # loop over models
245 for model in models:
246 comp = case.get_value("COMP_%s" % model)
247 logger.info("Building %s usernl files"%model)
248 _build_usernl_files(case, model, comp)
249 if comp == "cism":
250 run_cmd("%s/../components/cism/cime_config/cism.template %s" % (cimeroot, caseroot))
251
252 _build_usernl_files(case, "drv", "cpl")
253
254 # Run preview namelists for scripts
255 logger.info("preview_namelists")
256 preview_namelists(case)
257
258 logger.info("See ./CaseDoc for component namelists")
259 logger.info("If an old case build already exists, might want to run \'case.build --clean-all\' before building")
260
261 # Create test script if appropriate
262 # Short term fix to be removed when csh tests are removed
263 if os.path.exists("env_test.xml"):
264 if not os.path.exists("case.test"):
265 logger.info("Starting testcase.setup")
266 run_cmd("./testcase.setup -caseroot %s" % caseroot)
267 logger.info("Finished testcase.setup")
268
269 msg = "case.setup complete"
270 append_status(msg, caseroot=caseroot, sfile="CaseStatus")
271
272
```
--- END FILES ---
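For orientation, the control flow around the check the issue targets (case_setup.py, lines 160-169 above), condensed into a standalone sketch; the RuntimeError here stands in for CIME's expect() helper and is an assumption for illustration only:
```
def check_ninst(comp: str, ninst: int, ntasks: int) -> int:
    """Standalone sketch of the NINST/NTASKS check in case_setup.py."""
    if ninst > ntasks:
        if ntasks == 1:
            # NTASKS_<comp> is silently raised to match NINST_<comp>
            return ninst
        # otherwise setup aborts with the (now clearer) message
        raise RuntimeError("NINST_%s value %d greater than NTASKS_%s %d"
                           % (comp, ninst, comp, ntasks))
    return ntasks
```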
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/utils/python/CIME/case_setup.py b/utils/python/CIME/case_setup.py
--- a/utils/python/CIME/case_setup.py
+++ b/utils/python/CIME/case_setup.py
@@ -166,7 +166,7 @@
if ntasks == 1:
case.set_value("NTASKS_%s" % comp, ninst)
else:
- expect(False, "%s NINST value %d greater than %s NTASKS %d" % (comp, ninst, comp, ntasks))
+ expect(False, "NINST_%s value %d greater than NTASKS_%s %d" % (comp, ninst, comp, ntasks))
expect(not (case.get_value("BUILD_THREADED") and case.get_value("COMPILER") == "nag"),
"it is not possible to run with OpenMP if using the NAG Fortran compiler")
| {"golden_diff": "diff --git a/utils/python/CIME/case_setup.py b/utils/python/CIME/case_setup.py\n--- a/utils/python/CIME/case_setup.py\n+++ b/utils/python/CIME/case_setup.py\n@@ -166,7 +166,7 @@\n if ntasks == 1:\n case.set_value(\"NTASKS_%s\" % comp, ninst)\n else:\n- expect(False, \"%s NINST value %d greater than %s NTASKS %d\" % (comp, ninst, comp, ntasks))\n+ expect(False, \"NINST_%s value %d greater than NTASKS_%s %d\" % (comp, ninst, comp, ntasks))\n \n expect(not (case.get_value(\"BUILD_THREADED\") and case.get_value(\"COMPILER\") == \"nag\"),\n \"it is not possible to run with OpenMP if using the NAG Fortran compiler\")\n", "issue": "Misleading error message in case_setup.py \nI suggest changing \"%s NINST value %d greater than %s NTASKS %d\" \n(e.g ERROR: ATM NINST value 1 greater than ATM NTASKS 0)\n\nto \n\n\"NINST_%s value %d greater than NTASKS_%s %d\n(e.g. ERROR: NINST_ATM value 1 greater than NTASKS_ATM 0)\n\nto reflect the real variable name which can be queried or changed with xmlquery/xmlchange\n\n", "before_files": [{"content": "\"\"\"\nLibrary for case.setup.\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\n\nfrom CIME.check_lockedfiles import check_lockedfiles\nfrom CIME.preview_namelists import preview_namelists\nfrom CIME.XML.env_mach_pes import EnvMachPes\nfrom CIME.XML.component import Component\nfrom CIME.XML.compilers import Compilers\nfrom CIME.utils import expect, run_cmd, append_status\n\nimport shutil, time, glob\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef _check_pelayouts_require_rebuild(case, models):\n###############################################################################\n \"\"\"\n Create if we require a rebuild, expects cwd is caseroot\n \"\"\"\n locked_pes = \"LockedFiles/env_mach_pes.xml\"\n if os.path.exists(locked_pes):\n # Look to see if $comp_PE_CHANGE_REQUIRES_REBUILD is defined\n # for any component\n env_mach_pes_locked = EnvMachPes(infile=locked_pes)\n for comp in models:\n if case.get_value(\"%s_PE_CHANGE_REQUIRES_REBUILD\" % comp):\n # Changing these values in env_mach_pes.xml will force\n # you to clean the corresponding component\n old_tasks = env_mach_pes_locked.get_value(\"NTASKS_%s\" % comp)\n old_threads = env_mach_pes_locked.get_value(\"NTHRDS_%s\" % comp)\n old_inst = env_mach_pes_locked.get_value(\"NINST_%s\" % comp)\n\n new_tasks = case.get_value(\"NTASKS_%s\" % comp)\n new_threads = case.get_value(\"NTHRDS_%s\" % comp)\n new_inst = case.get_value(\"NINST_%s\" % comp)\n\n if old_tasks != new_tasks or old_threads != new_threads or old_inst != new_inst:\n logger.warn(\"%s pe change requires clean build\" % comp)\n cleanflag = comp.lower()\n run_cmd(\"./case.build --clean %s\" % cleanflag)\n\n os.remove(locked_pes)\n\n###############################################################################\ndef _build_usernl_files(case, model, comp):\n###############################################################################\n \"\"\"\n Create user_nl_xxx files, expects cwd is caseroot\n \"\"\"\n model = model.upper()\n model_file = case.get_value(\"CONFIG_%s_FILE\" % model)\n model_dir = os.path.dirname(model_file)\n\n expect(os.path.isdir(model_dir),\n \"cannot find cime_config directory %s for component %s\" % (model_dir, comp))\n\n if comp == \"cpl\":\n if not os.path.exists(\"user_nl_cpl\"):\n shutil.copy(os.path.join(model_dir, \"user_nl_cpl\"), \".\")\n else:\n ninst = case.get_value(\"NINST_%s\" % model)\n nlfile = \"user_nl_%s\" % 
comp\n model_nl = os.path.join(model_dir, nlfile)\n if os.path.exists(model_nl):\n if ninst > 1:\n for inst_counter in xrange(1, ninst+1):\n case_nlfile = \"%s_%04d\" % (nlfile, inst_counter)\n if not os.path.exists(case_nlfile):\n shutil.copy(model_nl, case_nlfile)\n else:\n if not os.path.exists(nlfile):\n shutil.copy(model_nl, nlfile)\n\n###############################################################################\ndef case_setup(case, clean=False, test_mode=False, reset=False):\n###############################################################################\n caseroot = case.get_value(\"CASEROOT\")\n os.chdir(caseroot)\n msg = \"case.setup starting\"\n append_status(msg, caseroot=caseroot, sfile=\"CaseStatus\")\n\n cimeroot = os.environ[\"CIMEROOT\"]\n\n # Check that $DIN_LOC_ROOT exists - and abort if not a namelist compare tests\n din_loc_root = case.get_value(\"DIN_LOC_ROOT\")\n testcase = case.get_value(\"TESTCASE\")\n expect(not (not os.path.isdir(din_loc_root) and testcase != \"SBN\"),\n \"inputdata root is not a directory: \\\"$din_loc_root\\\" \")\n\n # Check that userdefine settings are specified before expanding variable\n for vid, value in case:\n expect(not (type(value) is str and \"USERDEFINED_required_build\" in value),\n \"Parameter '%s' must be defined\" % vid)\n\n # Create batch script\n if reset or clean:\n # Clean batch script\n\n backup_dir = \"PESetupHist/b.%s\" % time.strftime(\"%y%m%d-%H%M%S\")\n if not os.path.isdir(backup_dir):\n os.makedirs(backup_dir)\n\n # back up relevant files\n for fileglob in [\"case.run\", \"env_build.xml\", \"env_mach_pes.xml\", \"Macros*\"]:\n for filename in glob.glob(fileglob):\n shutil.copy(filename, backup_dir)\n if os.path.exists(\"case.run\"):\n os.remove(\"case.run\")\n\n # only do the following if are NOT in testmode\n if not test_mode:\n # rebuild the models (even on restart)\n case.set_value(\"BUILD_COMPLETE\", False)\n\n # backup and then clean test script\n if os.path.exists(\"case.test\"):\n shutil.copy(\"case.test\", backup_dir)\n os.remove(\"case.test\")\n logger.info(\"Successfully cleaned test script case.test\")\n\n if os.path.exists(\"case.testdriver\"):\n shutil.copy(\"case.testdriver\", backup_dir)\n os.remove(\"case.testdriver\")\n logger.info(\"Successfully cleaned test script case.testdriver\")\n\n logger.info(\"Successfully cleaned batch script case.run\")\n\n logger.info(\"Successfully cleaned batch script case.run\")\n logger.info(\"Some files have been saved to %s\" % backup_dir)\n\n msg = \"case.setup clean complete\"\n append_status(msg, caseroot=caseroot, sfile=\"CaseStatus\")\n\n if not clean:\n drv_comp = Component()\n models = drv_comp.get_valid_model_components()\n models.remove(\"DRV\")\n\n mach = case.get_value(\"MACH\")\n expect(mach is not None, \"xml variable MACH is not set\")\n\n # Create Macros file only if it does not exist\n if not os.path.exists(\"Macros\"):\n logger.debug(\"Creating Macros file for %s\" % mach)\n compilers = Compilers(compiler=case.get_value(\"COMPILER\"), machine=mach, os_=case.get_value(\"OS\"), mpilib=case.get_value(\"MPILIB\"))\n compilers.write_macros_file()\n else:\n logger.debug(\"Macros script already created ...skipping\")\n\n # Set tasks to 1 if mpi-serial library\n if case.get_value(\"MPILIB\") == \"mpi-serial\":\n for vid, value in case:\n if vid.startswith(\"NTASKS_\") and value != 1:\n case.set_value(vid, 1)\n\n # Check ninst.\n # In CIME there can be multiple instances of each component model (an ensemble) NINST is the instance of that component.\n for 
comp in models:\n ninst = case.get_value(\"NINST_%s\" % comp)\n ntasks = case.get_value(\"NTASKS_%s\" % comp)\n if ninst > ntasks:\n if ntasks == 1:\n case.set_value(\"NTASKS_%s\" % comp, ninst)\n else:\n expect(False, \"%s NINST value %d greater than %s NTASKS %d\" % (comp, ninst, comp, ntasks))\n\n expect(not (case.get_value(\"BUILD_THREADED\") and case.get_value(\"COMPILER\") == \"nag\"),\n \"it is not possible to run with OpenMP if using the NAG Fortran compiler\")\n\n if os.path.exists(\"case.run\"):\n logger.info(\"Machine/Decomp/Pes configuration has already been done ...skipping\")\n else:\n _check_pelayouts_require_rebuild(case, models)\n\n if os.path.exists(\"LockedFiles/env_build.xml\"):\n os.remove(\"LockedFiles/env_build.xml\")\n\n case.flush()\n check_lockedfiles()\n\n pestot = int(run_cmd(\"Tools/taskmaker.pl -sumonly\"))\n case.set_value(\"TOTALPES\", pestot)\n\n # Compute cost based on PE count\n pval = 1\n pcnt = 0\n while pval < pestot:\n pval *= 2\n pcnt += 6 # (scaling like sqrt(6/10))\n pcost = 3 - pcnt / 10 # (3 is 64 with 6)\n\n # Compute cost based on DEBUG\n dcost = 3 if case.get_value(\"DEBUG\") else 0\n\n # Compute cost based on run length\n # For simplicity, we use a heuristic just based on STOP_OPTION (not considering\n # STOP_N), and only deal with options longer than ndays\n lcost = 0\n if \"nmonth\" in case.get_value(\"STOP_OPTION\"):\n # N months costs 30x as much as N days; since cost is based on log-base-2, add 5\n lcost = 5\n elif \"nyear\" in case.get_value(\"STOP_OPTION\"):\n # N years costs 365x as much as N days; since cost is based on log-base-2, add 9\n lcost = 9\n\n estcost = pcost + dcost + lcost\n for cost in [\"CCSM_CCOST\", \"CCSM_GCOST\", \"CCSM_TCOST\", \"CCSM_CCOST\"]:\n estcost += case.get_value(cost)\n\n case.set_value(\"CCSM_PCOST\", pcost)\n case.set_value(\"CCSM_ESTCOST\", estcost)\n\n # create batch file\n logger.info(\"Creating batch script case.run\")\n\n # Use BatchFactory to get the appropriate instance of a BatchMaker,\n # use it to create our batch scripts\n env_batch = case._get_env(\"batch\")\n for job in env_batch.get_jobs():\n input_batch_script = os.path.join(case.get_value(\"MACHDIR\"), env_batch.get_value('template', subgroup=job))\n if job == \"case.test\" and testcase is not None and not test_mode:\n logger.info(\"Writing %s script\" % job)\n testscript = os.path.join(cimeroot, \"scripts\", \"Testing\", \"Testcases\", \"%s_script\" % testcase)\n # Short term fix to be removed when csh tests are removed\n if not os.path.exists(testscript):\n env_batch.make_batch_script(input_batch_script, job, case)\n elif job != \"case.test\":\n logger.info(\"Writing %s script\" % job)\n env_batch.make_batch_script(input_batch_script, job, case)\n\n # Make a copy of env_mach_pes.xml in order to be able\n # to check that it does not change once case.setup is invoked\n logger.info(\"Locking file env_mach_pes.xml\")\n case.flush()\n shutil.copy(\"env_mach_pes.xml\", \"LockedFiles\")\n\n # Create user_nl files for the required number of instances\n if not os.path.exists(\"user_nl_cpl\"):\n logger.info(\"Creating user_nl_xxx files for components and cpl\")\n # loop over models\n for model in models:\n comp = case.get_value(\"COMP_%s\" % model)\n logger.info(\"Building %s usernl files\"%model)\n _build_usernl_files(case, model, comp)\n if comp == \"cism\":\n run_cmd(\"%s/../components/cism/cime_config/cism.template %s\" % (cimeroot, caseroot))\n\n _build_usernl_files(case, \"drv\", \"cpl\")\n\n # Run preview namelists for scripts\n 
logger.info(\"preview_namelists\")\n preview_namelists(case)\n\n logger.info(\"See ./CaseDoc for component namelists\")\n logger.info(\"If an old case build already exists, might want to run \\'case.build --clean-all\\' before building\")\n\n # Create test script if appropriate\n # Short term fix to be removed when csh tests are removed\n if os.path.exists(\"env_test.xml\"):\n if not os.path.exists(\"case.test\"):\n logger.info(\"Starting testcase.setup\")\n run_cmd(\"./testcase.setup -caseroot %s\" % caseroot)\n logger.info(\"Finished testcase.setup\")\n\n msg = \"case.setup complete\"\n append_status(msg, caseroot=caseroot, sfile=\"CaseStatus\")\n\n", "path": "utils/python/CIME/case_setup.py"}], "after_files": [{"content": "\"\"\"\nLibrary for case.setup.\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\n\nfrom CIME.check_lockedfiles import check_lockedfiles\nfrom CIME.preview_namelists import preview_namelists\nfrom CIME.XML.env_mach_pes import EnvMachPes\nfrom CIME.XML.component import Component\nfrom CIME.XML.compilers import Compilers\nfrom CIME.utils import expect, run_cmd, append_status\n\nimport shutil, time, glob\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef _check_pelayouts_require_rebuild(case, models):\n###############################################################################\n \"\"\"\n Create if we require a rebuild, expects cwd is caseroot\n \"\"\"\n locked_pes = \"LockedFiles/env_mach_pes.xml\"\n if os.path.exists(locked_pes):\n # Look to see if $comp_PE_CHANGE_REQUIRES_REBUILD is defined\n # for any component\n env_mach_pes_locked = EnvMachPes(infile=locked_pes)\n for comp in models:\n if case.get_value(\"%s_PE_CHANGE_REQUIRES_REBUILD\" % comp):\n # Changing these values in env_mach_pes.xml will force\n # you to clean the corresponding component\n old_tasks = env_mach_pes_locked.get_value(\"NTASKS_%s\" % comp)\n old_threads = env_mach_pes_locked.get_value(\"NTHRDS_%s\" % comp)\n old_inst = env_mach_pes_locked.get_value(\"NINST_%s\" % comp)\n\n new_tasks = case.get_value(\"NTASKS_%s\" % comp)\n new_threads = case.get_value(\"NTHRDS_%s\" % comp)\n new_inst = case.get_value(\"NINST_%s\" % comp)\n\n if old_tasks != new_tasks or old_threads != new_threads or old_inst != new_inst:\n logger.warn(\"%s pe change requires clean build\" % comp)\n cleanflag = comp.lower()\n run_cmd(\"./case.build --clean %s\" % cleanflag)\n\n os.remove(locked_pes)\n\n###############################################################################\ndef _build_usernl_files(case, model, comp):\n###############################################################################\n \"\"\"\n Create user_nl_xxx files, expects cwd is caseroot\n \"\"\"\n model = model.upper()\n model_file = case.get_value(\"CONFIG_%s_FILE\" % model)\n model_dir = os.path.dirname(model_file)\n\n expect(os.path.isdir(model_dir),\n \"cannot find cime_config directory %s for component %s\" % (model_dir, comp))\n\n if comp == \"cpl\":\n if not os.path.exists(\"user_nl_cpl\"):\n shutil.copy(os.path.join(model_dir, \"user_nl_cpl\"), \".\")\n else:\n ninst = case.get_value(\"NINST_%s\" % model)\n nlfile = \"user_nl_%s\" % comp\n model_nl = os.path.join(model_dir, nlfile)\n if os.path.exists(model_nl):\n if ninst > 1:\n for inst_counter in xrange(1, ninst+1):\n case_nlfile = \"%s_%04d\" % (nlfile, inst_counter)\n if not os.path.exists(case_nlfile):\n shutil.copy(model_nl, case_nlfile)\n else:\n if not os.path.exists(nlfile):\n shutil.copy(model_nl, 
nlfile)\n\n###############################################################################\ndef case_setup(case, clean=False, test_mode=False, reset=False):\n###############################################################################\n caseroot = case.get_value(\"CASEROOT\")\n os.chdir(caseroot)\n msg = \"case.setup starting\"\n append_status(msg, caseroot=caseroot, sfile=\"CaseStatus\")\n\n cimeroot = os.environ[\"CIMEROOT\"]\n\n # Check that $DIN_LOC_ROOT exists - and abort if not a namelist compare tests\n din_loc_root = case.get_value(\"DIN_LOC_ROOT\")\n testcase = case.get_value(\"TESTCASE\")\n expect(not (not os.path.isdir(din_loc_root) and testcase != \"SBN\"),\n \"inputdata root is not a directory: \\\"$din_loc_root\\\" \")\n\n # Check that userdefine settings are specified before expanding variable\n for vid, value in case:\n expect(not (type(value) is str and \"USERDEFINED_required_build\" in value),\n \"Parameter '%s' must be defined\" % vid)\n\n # Create batch script\n if reset or clean:\n # Clean batch script\n\n backup_dir = \"PESetupHist/b.%s\" % time.strftime(\"%y%m%d-%H%M%S\")\n if not os.path.isdir(backup_dir):\n os.makedirs(backup_dir)\n\n # back up relevant files\n for fileglob in [\"case.run\", \"env_build.xml\", \"env_mach_pes.xml\", \"Macros*\"]:\n for filename in glob.glob(fileglob):\n shutil.copy(filename, backup_dir)\n if os.path.exists(\"case.run\"):\n os.remove(\"case.run\")\n\n # only do the following if are NOT in testmode\n if not test_mode:\n # rebuild the models (even on restart)\n case.set_value(\"BUILD_COMPLETE\", False)\n\n # backup and then clean test script\n if os.path.exists(\"case.test\"):\n shutil.copy(\"case.test\", backup_dir)\n os.remove(\"case.test\")\n logger.info(\"Successfully cleaned test script case.test\")\n\n if os.path.exists(\"case.testdriver\"):\n shutil.copy(\"case.testdriver\", backup_dir)\n os.remove(\"case.testdriver\")\n logger.info(\"Successfully cleaned test script case.testdriver\")\n\n logger.info(\"Successfully cleaned batch script case.run\")\n\n logger.info(\"Successfully cleaned batch script case.run\")\n logger.info(\"Some files have been saved to %s\" % backup_dir)\n\n msg = \"case.setup clean complete\"\n append_status(msg, caseroot=caseroot, sfile=\"CaseStatus\")\n\n if not clean:\n drv_comp = Component()\n models = drv_comp.get_valid_model_components()\n models.remove(\"DRV\")\n\n mach = case.get_value(\"MACH\")\n expect(mach is not None, \"xml variable MACH is not set\")\n\n # Create Macros file only if it does not exist\n if not os.path.exists(\"Macros\"):\n logger.debug(\"Creating Macros file for %s\" % mach)\n compilers = Compilers(compiler=case.get_value(\"COMPILER\"), machine=mach, os_=case.get_value(\"OS\"), mpilib=case.get_value(\"MPILIB\"))\n compilers.write_macros_file()\n else:\n logger.debug(\"Macros script already created ...skipping\")\n\n # Set tasks to 1 if mpi-serial library\n if case.get_value(\"MPILIB\") == \"mpi-serial\":\n for vid, value in case:\n if vid.startswith(\"NTASKS_\") and value != 1:\n case.set_value(vid, 1)\n\n # Check ninst.\n # In CIME there can be multiple instances of each component model (an ensemble) NINST is the instance of that component.\n for comp in models:\n ninst = case.get_value(\"NINST_%s\" % comp)\n ntasks = case.get_value(\"NTASKS_%s\" % comp)\n if ninst > ntasks:\n if ntasks == 1:\n case.set_value(\"NTASKS_%s\" % comp, ninst)\n else:\n expect(False, \"NINST_%s value %d greater than NTASKS_%s %d\" % (comp, ninst, comp, ntasks))\n\n expect(not 
(case.get_value(\"BUILD_THREADED\") and case.get_value(\"COMPILER\") == \"nag\"),\n \"it is not possible to run with OpenMP if using the NAG Fortran compiler\")\n\n if os.path.exists(\"case.run\"):\n logger.info(\"Machine/Decomp/Pes configuration has already been done ...skipping\")\n else:\n _check_pelayouts_require_rebuild(case, models)\n\n if os.path.exists(\"LockedFiles/env_build.xml\"):\n os.remove(\"LockedFiles/env_build.xml\")\n\n case.flush()\n check_lockedfiles()\n\n pestot = int(run_cmd(\"Tools/taskmaker.pl -sumonly\"))\n case.set_value(\"TOTALPES\", pestot)\n\n # Compute cost based on PE count\n pval = 1\n pcnt = 0\n while pval < pestot:\n pval *= 2\n pcnt += 6 # (scaling like sqrt(6/10))\n pcost = 3 - pcnt / 10 # (3 is 64 with 6)\n\n # Compute cost based on DEBUG\n dcost = 3 if case.get_value(\"DEBUG\") else 0\n\n # Compute cost based on run length\n # For simplicity, we use a heuristic just based on STOP_OPTION (not considering\n # STOP_N), and only deal with options longer than ndays\n lcost = 0\n if \"nmonth\" in case.get_value(\"STOP_OPTION\"):\n # N months costs 30x as much as N days; since cost is based on log-base-2, add 5\n lcost = 5\n elif \"nyear\" in case.get_value(\"STOP_OPTION\"):\n # N years costs 365x as much as N days; since cost is based on log-base-2, add 9\n lcost = 9\n\n estcost = pcost + dcost + lcost\n for cost in [\"CCSM_CCOST\", \"CCSM_GCOST\", \"CCSM_TCOST\", \"CCSM_CCOST\"]:\n estcost += case.get_value(cost)\n\n case.set_value(\"CCSM_PCOST\", pcost)\n case.set_value(\"CCSM_ESTCOST\", estcost)\n\n # create batch file\n logger.info(\"Creating batch script case.run\")\n\n # Use BatchFactory to get the appropriate instance of a BatchMaker,\n # use it to create our batch scripts\n env_batch = case._get_env(\"batch\")\n for job in env_batch.get_jobs():\n input_batch_script = os.path.join(case.get_value(\"MACHDIR\"), env_batch.get_value('template', subgroup=job))\n if job == \"case.test\" and testcase is not None and not test_mode:\n logger.info(\"Writing %s script\" % job)\n testscript = os.path.join(cimeroot, \"scripts\", \"Testing\", \"Testcases\", \"%s_script\" % testcase)\n # Short term fix to be removed when csh tests are removed\n if not os.path.exists(testscript):\n env_batch.make_batch_script(input_batch_script, job, case)\n elif job != \"case.test\":\n logger.info(\"Writing %s script\" % job)\n env_batch.make_batch_script(input_batch_script, job, case)\n\n # Make a copy of env_mach_pes.xml in order to be able\n # to check that it does not change once case.setup is invoked\n logger.info(\"Locking file env_mach_pes.xml\")\n case.flush()\n shutil.copy(\"env_mach_pes.xml\", \"LockedFiles\")\n\n # Create user_nl files for the required number of instances\n if not os.path.exists(\"user_nl_cpl\"):\n logger.info(\"Creating user_nl_xxx files for components and cpl\")\n # loop over models\n for model in models:\n comp = case.get_value(\"COMP_%s\" % model)\n logger.info(\"Building %s usernl files\"%model)\n _build_usernl_files(case, model, comp)\n if comp == \"cism\":\n run_cmd(\"%s/../components/cism/cime_config/cism.template %s\" % (cimeroot, caseroot))\n\n _build_usernl_files(case, \"drv\", \"cpl\")\n\n # Run preview namelists for scripts\n logger.info(\"preview_namelists\")\n preview_namelists(case)\n\n logger.info(\"See ./CaseDoc for component namelists\")\n logger.info(\"If an old case build already exists, might want to run \\'case.build --clean-all\\' before building\")\n\n # Create test script if appropriate\n # Short term fix to be removed when csh 
tests are removed\n if os.path.exists(\"env_test.xml\"):\n if not os.path.exists(\"case.test\"):\n logger.info(\"Starting testcase.setup\")\n run_cmd(\"./testcase.setup -caseroot %s\" % caseroot)\n logger.info(\"Finished testcase.setup\")\n\n msg = \"case.setup complete\"\n append_status(msg, caseroot=caseroot, sfile=\"CaseStatus\")\n\n", "path": "utils/python/CIME/case_setup.py"}]} | 3,785 | 202 |
gh_patches_debug_40365 | rasdani/github-patches | git_diff | freqtrade__freqtrade-6560 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
inheritance strategy can not load parameter file shared from base class
<!--
## Describe your environment
* Operating system: ubuntu
 * Python Version: 3.8.10
 * CCXT version: 1.74.63
 * Freqtrade Version: freqtrade develop
## Describe the problem:
If StrategyA is derived from StrategyB and both are written in the same .py file,
then they share the same JSON file generated by Hyperopt.
They can conflict with each other when using Hyperopt to optimize one of them after the other strategy has already been optimized.
### Steps to reproduce:
 1. StrategyB is derived from StrategyA and they are in one .py file
 2. optimize StrategyA using Hyperopt (get file A.json)
 3. optimize StrategyB using Hyperopt
 4. switching steps 2 and 3 gives the same result
### Observed Results:
file A.json is loaded as StrategyB's parameter file, then ERROR
### Relevant code exceptions or logs
2022-03-20 00:59:44,224 - freqtrade.optimize.hyperopt_tools - INFO - Dumping parameters to /home/bourne/freqtrade-develop/user_data/strategies/AStrategy.json
...
2022-03-20 01:01:06,082 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy BStrategy from '/home/bourne/freqtrade-develop/user_data/strategies/AStrategy.py'...
2022-03-20 01:01:06,082 - freqtrade.strategy.hyper - INFO - Loading parameters from file /home/bourne/freqtrade-develop/user_data/strategies/AStrategy.json
2022-03-20 01:01:06,082 - freqtrade - ERROR - Invalid parameter file provided.
--- END ISSUE ---
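The log lines above suggest the parameter file name is derived from the source file the strategy was resolved from, not from the strategy class itself. A hypothetical helper mirroring that observed behaviour (this is an illustration only, not freqtrade's actual code):
```
from pathlib import Path

def params_file_for(resolved_source: str) -> Path:
    # hypothetical: parameter file = resolved source file with a .json suffix
    return Path(resolved_source).with_suffix(".json")

# StrategyB is resolved from AStrategy.py (see log), so it ends up reading
# AStrategy.json, i.e. the parameters Hyperopt dumped for StrategyA.
print(params_file_for("user_data/strategies/AStrategy.py"))  # .../AStrategy.json
```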
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/resolvers/iresolver.py`
Content:
```
1 # pragma pylint: disable=attribute-defined-outside-init
2
3 """
4 This module load custom objects
5 """
6 import importlib.util
7 import inspect
8 import logging
9 from pathlib import Path
10 from typing import Any, Dict, Iterator, List, Optional, Tuple, Type, Union
11
12 from freqtrade.exceptions import OperationalException
13
14
15 logger = logging.getLogger(__name__)
16
17
18 class IResolver:
19 """
20 This class contains all the logic to load custom classes
21 """
22 # Childclasses need to override this
23 object_type: Type[Any]
24 object_type_str: str
25 user_subdir: Optional[str] = None
26 initial_search_path: Optional[Path]
27
28 @classmethod
29 def build_search_paths(cls, config: Dict[str, Any], user_subdir: Optional[str] = None,
30 extra_dir: Optional[str] = None) -> List[Path]:
31
32 abs_paths: List[Path] = []
33 if cls.initial_search_path:
34 abs_paths.append(cls.initial_search_path)
35
36 if user_subdir:
37 abs_paths.insert(0, config['user_data_dir'].joinpath(user_subdir))
38
39 if extra_dir:
40 # Add extra directory to the top of the search paths
41 abs_paths.insert(0, Path(extra_dir).resolve())
42
43 return abs_paths
44
45 @classmethod
46 def _get_valid_object(cls, module_path: Path, object_name: Optional[str],
47 enum_failed: bool = False) -> Iterator[Any]:
48 """
49 Generator returning objects with matching object_type and object_name in the path given.
50 :param module_path: absolute path to the module
51 :param object_name: Class name of the object
52 :param enum_failed: If True, will return None for modules which fail.
53 Otherwise, failing modules are skipped.
54 :return: generator containing tuple of matching objects
55 Tuple format: [Object, source]
56 """
57
58 # Generate spec based on absolute path
59 # Pass object_name as first argument to have logging print a reasonable name.
60 spec = importlib.util.spec_from_file_location(object_name or "", str(module_path))
61 if not spec:
62 return iter([None])
63
64 module = importlib.util.module_from_spec(spec)
65 try:
66 spec.loader.exec_module(module) # type: ignore # importlib does not use typehints
67 except (ModuleNotFoundError, SyntaxError, ImportError, NameError) as err:
68 # Catch errors in case a specific module is not installed
69 logger.warning(f"Could not import {module_path} due to '{err}'")
70 if enum_failed:
71 return iter([None])
72
73 valid_objects_gen = (
74 (obj, inspect.getsource(module)) for
75 name, obj in inspect.getmembers(
76 module, inspect.isclass) if ((object_name is None or object_name == name)
77 and issubclass(obj, cls.object_type)
78 and obj is not cls.object_type)
79 )
80 return valid_objects_gen
81
82 @classmethod
83 def _search_object(cls, directory: Path, *, object_name: str, add_source: bool = False
84 ) -> Union[Tuple[Any, Path], Tuple[None, None]]:
85 """
86 Search for the objectname in the given directory
87 :param directory: relative or absolute directory path
88 :param object_name: ClassName of the object to load
89 :return: object class
90 """
91 logger.debug(f"Searching for {cls.object_type.__name__} {object_name} in '{directory}'")
92 for entry in directory.iterdir():
93 # Only consider python files
94 if entry.suffix != '.py':
95 logger.debug('Ignoring %s', entry)
96 continue
97 if entry.is_symlink() and not entry.is_file():
98 logger.debug('Ignoring broken symlink %s', entry)
99 continue
100 module_path = entry.resolve()
101
102 obj = next(cls._get_valid_object(module_path, object_name), None)
103
104 if obj:
105 obj[0].__file__ = str(entry)
106 if add_source:
107 obj[0].__source__ = obj[1]
108 return (obj[0], module_path)
109 return (None, None)
110
111 @classmethod
112 def _load_object(cls, paths: List[Path], *, object_name: str, add_source: bool = False,
113 kwargs: dict = {}) -> Optional[Any]:
114 """
115 Try to load object from path list.
116 """
117
118 for _path in paths:
119 try:
120 (module, module_path) = cls._search_object(directory=_path,
121 object_name=object_name,
122 add_source=add_source)
123 if module:
124 logger.info(
125 f"Using resolved {cls.object_type.__name__.lower()[1:]} {object_name} "
126 f"from '{module_path}'...")
127 return module(**kwargs)
128 except FileNotFoundError:
129 logger.warning('Path "%s" does not exist.', _path.resolve())
130
131 return None
132
133 @classmethod
134 def load_object(cls, object_name: str, config: dict, *, kwargs: dict,
135 extra_dir: Optional[str] = None) -> Any:
136 """
137 Search and loads the specified object as configured in hte child class.
138 :param object_name: name of the module to import
139 :param config: configuration dictionary
140 :param extra_dir: additional directory to search for the given pairlist
141 :raises: OperationalException if the class is invalid or does not exist.
142 :return: Object instance or None
143 """
144
145 abs_paths = cls.build_search_paths(config,
146 user_subdir=cls.user_subdir,
147 extra_dir=extra_dir)
148
149 found_object = cls._load_object(paths=abs_paths, object_name=object_name,
150 kwargs=kwargs)
151 if found_object:
152 return found_object
153 raise OperationalException(
154 f"Impossible to load {cls.object_type_str} '{object_name}'. This class does not exist "
155 "or contains Python code errors."
156 )
157
158 @classmethod
159 def search_all_objects(cls, directory: Path,
160 enum_failed: bool) -> List[Dict[str, Any]]:
161 """
162 Searches a directory for valid objects
163 :param directory: Path to search
164 :param enum_failed: If True, will return None for modules which fail.
165 Otherwise, failing modules are skipped.
166 :return: List of dicts containing 'name', 'class' and 'location' entries
167 """
168 logger.debug(f"Searching for {cls.object_type.__name__} '{directory}'")
169 objects = []
170 for entry in directory.iterdir():
171 # Only consider python files
172 if entry.suffix != '.py':
173 logger.debug('Ignoring %s', entry)
174 continue
175 module_path = entry.resolve()
176 logger.debug(f"Path {module_path}")
177 for obj in cls._get_valid_object(module_path, object_name=None,
178 enum_failed=enum_failed):
179 objects.append(
180 {'name': obj[0].__name__ if obj is not None else '',
181 'class': obj[0] if obj is not None else None,
182 'location': entry,
183 })
184 return objects
185
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/freqtrade/resolvers/iresolver.py b/freqtrade/resolvers/iresolver.py
--- a/freqtrade/resolvers/iresolver.py
+++ b/freqtrade/resolvers/iresolver.py
@@ -6,6 +6,7 @@
import importlib.util
import inspect
import logging
+import sys
from pathlib import Path
from typing import Any, Dict, Iterator, List, Optional, Tuple, Type, Union
@@ -15,6 +16,22 @@
logger = logging.getLogger(__name__)
+class PathModifier:
+ def __init__(self, path: Path):
+ self.path = path
+
+ def __enter__(self):
+ """Inject path to allow importing with relative imports."""
+ sys.path.insert(0, str(self.path))
+ return self
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ """Undo insertion of local path."""
+ str_path = str(self.path)
+ if str_path in sys.path:
+ sys.path.remove(str_path)
+
+
class IResolver:
"""
This class contains all the logic to load custom classes
@@ -57,27 +74,32 @@
# Generate spec based on absolute path
# Pass object_name as first argument to have logging print a reasonable name.
- spec = importlib.util.spec_from_file_location(object_name or "", str(module_path))
- if not spec:
- return iter([None])
-
- module = importlib.util.module_from_spec(spec)
- try:
- spec.loader.exec_module(module) # type: ignore # importlib does not use typehints
- except (ModuleNotFoundError, SyntaxError, ImportError, NameError) as err:
- # Catch errors in case a specific module is not installed
- logger.warning(f"Could not import {module_path} due to '{err}'")
- if enum_failed:
+ with PathModifier(module_path.parent):
+ module_name = module_path.stem or ""
+ spec = importlib.util.spec_from_file_location(module_name, str(module_path))
+ if not spec:
return iter([None])
- valid_objects_gen = (
- (obj, inspect.getsource(module)) for
- name, obj in inspect.getmembers(
- module, inspect.isclass) if ((object_name is None or object_name == name)
- and issubclass(obj, cls.object_type)
- and obj is not cls.object_type)
- )
- return valid_objects_gen
+ module = importlib.util.module_from_spec(spec)
+ try:
+ spec.loader.exec_module(module) # type: ignore # importlib does not use typehints
+ except (ModuleNotFoundError, SyntaxError, ImportError, NameError) as err:
+ # Catch errors in case a specific module is not installed
+ logger.warning(f"Could not import {module_path} due to '{err}'")
+ if enum_failed:
+ return iter([None])
+
+ valid_objects_gen = (
+ (obj, inspect.getsource(module)) for
+ name, obj in inspect.getmembers(
+ module, inspect.isclass) if ((object_name is None or object_name == name)
+ and issubclass(obj, cls.object_type)
+ and obj is not cls.object_type
+ and obj.__module__ == module_name
+ )
+ )
+ # The __module__ check ensures we only use strategies that are defined in this folder.
+ return valid_objects_gen
@classmethod
def _search_object(cls, directory: Path, *, object_name: str, add_source: bool = False
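A small, self-contained illustration of the `obj.__module__ == module_name` filter added by the patch: a class that is only imported into (or re-exported from) another file would otherwise be picked up again when that file is scanned; the patch's own comment notes it "ensures we only use strategies that are defined in this folder". This sketches the idea only and is not the resolver itself:
```
import inspect
from types import ModuleType

def classes_defined_in(module: ModuleType, base_type: type) -> list:
    """Return subclasses of base_type that are defined in `module` itself,
    skipping classes that were only imported into it."""
    return [
        obj for _name, obj in inspect.getmembers(module, inspect.isclass)
        if issubclass(obj, base_type)
        and obj is not base_type
        and obj.__module__ == module.__name__  # the filter the patch relies on
    ]
```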
| {"golden_diff": "diff --git a/freqtrade/resolvers/iresolver.py b/freqtrade/resolvers/iresolver.py\n--- a/freqtrade/resolvers/iresolver.py\n+++ b/freqtrade/resolvers/iresolver.py\n@@ -6,6 +6,7 @@\n import importlib.util\n import inspect\n import logging\n+import sys\n from pathlib import Path\n from typing import Any, Dict, Iterator, List, Optional, Tuple, Type, Union\n \n@@ -15,6 +16,22 @@\n logger = logging.getLogger(__name__)\n \n \n+class PathModifier:\n+ def __init__(self, path: Path):\n+ self.path = path\n+\n+ def __enter__(self):\n+ \"\"\"Inject path to allow importing with relative imports.\"\"\"\n+ sys.path.insert(0, str(self.path))\n+ return self\n+\n+ def __exit__(self, exc_type, exc_val, exc_tb):\n+ \"\"\"Undo insertion of local path.\"\"\"\n+ str_path = str(self.path)\n+ if str_path in sys.path:\n+ sys.path.remove(str_path)\n+\n+\n class IResolver:\n \"\"\"\n This class contains all the logic to load custom classes\n@@ -57,27 +74,32 @@\n \n # Generate spec based on absolute path\n # Pass object_name as first argument to have logging print a reasonable name.\n- spec = importlib.util.spec_from_file_location(object_name or \"\", str(module_path))\n- if not spec:\n- return iter([None])\n-\n- module = importlib.util.module_from_spec(spec)\n- try:\n- spec.loader.exec_module(module) # type: ignore # importlib does not use typehints\n- except (ModuleNotFoundError, SyntaxError, ImportError, NameError) as err:\n- # Catch errors in case a specific module is not installed\n- logger.warning(f\"Could not import {module_path} due to '{err}'\")\n- if enum_failed:\n+ with PathModifier(module_path.parent):\n+ module_name = module_path.stem or \"\"\n+ spec = importlib.util.spec_from_file_location(module_name, str(module_path))\n+ if not spec:\n return iter([None])\n \n- valid_objects_gen = (\n- (obj, inspect.getsource(module)) for\n- name, obj in inspect.getmembers(\n- module, inspect.isclass) if ((object_name is None or object_name == name)\n- and issubclass(obj, cls.object_type)\n- and obj is not cls.object_type)\n- )\n- return valid_objects_gen\n+ module = importlib.util.module_from_spec(spec)\n+ try:\n+ spec.loader.exec_module(module) # type: ignore # importlib does not use typehints\n+ except (ModuleNotFoundError, SyntaxError, ImportError, NameError) as err:\n+ # Catch errors in case a specific module is not installed\n+ logger.warning(f\"Could not import {module_path} due to '{err}'\")\n+ if enum_failed:\n+ return iter([None])\n+\n+ valid_objects_gen = (\n+ (obj, inspect.getsource(module)) for\n+ name, obj in inspect.getmembers(\n+ module, inspect.isclass) if ((object_name is None or object_name == name)\n+ and issubclass(obj, cls.object_type)\n+ and obj is not cls.object_type\n+ and obj.__module__ == module_name\n+ )\n+ )\n+ # The __module__ check ensures we only use strategies that are defined in this folder.\n+ return valid_objects_gen\n \n @classmethod\n def _search_object(cls, directory: Path, *, object_name: str, add_source: bool = False\n", "issue": "inheritance strategy can not load parameter file shared from base class\n<!-- \r\n\r\n\r\n## Describe your environment\r\n\r\n * Operating system: ubuntu\r\n * Python Version: 3.8.10`\r\n * CCXT version: _1.74.63\r\n * Freqtrade Version: fraqtrade develop\r\n \r\n\r\n## Describe the problem:\r\n\r\nif StragegyA derived from StragegyB written in same .py file, \r\nthen both of them share the same json file generated by Hyperopt.\r\n\r\nThey would could conflict with each other when using Hyperopt to optimize one of them after the other 
Strategy was optimized.\r\n\r\n\r\n### Steps to reproduce:\r\n\r\n 1. StragegyB derived from StragegyA and they are in one .py file\r\n 2. optimize StragegyA using Hyperopt ( get file A.json)\r\n 3. optimize StragegyB using Hyperopt \r\n 4. switching step3 and step2 is the same result\r\n \r\n### Observed Results:\r\nload file A.json as StragegyB parameter file, then ERROR\r\n\r\n### Relevant code exceptions or logs\r\n\r\n2022-03-20 00:59:44,224 - freqtrade.optimize.hyperopt_tools - INFO - Dumping parameters to /home/bourne/freqtrade-develop/user_data/strategies/AStrategy.json\r\n\r\n...\r\n\r\n2022-03-20 01:01:06,082 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy BStrategy from '/home/bourne/freqtrade-develop/user_data/strategies/AStrategy.py'...\r\n2022-03-20 01:01:06,082 - freqtrade.strategy.hyper - INFO - Loading parameters from file /home/bourne/freqtrade-develop/user_data/strategies/AStrategy.json\r\n2022-03-20 01:01:06,082 - freqtrade - ERROR - Invalid parameter file provided.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# pragma pylint: disable=attribute-defined-outside-init\n\n\"\"\"\nThis module load custom objects\n\"\"\"\nimport importlib.util\nimport inspect\nimport logging\nfrom pathlib import Path\nfrom typing import Any, Dict, Iterator, List, Optional, Tuple, Type, Union\n\nfrom freqtrade.exceptions import OperationalException\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass IResolver:\n \"\"\"\n This class contains all the logic to load custom classes\n \"\"\"\n # Childclasses need to override this\n object_type: Type[Any]\n object_type_str: str\n user_subdir: Optional[str] = None\n initial_search_path: Optional[Path]\n\n @classmethod\n def build_search_paths(cls, config: Dict[str, Any], user_subdir: Optional[str] = None,\n extra_dir: Optional[str] = None) -> List[Path]:\n\n abs_paths: List[Path] = []\n if cls.initial_search_path:\n abs_paths.append(cls.initial_search_path)\n\n if user_subdir:\n abs_paths.insert(0, config['user_data_dir'].joinpath(user_subdir))\n\n if extra_dir:\n # Add extra directory to the top of the search paths\n abs_paths.insert(0, Path(extra_dir).resolve())\n\n return abs_paths\n\n @classmethod\n def _get_valid_object(cls, module_path: Path, object_name: Optional[str],\n enum_failed: bool = False) -> Iterator[Any]:\n \"\"\"\n Generator returning objects with matching object_type and object_name in the path given.\n :param module_path: absolute path to the module\n :param object_name: Class name of the object\n :param enum_failed: If True, will return None for modules which fail.\n Otherwise, failing modules are skipped.\n :return: generator containing tuple of matching objects\n Tuple format: [Object, source]\n \"\"\"\n\n # Generate spec based on absolute path\n # Pass object_name as first argument to have logging print a reasonable name.\n spec = importlib.util.spec_from_file_location(object_name or \"\", str(module_path))\n if not spec:\n return iter([None])\n\n module = importlib.util.module_from_spec(spec)\n try:\n spec.loader.exec_module(module) # type: ignore # importlib does not use typehints\n except (ModuleNotFoundError, SyntaxError, ImportError, NameError) as err:\n # Catch errors in case a specific module is not installed\n logger.warning(f\"Could not import {module_path} due to '{err}'\")\n if enum_failed:\n return iter([None])\n\n valid_objects_gen = (\n (obj, inspect.getsource(module)) for\n name, obj in inspect.getmembers(\n module, inspect.isclass) if ((object_name is None or object_name == name)\n and 
issubclass(obj, cls.object_type)\n and obj is not cls.object_type)\n )\n return valid_objects_gen\n\n @classmethod\n def _search_object(cls, directory: Path, *, object_name: str, add_source: bool = False\n ) -> Union[Tuple[Any, Path], Tuple[None, None]]:\n \"\"\"\n Search for the objectname in the given directory\n :param directory: relative or absolute directory path\n :param object_name: ClassName of the object to load\n :return: object class\n \"\"\"\n logger.debug(f\"Searching for {cls.object_type.__name__} {object_name} in '{directory}'\")\n for entry in directory.iterdir():\n # Only consider python files\n if entry.suffix != '.py':\n logger.debug('Ignoring %s', entry)\n continue\n if entry.is_symlink() and not entry.is_file():\n logger.debug('Ignoring broken symlink %s', entry)\n continue\n module_path = entry.resolve()\n\n obj = next(cls._get_valid_object(module_path, object_name), None)\n\n if obj:\n obj[0].__file__ = str(entry)\n if add_source:\n obj[0].__source__ = obj[1]\n return (obj[0], module_path)\n return (None, None)\n\n @classmethod\n def _load_object(cls, paths: List[Path], *, object_name: str, add_source: bool = False,\n kwargs: dict = {}) -> Optional[Any]:\n \"\"\"\n Try to load object from path list.\n \"\"\"\n\n for _path in paths:\n try:\n (module, module_path) = cls._search_object(directory=_path,\n object_name=object_name,\n add_source=add_source)\n if module:\n logger.info(\n f\"Using resolved {cls.object_type.__name__.lower()[1:]} {object_name} \"\n f\"from '{module_path}'...\")\n return module(**kwargs)\n except FileNotFoundError:\n logger.warning('Path \"%s\" does not exist.', _path.resolve())\n\n return None\n\n @classmethod\n def load_object(cls, object_name: str, config: dict, *, kwargs: dict,\n extra_dir: Optional[str] = None) -> Any:\n \"\"\"\n Search and loads the specified object as configured in hte child class.\n :param object_name: name of the module to import\n :param config: configuration dictionary\n :param extra_dir: additional directory to search for the given pairlist\n :raises: OperationalException if the class is invalid or does not exist.\n :return: Object instance or None\n \"\"\"\n\n abs_paths = cls.build_search_paths(config,\n user_subdir=cls.user_subdir,\n extra_dir=extra_dir)\n\n found_object = cls._load_object(paths=abs_paths, object_name=object_name,\n kwargs=kwargs)\n if found_object:\n return found_object\n raise OperationalException(\n f\"Impossible to load {cls.object_type_str} '{object_name}'. 
This class does not exist \"\n \"or contains Python code errors.\"\n )\n\n @classmethod\n def search_all_objects(cls, directory: Path,\n enum_failed: bool) -> List[Dict[str, Any]]:\n \"\"\"\n Searches a directory for valid objects\n :param directory: Path to search\n :param enum_failed: If True, will return None for modules which fail.\n Otherwise, failing modules are skipped.\n :return: List of dicts containing 'name', 'class' and 'location' entries\n \"\"\"\n logger.debug(f\"Searching for {cls.object_type.__name__} '{directory}'\")\n objects = []\n for entry in directory.iterdir():\n # Only consider python files\n if entry.suffix != '.py':\n logger.debug('Ignoring %s', entry)\n continue\n module_path = entry.resolve()\n logger.debug(f\"Path {module_path}\")\n for obj in cls._get_valid_object(module_path, object_name=None,\n enum_failed=enum_failed):\n objects.append(\n {'name': obj[0].__name__ if obj is not None else '',\n 'class': obj[0] if obj is not None else None,\n 'location': entry,\n })\n return objects\n", "path": "freqtrade/resolvers/iresolver.py"}], "after_files": [{"content": "# pragma pylint: disable=attribute-defined-outside-init\n\n\"\"\"\nThis module load custom objects\n\"\"\"\nimport importlib.util\nimport inspect\nimport logging\nimport sys\nfrom pathlib import Path\nfrom typing import Any, Dict, Iterator, List, Optional, Tuple, Type, Union\n\nfrom freqtrade.exceptions import OperationalException\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PathModifier:\n def __init__(self, path: Path):\n self.path = path\n\n def __enter__(self):\n \"\"\"Inject path to allow importing with relative imports.\"\"\"\n sys.path.insert(0, str(self.path))\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n \"\"\"Undo insertion of local path.\"\"\"\n str_path = str(self.path)\n if str_path in sys.path:\n sys.path.remove(str_path)\n\n\nclass IResolver:\n \"\"\"\n This class contains all the logic to load custom classes\n \"\"\"\n # Childclasses need to override this\n object_type: Type[Any]\n object_type_str: str\n user_subdir: Optional[str] = None\n initial_search_path: Optional[Path]\n\n @classmethod\n def build_search_paths(cls, config: Dict[str, Any], user_subdir: Optional[str] = None,\n extra_dir: Optional[str] = None) -> List[Path]:\n\n abs_paths: List[Path] = []\n if cls.initial_search_path:\n abs_paths.append(cls.initial_search_path)\n\n if user_subdir:\n abs_paths.insert(0, config['user_data_dir'].joinpath(user_subdir))\n\n if extra_dir:\n # Add extra directory to the top of the search paths\n abs_paths.insert(0, Path(extra_dir).resolve())\n\n return abs_paths\n\n @classmethod\n def _get_valid_object(cls, module_path: Path, object_name: Optional[str],\n enum_failed: bool = False) -> Iterator[Any]:\n \"\"\"\n Generator returning objects with matching object_type and object_name in the path given.\n :param module_path: absolute path to the module\n :param object_name: Class name of the object\n :param enum_failed: If True, will return None for modules which fail.\n Otherwise, failing modules are skipped.\n :return: generator containing tuple of matching objects\n Tuple format: [Object, source]\n \"\"\"\n\n # Generate spec based on absolute path\n # Pass object_name as first argument to have logging print a reasonable name.\n with PathModifier(module_path.parent):\n module_name = module_path.stem or \"\"\n spec = importlib.util.spec_from_file_location(module_name, str(module_path))\n if not spec:\n return iter([None])\n\n module = 
importlib.util.module_from_spec(spec)\n try:\n spec.loader.exec_module(module) # type: ignore # importlib does not use typehints\n except (ModuleNotFoundError, SyntaxError, ImportError, NameError) as err:\n # Catch errors in case a specific module is not installed\n logger.warning(f\"Could not import {module_path} due to '{err}'\")\n if enum_failed:\n return iter([None])\n\n valid_objects_gen = (\n (obj, inspect.getsource(module)) for\n name, obj in inspect.getmembers(\n module, inspect.isclass) if ((object_name is None or object_name == name)\n and issubclass(obj, cls.object_type)\n and obj is not cls.object_type\n and obj.__module__ == module_name\n )\n )\n # The __module__ check ensures we only use strategies that are defined in this folder.\n return valid_objects_gen\n\n @classmethod\n def _search_object(cls, directory: Path, *, object_name: str, add_source: bool = False\n ) -> Union[Tuple[Any, Path], Tuple[None, None]]:\n \"\"\"\n Search for the objectname in the given directory\n :param directory: relative or absolute directory path\n :param object_name: ClassName of the object to load\n :return: object class\n \"\"\"\n logger.debug(f\"Searching for {cls.object_type.__name__} {object_name} in '{directory}'\")\n for entry in directory.iterdir():\n # Only consider python files\n if entry.suffix != '.py':\n logger.debug('Ignoring %s', entry)\n continue\n if entry.is_symlink() and not entry.is_file():\n logger.debug('Ignoring broken symlink %s', entry)\n continue\n module_path = entry.resolve()\n\n obj = next(cls._get_valid_object(module_path, object_name), None)\n\n if obj:\n obj[0].__file__ = str(entry)\n if add_source:\n obj[0].__source__ = obj[1]\n return (obj[0], module_path)\n return (None, None)\n\n @classmethod\n def _load_object(cls, paths: List[Path], *, object_name: str, add_source: bool = False,\n kwargs: dict = {}) -> Optional[Any]:\n \"\"\"\n Try to load object from path list.\n \"\"\"\n\n for _path in paths:\n try:\n (module, module_path) = cls._search_object(directory=_path,\n object_name=object_name,\n add_source=add_source)\n if module:\n logger.info(\n f\"Using resolved {cls.object_type.__name__.lower()[1:]} {object_name} \"\n f\"from '{module_path}'...\")\n return module(**kwargs)\n except FileNotFoundError:\n logger.warning('Path \"%s\" does not exist.', _path.resolve())\n\n return None\n\n @classmethod\n def load_object(cls, object_name: str, config: dict, *, kwargs: dict,\n extra_dir: Optional[str] = None) -> Any:\n \"\"\"\n Search and loads the specified object as configured in hte child class.\n :param object_name: name of the module to import\n :param config: configuration dictionary\n :param extra_dir: additional directory to search for the given pairlist\n :raises: OperationalException if the class is invalid or does not exist.\n :return: Object instance or None\n \"\"\"\n\n abs_paths = cls.build_search_paths(config,\n user_subdir=cls.user_subdir,\n extra_dir=extra_dir)\n\n found_object = cls._load_object(paths=abs_paths, object_name=object_name,\n kwargs=kwargs)\n if found_object:\n return found_object\n raise OperationalException(\n f\"Impossible to load {cls.object_type_str} '{object_name}'. 
This class does not exist \"\n \"or contains Python code errors.\"\n )\n\n @classmethod\n def search_all_objects(cls, directory: Path,\n enum_failed: bool) -> List[Dict[str, Any]]:\n \"\"\"\n Searches a directory for valid objects\n :param directory: Path to search\n :param enum_failed: If True, will return None for modules which fail.\n Otherwise, failing modules are skipped.\n :return: List of dicts containing 'name', 'class' and 'location' entries\n \"\"\"\n logger.debug(f\"Searching for {cls.object_type.__name__} '{directory}'\")\n objects = []\n for entry in directory.iterdir():\n # Only consider python files\n if entry.suffix != '.py':\n logger.debug('Ignoring %s', entry)\n continue\n module_path = entry.resolve()\n logger.debug(f\"Path {module_path}\")\n for obj in cls._get_valid_object(module_path, object_name=None,\n enum_failed=enum_failed):\n objects.append(\n {'name': obj[0].__name__ if obj is not None else '',\n 'class': obj[0] if obj is not None else None,\n 'location': entry,\n })\n return objects\n", "path": "freqtrade/resolvers/iresolver.py"}]} | 2,667 | 796 |
gh_patches_debug_9537 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-1452 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SMAPE formula typo
## 📚 Documentation
There's a typo in the [SMAPE formula](https://torchmetrics.readthedocs.io/en/stable/regression/symmetric_mean_absolute_percentage_error.html). It should be `{SMAPE} = \frac{2}{n}\sum_1^n\frac{| y_i - \hat{y_i} |}{\max(| y_i | + | \hat{y_i} |, \epsilon)}` instead of `{SMAPE} = \frac{2}{n}\sum_1^n max(\frac{| y_i - \hat{y_i} |}{| y_i | + | \hat{y_i} |, \epsilon})`. The attached screenshot shows the typo and its correction.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/torchmetrics/regression/symmetric_mape.py`
Content:
```
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Any
15
16 from torch import Tensor, tensor
17
18 from torchmetrics.functional.regression.symmetric_mape import (
19 _symmetric_mean_absolute_percentage_error_compute,
20 _symmetric_mean_absolute_percentage_error_update,
21 )
22 from torchmetrics.metric import Metric
23
24
25 class SymmetricMeanAbsolutePercentageError(Metric):
26 r"""Computes symmetric mean absolute percentage error (`SMAPE`_).
27
28 .. math:: \text{SMAPE} = \frac{2}{n}\sum_1^n max(\frac{| y_i - \hat{y_i} |}{| y_i | + | \hat{y_i} |, \epsilon})
29
30 Where :math:`y` is a tensor of target values, and :math:`\hat{y}` is a tensor of predictions.
31
32 As input to ``forward`` and ``update`` the metric accepts the following input:
33
34 - ``preds`` (:class:`~torch.Tensor`): Predictions from model
35 - ``target`` (:class:`~torch.Tensor`): Ground truth values
36
37 As output of ``forward`` and ``compute`` the metric returns the following output:
38
39 - ``smape`` (:class:`~torch.Tensor`): A tensor with non-negative floating point smape value between 0 and 1
40
41 Args:
42 kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
43
44 Example:
45 >>> from torchmetrics import SymmetricMeanAbsolutePercentageError
46 >>> target = tensor([1, 10, 1e6])
47 >>> preds = tensor([0.9, 15, 1.2e6])
48 >>> smape = SymmetricMeanAbsolutePercentageError()
49 >>> smape(preds, target)
50 tensor(0.2290)
51 """
52 is_differentiable: bool = True
53 higher_is_better: bool = False
54 full_state_update: bool = False
55 sum_abs_per_error: Tensor
56 total: Tensor
57
58 def __init__(
59 self,
60 **kwargs: Any,
61 ) -> None:
62 super().__init__(**kwargs)
63
64 self.add_state("sum_abs_per_error", default=tensor(0.0), dist_reduce_fx="sum")
65 self.add_state("total", default=tensor(0.0), dist_reduce_fx="sum")
66
67 def update(self, preds: Tensor, target: Tensor) -> None: # type: ignore
68 """Update state with predictions and targets."""
69 sum_abs_per_error, num_obs = _symmetric_mean_absolute_percentage_error_update(preds, target)
70
71 self.sum_abs_per_error += sum_abs_per_error
72 self.total += num_obs
73
74 def compute(self) -> Tensor:
75 """Computes mean absolute percentage error over state."""
76 return _symmetric_mean_absolute_percentage_error_compute(self.sum_abs_per_error, self.total)
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/torchmetrics/regression/symmetric_mape.py b/src/torchmetrics/regression/symmetric_mape.py
--- a/src/torchmetrics/regression/symmetric_mape.py
+++ b/src/torchmetrics/regression/symmetric_mape.py
@@ -25,7 +25,7 @@
class SymmetricMeanAbsolutePercentageError(Metric):
r"""Computes symmetric mean absolute percentage error (`SMAPE`_).
- .. math:: \text{SMAPE} = \frac{2}{n}\sum_1^n max(\frac{| y_i - \hat{y_i} |}{| y_i | + | \hat{y_i} |, \epsilon})
+ .. math:: \text{SMAPE} = \frac{2}{n}\sum_1^n\frac{| y_i - \hat{y_i} |}{\max(| y_i | + | \hat{y_i} |, \epsilon)}
Where :math:`y` is a tensor of target values, and :math:`\hat{y}` is a tensor of predictions.
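For readers who want to sanity-check the corrected formula, the following is a minimal plain-Python sketch of the math in the patched docstring. It is illustrative only, not the torchmetrics implementation, and the `eps` default here is an arbitrary placeholder rather than the library's actual epsilon:

```python
# Sketch of the corrected SMAPE formula:
#   SMAPE = (2 / n) * sum(|y_i - yhat_i| / max(|y_i| + |yhat_i|, eps))
def smape(preds, target, eps=1e-8):
    assert len(preds) == len(target)
    total = sum(
        abs(t - p) / max(abs(t) + abs(p), eps)
        for p, t in zip(preds, target)
    )
    return 2.0 * total / len(preds)

# Reproduces the docstring example above:
# smape([0.9, 15, 1.2e6], [1, 10, 1e6]) ≈ 0.2290
```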
| {"golden_diff": "diff --git a/src/torchmetrics/regression/symmetric_mape.py b/src/torchmetrics/regression/symmetric_mape.py\n--- a/src/torchmetrics/regression/symmetric_mape.py\n+++ b/src/torchmetrics/regression/symmetric_mape.py\n@@ -25,7 +25,7 @@\n class SymmetricMeanAbsolutePercentageError(Metric):\n r\"\"\"Computes symmetric mean absolute percentage error (`SMAPE`_).\n \n- .. math:: \\text{SMAPE} = \\frac{2}{n}\\sum_1^n max(\\frac{| y_i - \\hat{y_i} |}{| y_i | + | \\hat{y_i} |, \\epsilon})\n+ .. math:: \\text{SMAPE} = \\frac{2}{n}\\sum_1^n\\frac{| y_i - \\hat{y_i} |}{\\max(| y_i | + | \\hat{y_i} |, \\epsilon)}\n \n Where :math:`y` is a tensor of target values, and :math:`\\hat{y}` is a tensor of predictions.\n", "issue": "SMAPE formula typo\n## \ud83d\udcda Documentation\r\n\r\n\r\nThere's a typo in the [SMAPE formula](https://torchmetrics.readthedocs.io/en/stable/regression/symmetric_mean_absolute_percentage_error.html). It should be `{SMAPE} = \\frac{2}{n}\\sum_1^n\\frac{| y_i - \\hat{y_i} |}{\\max(| y_i | + | \\hat{y_i} |, \\epsilon)}` instead of `{SMAPE} = \\frac{2}{n}\\sum_1^n max(\\frac{| y_i - \\hat{y_i} |}{| y_i | + | \\hat{y_i} |, \\epsilon})`. The attached screenshot shows the typo and its correction.\r\n\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any\n\nfrom torch import Tensor, tensor\n\nfrom torchmetrics.functional.regression.symmetric_mape import (\n _symmetric_mean_absolute_percentage_error_compute,\n _symmetric_mean_absolute_percentage_error_update,\n)\nfrom torchmetrics.metric import Metric\n\n\nclass SymmetricMeanAbsolutePercentageError(Metric):\n r\"\"\"Computes symmetric mean absolute percentage error (`SMAPE`_).\n\n .. 
math:: \\text{SMAPE} = \\frac{2}{n}\\sum_1^n max(\\frac{| y_i - \\hat{y_i} |}{| y_i | + | \\hat{y_i} |, \\epsilon})\n\n Where :math:`y` is a tensor of target values, and :math:`\\hat{y}` is a tensor of predictions.\n\n As input to ``forward`` and ``update`` the metric accepts the following input:\n\n - ``preds`` (:class:`~torch.Tensor`): Predictions from model\n - ``target`` (:class:`~torch.Tensor`): Ground truth values\n\n As output of ``forward`` and ``compute`` the metric returns the following output:\n\n - ``smape`` (:class:`~torch.Tensor`): A tensor with non-negative floating point smape value between 0 and 1\n\n Args:\n kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Example:\n >>> from torchmetrics import SymmetricMeanAbsolutePercentageError\n >>> target = tensor([1, 10, 1e6])\n >>> preds = tensor([0.9, 15, 1.2e6])\n >>> smape = SymmetricMeanAbsolutePercentageError()\n >>> smape(preds, target)\n tensor(0.2290)\n \"\"\"\n is_differentiable: bool = True\n higher_is_better: bool = False\n full_state_update: bool = False\n sum_abs_per_error: Tensor\n total: Tensor\n\n def __init__(\n self,\n **kwargs: Any,\n ) -> None:\n super().__init__(**kwargs)\n\n self.add_state(\"sum_abs_per_error\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n self.add_state(\"total\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n\n def update(self, preds: Tensor, target: Tensor) -> None: # type: ignore\n \"\"\"Update state with predictions and targets.\"\"\"\n sum_abs_per_error, num_obs = _symmetric_mean_absolute_percentage_error_update(preds, target)\n\n self.sum_abs_per_error += sum_abs_per_error\n self.total += num_obs\n\n def compute(self) -> Tensor:\n \"\"\"Computes mean absolute percentage error over state.\"\"\"\n return _symmetric_mean_absolute_percentage_error_compute(self.sum_abs_per_error, self.total)\n", "path": "src/torchmetrics/regression/symmetric_mape.py"}], "after_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any\n\nfrom torch import Tensor, tensor\n\nfrom torchmetrics.functional.regression.symmetric_mape import (\n _symmetric_mean_absolute_percentage_error_compute,\n _symmetric_mean_absolute_percentage_error_update,\n)\nfrom torchmetrics.metric import Metric\n\n\nclass SymmetricMeanAbsolutePercentageError(Metric):\n r\"\"\"Computes symmetric mean absolute percentage error (`SMAPE`_).\n\n .. 
math:: \\text{SMAPE} = \\frac{2}{n}\\sum_1^n\\frac{| y_i - \\hat{y_i} |}{\\max(| y_i | + | \\hat{y_i} |, \\epsilon)}\n\n Where :math:`y` is a tensor of target values, and :math:`\\hat{y}` is a tensor of predictions.\n\n As input to ``forward`` and ``update`` the metric accepts the following input:\n\n - ``preds`` (:class:`~torch.Tensor`): Predictions from model\n - ``target`` (:class:`~torch.Tensor`): Ground truth values\n\n As output of ``forward`` and ``compute`` the metric returns the following output:\n\n - ``smape`` (:class:`~torch.Tensor`): A tensor with non-negative floating point smape value between 0 and 1\n\n Args:\n kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Example:\n >>> from torchmetrics import SymmetricMeanAbsolutePercentageError\n >>> target = tensor([1, 10, 1e6])\n >>> preds = tensor([0.9, 15, 1.2e6])\n >>> smape = SymmetricMeanAbsolutePercentageError()\n >>> smape(preds, target)\n tensor(0.2290)\n \"\"\"\n is_differentiable: bool = True\n higher_is_better: bool = False\n full_state_update: bool = False\n sum_abs_per_error: Tensor\n total: Tensor\n\n def __init__(\n self,\n **kwargs: Any,\n ) -> None:\n super().__init__(**kwargs)\n\n self.add_state(\"sum_abs_per_error\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n self.add_state(\"total\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n\n def update(self, preds: Tensor, target: Tensor) -> None: # type: ignore\n \"\"\"Update state with predictions and targets.\"\"\"\n sum_abs_per_error, num_obs = _symmetric_mean_absolute_percentage_error_update(preds, target)\n\n self.sum_abs_per_error += sum_abs_per_error\n self.total += num_obs\n\n def compute(self) -> Tensor:\n \"\"\"Computes mean absolute percentage error over state.\"\"\"\n return _symmetric_mean_absolute_percentage_error_compute(self.sum_abs_per_error, self.total)\n", "path": "src/torchmetrics/regression/symmetric_mape.py"}]} | 1,379 | 241 |
gh_patches_debug_11153 | rasdani/github-patches | git_diff | open-mmlab__mmsegmentation-19 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FileNotFoundError: [Errno 2] No such file or directory: 'VOCdevkit/VOCaug/dataset/trainval.txt'
https://github.com/open-mmlab/mmsegmentation/blob/1c3f54765981ba352d4cf6582edb1c8915e51d71/tools/convert_datasets/voc_aug.py#L53
The directory `VOCdevkit/VOCaug/dataset` does not contain a `trainval.txt`; is `trainval.txt` supposed to be the merger of `train.txt` and `val.txt`?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/convert_datasets/voc_aug.py`
Content:
```
1 import argparse
2 import os.path as osp
3 from functools import partial
4
5 import mmcv
6 import numpy as np
7 from PIL import Image
8 from scipy.io import loadmat
9
10 AUG_LEN = 10582
11
12
13 def convert_mat(mat_file, in_dir, out_dir):
14 data = loadmat(osp.join(in_dir, mat_file))
15 mask = data['GTcls'][0]['Segmentation'][0].astype(np.uint8)
16 seg_filename = osp.join(out_dir, mat_file.replace('.mat', '.png'))
17 Image.fromarray(mask).save(seg_filename, 'PNG')
18
19
20 def generate_aug_list(merged_list, excluded_list):
21 return list(set(merged_list) - set(excluded_list))
22
23
24 def parse_args():
25 parser = argparse.ArgumentParser(
26 description='Convert PASCAL VOC annotations to mmsegmentation format')
27 parser.add_argument('devkit_path', help='pascal voc devkit path')
28 parser.add_argument('aug_path', help='pascal voc aug path')
29 parser.add_argument('-o', '--out_dir', help='output path')
30 parser.add_argument(
31 '--nproc', default=1, type=int, help='number of process')
32 args = parser.parse_args()
33 return args
34
35
36 def main():
37 args = parse_args()
38 devkit_path = args.devkit_path
39 aug_path = args.aug_path
40 nproc = args.nproc
41 if args.out_dir is None:
42 out_dir = osp.join(devkit_path, 'VOC2012', 'SegmentationClassAug')
43 else:
44 out_dir = args.out_dir
45 mmcv.mkdir_or_exist(out_dir)
46 in_dir = osp.join(aug_path, 'dataset', 'cls')
47
48 mmcv.track_parallel_progress(
49 partial(convert_mat, in_dir=in_dir, out_dir=out_dir),
50 list(mmcv.scandir(in_dir, suffix='.mat')),
51 nproc=nproc)
52
53 with open(osp.join(aug_path, 'dataset', 'trainval.txt')) as f:
54 full_aug_list = [line.strip() for line in f]
55 with open(
56 osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',
57 'train.txt')) as f:
58 ori_train_list = [line.strip() for line in f]
59 with open(
60 osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',
61 'val.txt')) as f:
62 val_list = [line.strip() for line in f]
63
64 aug_train_list = generate_aug_list(ori_train_list + full_aug_list,
65 val_list)
66 assert len(aug_train_list) == AUG_LEN, 'len(aug_train_list) != {}'.format(
67 AUG_LEN)
68
69 with open(
70 osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',
71 'trainaug.txt'), 'w') as f:
72 f.writelines(line + '\n' for line in aug_train_list)
73
74 aug_list = generate_aug_list(full_aug_list, ori_train_list + val_list)
75 assert len(aug_list) == AUG_LEN - len(
76 ori_train_list), 'len(aug_list) != {}'.format(AUG_LEN -
77 len(ori_train_list))
78 with open(
79 osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', 'aug.txt'),
80 'w') as f:
81 f.writelines(line + '\n' for line in aug_list)
82
83 print('Done!')
84
85
86 if __name__ == '__main__':
87 main()
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/tools/convert_datasets/voc_aug.py b/tools/convert_datasets/voc_aug.py
--- a/tools/convert_datasets/voc_aug.py
+++ b/tools/convert_datasets/voc_aug.py
@@ -50,8 +50,12 @@
list(mmcv.scandir(in_dir, suffix='.mat')),
nproc=nproc)
- with open(osp.join(aug_path, 'dataset', 'trainval.txt')) as f:
- full_aug_list = [line.strip() for line in f]
+ full_aug_list = []
+ with open(osp.join(aug_path, 'dataset', 'train.txt')) as f:
+ full_aug_list += [line.strip() for line in f]
+ with open(osp.join(aug_path, 'dataset', 'val.txt')) as f:
+ full_aug_list += [line.strip() for line in f]
+
with open(
osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',
'train.txt')) as f:
| {"golden_diff": "diff --git a/tools/convert_datasets/voc_aug.py b/tools/convert_datasets/voc_aug.py\n--- a/tools/convert_datasets/voc_aug.py\n+++ b/tools/convert_datasets/voc_aug.py\n@@ -50,8 +50,12 @@\n list(mmcv.scandir(in_dir, suffix='.mat')),\n nproc=nproc)\n \n- with open(osp.join(aug_path, 'dataset', 'trainval.txt')) as f:\n- full_aug_list = [line.strip() for line in f]\n+ full_aug_list = []\n+ with open(osp.join(aug_path, 'dataset', 'train.txt')) as f:\n+ full_aug_list += [line.strip() for line in f]\n+ with open(osp.join(aug_path, 'dataset', 'val.txt')) as f:\n+ full_aug_list += [line.strip() for line in f]\n+\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'train.txt')) as f:\n", "issue": "FileNotFoundError: [Errno 2] No such file or directory: 'VOCdevkit/VOCaug/dataset/trainval.txt'\nhttps://github.com/open-mmlab/mmsegmentation/blob/1c3f54765981ba352d4cf6582edb1c8915e51d71/tools/convert_datasets/voc_aug.py#L53\r\n\r\nDirectory `VOCdevkit/VOCaug/dataset` does not exist `trainval.txt`, `trainval.txt` is the merger of `train.txt` and `val.txt`?\n", "before_files": [{"content": "import argparse\nimport os.path as osp\nfrom functools import partial\n\nimport mmcv\nimport numpy as np\nfrom PIL import Image\nfrom scipy.io import loadmat\n\nAUG_LEN = 10582\n\n\ndef convert_mat(mat_file, in_dir, out_dir):\n data = loadmat(osp.join(in_dir, mat_file))\n mask = data['GTcls'][0]['Segmentation'][0].astype(np.uint8)\n seg_filename = osp.join(out_dir, mat_file.replace('.mat', '.png'))\n Image.fromarray(mask).save(seg_filename, 'PNG')\n\n\ndef generate_aug_list(merged_list, excluded_list):\n return list(set(merged_list) - set(excluded_list))\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(\n description='Convert PASCAL VOC annotations to mmsegmentation format')\n parser.add_argument('devkit_path', help='pascal voc devkit path')\n parser.add_argument('aug_path', help='pascal voc aug path')\n parser.add_argument('-o', '--out_dir', help='output path')\n parser.add_argument(\n '--nproc', default=1, type=int, help='number of process')\n args = parser.parse_args()\n return args\n\n\ndef main():\n args = parse_args()\n devkit_path = args.devkit_path\n aug_path = args.aug_path\n nproc = args.nproc\n if args.out_dir is None:\n out_dir = osp.join(devkit_path, 'VOC2012', 'SegmentationClassAug')\n else:\n out_dir = args.out_dir\n mmcv.mkdir_or_exist(out_dir)\n in_dir = osp.join(aug_path, 'dataset', 'cls')\n\n mmcv.track_parallel_progress(\n partial(convert_mat, in_dir=in_dir, out_dir=out_dir),\n list(mmcv.scandir(in_dir, suffix='.mat')),\n nproc=nproc)\n\n with open(osp.join(aug_path, 'dataset', 'trainval.txt')) as f:\n full_aug_list = [line.strip() for line in f]\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'train.txt')) as f:\n ori_train_list = [line.strip() for line in f]\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'val.txt')) as f:\n val_list = [line.strip() for line in f]\n\n aug_train_list = generate_aug_list(ori_train_list + full_aug_list,\n val_list)\n assert len(aug_train_list) == AUG_LEN, 'len(aug_train_list) != {}'.format(\n AUG_LEN)\n\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'trainaug.txt'), 'w') as f:\n f.writelines(line + '\\n' for line in aug_train_list)\n\n aug_list = generate_aug_list(full_aug_list, ori_train_list + val_list)\n assert len(aug_list) == AUG_LEN - len(\n ori_train_list), 'len(aug_list) != {}'.format(AUG_LEN -\n len(ori_train_list))\n with open(\n 
osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', 'aug.txt'),\n 'w') as f:\n f.writelines(line + '\\n' for line in aug_list)\n\n print('Done!')\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/convert_datasets/voc_aug.py"}], "after_files": [{"content": "import argparse\nimport os.path as osp\nfrom functools import partial\n\nimport mmcv\nimport numpy as np\nfrom PIL import Image\nfrom scipy.io import loadmat\n\nAUG_LEN = 10582\n\n\ndef convert_mat(mat_file, in_dir, out_dir):\n data = loadmat(osp.join(in_dir, mat_file))\n mask = data['GTcls'][0]['Segmentation'][0].astype(np.uint8)\n seg_filename = osp.join(out_dir, mat_file.replace('.mat', '.png'))\n Image.fromarray(mask).save(seg_filename, 'PNG')\n\n\ndef generate_aug_list(merged_list, excluded_list):\n return list(set(merged_list) - set(excluded_list))\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(\n description='Convert PASCAL VOC annotations to mmsegmentation format')\n parser.add_argument('devkit_path', help='pascal voc devkit path')\n parser.add_argument('aug_path', help='pascal voc aug path')\n parser.add_argument('-o', '--out_dir', help='output path')\n parser.add_argument(\n '--nproc', default=1, type=int, help='number of process')\n args = parser.parse_args()\n return args\n\n\ndef main():\n args = parse_args()\n devkit_path = args.devkit_path\n aug_path = args.aug_path\n nproc = args.nproc\n if args.out_dir is None:\n out_dir = osp.join(devkit_path, 'VOC2012', 'SegmentationClassAug')\n else:\n out_dir = args.out_dir\n mmcv.mkdir_or_exist(out_dir)\n in_dir = osp.join(aug_path, 'dataset', 'cls')\n\n mmcv.track_parallel_progress(\n partial(convert_mat, in_dir=in_dir, out_dir=out_dir),\n list(mmcv.scandir(in_dir, suffix='.mat')),\n nproc=nproc)\n\n full_aug_list = []\n with open(osp.join(aug_path, 'dataset', 'train.txt')) as f:\n full_aug_list += [line.strip() for line in f]\n with open(osp.join(aug_path, 'dataset', 'val.txt')) as f:\n full_aug_list += [line.strip() for line in f]\n\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'train.txt')) as f:\n ori_train_list = [line.strip() for line in f]\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'val.txt')) as f:\n val_list = [line.strip() for line in f]\n\n aug_train_list = generate_aug_list(ori_train_list + full_aug_list,\n val_list)\n assert len(aug_train_list) == AUG_LEN, 'len(aug_train_list) != {}'.format(\n AUG_LEN)\n\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation',\n 'trainaug.txt'), 'w') as f:\n f.writelines(line + '\\n' for line in aug_train_list)\n\n aug_list = generate_aug_list(full_aug_list, ori_train_list + val_list)\n assert len(aug_list) == AUG_LEN - len(\n ori_train_list), 'len(aug_list) != {}'.format(AUG_LEN -\n len(ori_train_list))\n with open(\n osp.join(devkit_path, 'VOC2012/ImageSets/Segmentation', 'aug.txt'),\n 'w') as f:\n f.writelines(line + '\\n' for line in aug_list)\n\n print('Done!')\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/convert_datasets/voc_aug.py"}]} | 1,328 | 228 |
gh_patches_debug_27576 | rasdani/github-patches | git_diff | pytorch__text-192 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
len of iterator incorrect for dynamic batching
The `__len__` method of `Iterator` (defined [here](https://github.com/pytorch/text/blob/master/torchtext/data/iterator.py#L157)) returns an incorrect result for dynamic batching (i.e. when [batch_size_fn](https://github.com/pytorch/text/blob/master/torchtext/data/iterator.py#L50) is not None).
For example if we set `batch_size_fn` to
```
lambda x, n, b: b + len(x.text)
```
there might be more or fewer batches in the dataset than `__len__` reports.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchtext/data/iterator.py`
Content:
```
1 from __future__ import division
2
3 import math
4 import random
5 from contextlib import contextmanager
6 from copy import deepcopy
7
8 from .batch import Batch
9 from .dataset import Dataset
10
11
12 class RandomShuffler(object):
13 """Use random functions while keeping track of the random state to make it
14 reproducible and deterministic."""
15
16 def __init__(self, random_state=None):
17 self._random_state = random_state
18 if self._random_state is None:
19 self._random_state = random.getstate()
20
21 @contextmanager
22 def use_internal_state(self):
23 """Use a specific RNG state."""
24 old_state = random.getstate()
25 random.setstate(self._random_state)
26 yield
27 self._random_state = random.getstate()
28 random.setstate(old_state)
29
30 @property
31 def random_state(self):
32 return deepcopy(self._random_state)
33
34 @random_state.setter
35 def random_state(self, s):
36 self._random_state = s
37
38 def __call__(self, data):
39 """Shuffle and return a new list."""
40 with self.use_internal_state():
41 return random.sample(data, len(data))
42
43
44 class Iterator(object):
45 """Defines an iterator that loads batches of data from a Dataset.
46
47 Attributes:
48 dataset: The Dataset object to load Examples from.
49 batch_size: Batch size.
50 batch_size_fn: Function of three arguments (new example to add, current
51 count of examples in the batch, and current effective batch size)
52 that returns the new effective batch size resulting from adding
53 that example to a batch. This is useful for dynamic batching, where
54 this function would add to the current effective batch size the
55 number of tokens in the new example.
56 sort_key: A key to use for sorting examples in order to batch together
57 examples with similar lengths and minimize padding. The sort_key
58 provided to the Iterator constructor overrides the sort_key
59 attribute of the Dataset, or defers to it if None.
60 train: Whether the iterator represents a train set.
61 repeat: Whether to repeat the iterator for multiple epochs.
62 shuffle: Whether to shuffle examples between epochs.
63 sort: Whether to sort examples according to self.sort_key.
64 Note that repeat, shuffle, and sort default to train, train, and
65 (not train).
66 sort_within_batch: Whether to sort (in descending order according to
67 self.sort_key) within each batch. If None, defaults to self.sort.
68 If self.sort is True and this is False, the batch is left in the
69 original (ascending) sorted order.
70 device: Device to create batches on. Use -1 for CPU and None for the
71 currently active GPU device.
72 """
73
74 def __init__(self, dataset, batch_size, sort_key=None, device=None,
75 batch_size_fn=lambda new, count, sofar: count, train=True,
76 repeat=None, shuffle=None, sort=None,
77 sort_within_batch=None):
78 self.batch_size, self.train, self.dataset = batch_size, train, dataset
79 self.batch_size_fn = batch_size_fn
80 self.iterations = 0
81 self.repeat = train if repeat is None else repeat
82 self.shuffle = train if shuffle is None else shuffle
83 self.sort = not train if sort is None else sort
84 if sort_within_batch is None:
85 self.sort_within_batch = self.sort
86 else:
87 self.sort_within_batch = sort_within_batch
88 if sort_key is None:
89 self.sort_key = dataset.sort_key
90 else:
91 self.sort_key = sort_key
92 self.device = device
93
94 self.random_shuffler = RandomShuffler()
95
96 # For state loading/saving only
97 self._iterations_this_epoch = 0
98 self._random_state_this_epoch = None
99 self._restored_from_state = False
100
101 @classmethod
102 def splits(cls, datasets, batch_sizes=None, **kwargs):
103 """Create Iterator objects for multiple splits of a dataset.
104
105 Arguments:
106 datasets: Tuple of Dataset objects corresponding to the splits. The
107 first such object should be the train set.
108 batch_sizes: Tuple of batch sizes to use for the different splits,
109 or None to use the same batch_size for all splits.
110 Remaining keyword arguments: Passed to the constructor of the
111 iterator class being used.
112 """
113 if batch_sizes is None:
114 batch_sizes = [kwargs.pop('batch_size')] * len(datasets)
115 ret = []
116 for i in range(len(datasets)):
117 train = i == 0
118 ret.append(cls(
119 datasets[i], batch_size=batch_sizes[i], train=train, **kwargs))
120 return tuple(ret)
121
122 def data(self):
123 """Return the examples in the dataset in order, sorted, or shuffled."""
124 if self.sort:
125 xs = sorted(self.dataset, key=self.sort_key)
126 elif self.shuffle:
127 xs = [self.dataset[i] for i in self.random_shuffler(range(len(self.dataset)))]
128 else:
129 xs = self.dataset
130 return xs
131
132 def init_epoch(self):
133 """Set up the batch generator for a new epoch."""
134
135 if self._restored_from_state:
136 self.random_shuffler.random_state = self._random_state_this_epoch
137 else:
138 self._random_state_this_epoch = self.random_shuffler.random_state
139
140 self.create_batches()
141
142 if self._restored_from_state:
143 self._restored_from_state = False
144 else:
145 self._iterations_this_epoch = 0
146
147 if not self.repeat:
148 self.iterations = 0
149
150 def create_batches(self):
151 self.batches = batch(self.data(), self.batch_size, self.batch_size_fn)
152
153 @property
154 def epoch(self):
155 return self.iterations / len(self)
156
157 def __len__(self):
158 return math.ceil(len(self.dataset) / self.batch_size)
159
160 def __iter__(self):
161 while True:
162 self.init_epoch()
163 for idx, minibatch in enumerate(self.batches):
164 # fast-forward if loaded from state
165 if self._iterations_this_epoch > idx:
166 continue
167 self.iterations += 1
168 self._iterations_this_epoch += 1
169 if self.sort_within_batch:
170 # NOTE: `rnn.pack_padded_sequence` requires that a minibatch
171 # be sorted by decreasing order, which requires reversing
172 # relative to typical sort keys
173 if self.sort:
174 minibatch.reverse()
175 else:
176 minibatch.sort(key=self.sort_key, reverse=True)
177 yield Batch(minibatch, self.dataset, self.device,
178 self.train)
179 if not self.repeat:
180 raise StopIteration
181
182 def state_dict(self):
183 return {
184 "iterations": self.iterations,
185 "iterations_this_epoch": self._iterations_this_epoch,
186 "random_state_this_epoch": self._random_state_this_epoch}
187
188 def load_state_dict(self, state_dict):
189 self.iterations = state_dict["iterations"]
190 self._iterations_this_epoch = state_dict["iterations_this_epoch"]
191 self._random_state_this_epoch = state_dict["random_state_this_epoch"]
192 self._restored_from_state = True
193
194
195 class BPTTIterator(Iterator):
196 """Defines an iterator for language modeling tasks that use BPTT.
197
198 Provides contiguous streams of examples together with targets that are
199 one timestep further forward, for language modeling training with
200 backpropagation through time (BPTT). Expects a Dataset with a single
201 example and a single field called 'text' and produces Batches with text and
202 target attributes.
203
204 Attributes:
205 dataset: The Dataset object to load Examples from.
206 batch_size: Batch size.
207 bptt_len: Length of sequences for backpropagation through time.
208 sort_key: A key to use for sorting examples in order to batch together
209 examples with similar lengths and minimize padding. The sort_key
210 provided to the Iterator constructor overrides the sort_key
211 attribute of the Dataset, or defers to it if None.
212 train: Whether the iterator represents a train set.
213 repeat: Whether to repeat the iterator for multiple epochs.
214 shuffle: Whether to shuffle examples between epochs.
215 sort: Whether to sort examples according to self.sort_key.
216 Note that repeat, shuffle, and sort default to train, train, and
217 (not train).
218 device: Device to create batches on. Use -1 for CPU and None for the
219 currently active GPU device.
220 """
221
222 def __init__(self, dataset, batch_size, bptt_len, **kwargs):
223 self.bptt_len = bptt_len
224 super(BPTTIterator, self).__init__(dataset, batch_size, **kwargs)
225
226 def __len__(self):
227 return math.ceil((len(self.dataset[0].text) / self.batch_size - 1) /
228 self.bptt_len)
229
230 def __iter__(self):
231 text = self.dataset[0].text
232 TEXT = self.dataset.fields['text']
233 TEXT.eos_token = None
234 text = text + ([TEXT.pad_token] * int(math.ceil(len(text) / self.batch_size) *
235 self.batch_size - len(text)))
236 data = TEXT.numericalize(
237 [text], device=self.device, train=self.train)
238 data = data.view(self.batch_size, -1).t().contiguous()
239 dataset = Dataset(examples=self.dataset.examples, fields=[
240 ('text', TEXT), ('target', TEXT)])
241 while True:
242 for i in range(0, len(self) * self.bptt_len, self.bptt_len):
243 seq_len = min(self.bptt_len, len(data) - i - 1)
244 yield Batch.fromvars(
245 dataset, self.batch_size, train=self.train,
246 text=data[i:i + seq_len],
247 target=data[i + 1:i + 1 + seq_len])
248 if not self.repeat:
249 raise StopIteration
250
251
252 class BucketIterator(Iterator):
253 """Defines an iterator that batches examples of similar lengths together.
254
255 Minimizes amount of padding needed while producing freshly shuffled
256 batches for each new epoch. See pool for the bucketing procedure used.
257 """
258
259 def create_batches(self):
260 if self.sort:
261 self.batches = batch(self.data(), self.batch_size,
262 self.batch_size_fn)
263 else:
264 self.batches = pool(self.data(), self.batch_size,
265 self.sort_key, self.batch_size_fn,
266 random_shuffler=self.random_shuffler)
267
268
269 def batch(data, batch_size, batch_size_fn=lambda new, count, sofar: count):
270 """Yield elements from data in chunks of batch_size."""
271 minibatch, size_so_far = [], 0
272 for ex in data:
273 minibatch.append(ex)
274 size_so_far = batch_size_fn(ex, len(minibatch), size_so_far)
275 if size_so_far == batch_size:
276 yield minibatch
277 minibatch, size_so_far = [], 0
278 elif size_so_far > batch_size:
279 yield minibatch[:-1]
280 minibatch, size_so_far = minibatch[-1:], batch_size_fn(ex, 1, 0)
281 if minibatch:
282 yield minibatch
283
284
285 def pool(data, batch_size, key, batch_size_fn=lambda new, count, sofar: count,
286 random_shuffler=None):
287 """Sort within buckets, then batch, then shuffle batches.
288
289 Partitions data into chunks of size 100*batch_size, sorts examples within
290 each chunk using sort_key, then batch these examples and shuffle the
291 batches.
292 """
293 if random_shuffler is None:
294 random_shuffler = random.shuffle
295 for p in batch(data, batch_size * 100, batch_size_fn):
296 p_batch = batch(sorted(p, key=key), batch_size, batch_size_fn)
297 for b in random_shuffler(list(p_batch)):
298 yield b
299
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchtext/data/iterator.py b/torchtext/data/iterator.py
--- a/torchtext/data/iterator.py
+++ b/torchtext/data/iterator.py
@@ -72,7 +72,7 @@
"""
def __init__(self, dataset, batch_size, sort_key=None, device=None,
- batch_size_fn=lambda new, count, sofar: count, train=True,
+ batch_size_fn=None, train=True,
repeat=None, shuffle=None, sort=None,
sort_within_batch=None):
self.batch_size, self.train, self.dataset = batch_size, train, dataset
@@ -155,6 +155,8 @@
return self.iterations / len(self)
def __len__(self):
+ if self.batch_size_fn is not None:
+ raise NotImplementedError
return math.ceil(len(self.dataset) / self.batch_size)
def __iter__(self):
@@ -266,8 +268,11 @@
random_shuffler=self.random_shuffler)
-def batch(data, batch_size, batch_size_fn=lambda new, count, sofar: count):
+def batch(data, batch_size, batch_size_fn=None):
"""Yield elements from data in chunks of batch_size."""
+ if batch_size_fn is None:
+ def batch_size_fn(new, count, sofar):
+ return count
minibatch, size_so_far = [], 0
for ex in data:
minibatch.append(ex)
| {"golden_diff": "diff --git a/torchtext/data/iterator.py b/torchtext/data/iterator.py\n--- a/torchtext/data/iterator.py\n+++ b/torchtext/data/iterator.py\n@@ -72,7 +72,7 @@\n \"\"\"\n \n def __init__(self, dataset, batch_size, sort_key=None, device=None,\n- batch_size_fn=lambda new, count, sofar: count, train=True,\n+ batch_size_fn=None, train=True,\n repeat=None, shuffle=None, sort=None,\n sort_within_batch=None):\n self.batch_size, self.train, self.dataset = batch_size, train, dataset\n@@ -155,6 +155,8 @@\n return self.iterations / len(self)\n \n def __len__(self):\n+ if self.batch_size_fn is not None:\n+ raise NotImplementedError\n return math.ceil(len(self.dataset) / self.batch_size)\n \n def __iter__(self):\n@@ -266,8 +268,11 @@\n random_shuffler=self.random_shuffler)\n \n \n-def batch(data, batch_size, batch_size_fn=lambda new, count, sofar: count):\n+def batch(data, batch_size, batch_size_fn=None):\n \"\"\"Yield elements from data in chunks of batch_size.\"\"\"\n+ if batch_size_fn is None:\n+ def batch_size_fn(new, count, sofar):\n+ return count\n minibatch, size_so_far = [], 0\n for ex in data:\n minibatch.append(ex)\n", "issue": "len of iterator incorrect for dynamic batching\nThe `__len__` method of `Iterator` (defined [here](https://github.com/pytorch/text/blob/master/torchtext/data/iterator.py#L157)) returns a wrong result for dynamic batching (i.e. if [batch_size_fn](https://github.com/pytorch/text/blob/master/torchtext/data/iterator.py#L50) is not None).\r\n\r\nFor example if we set `batch_size_fn` to\r\n```\r\nlambda x, n, b: b + len(x.text)\r\n```\r\nthere might be more or fewer batches in the dataset than the `__len__` outputs. \n", "before_files": [{"content": "from __future__ import division\n\nimport math\nimport random\nfrom contextlib import contextmanager\nfrom copy import deepcopy\n\nfrom .batch import Batch\nfrom .dataset import Dataset\n\n\nclass RandomShuffler(object):\n \"\"\"Use random functions while keeping track of the random state to make it\n reproducible and deterministic.\"\"\"\n\n def __init__(self, random_state=None):\n self._random_state = random_state\n if self._random_state is None:\n self._random_state = random.getstate()\n\n @contextmanager\n def use_internal_state(self):\n \"\"\"Use a specific RNG state.\"\"\"\n old_state = random.getstate()\n random.setstate(self._random_state)\n yield\n self._random_state = random.getstate()\n random.setstate(old_state)\n\n @property\n def random_state(self):\n return deepcopy(self._random_state)\n\n @random_state.setter\n def random_state(self, s):\n self._random_state = s\n\n def __call__(self, data):\n \"\"\"Shuffle and return a new list.\"\"\"\n with self.use_internal_state():\n return random.sample(data, len(data))\n\n\nclass Iterator(object):\n \"\"\"Defines an iterator that loads batches of data from a Dataset.\n\n Attributes:\n dataset: The Dataset object to load Examples from.\n batch_size: Batch size.\n batch_size_fn: Function of three arguments (new example to add, current\n count of examples in the batch, and current effective batch size)\n that returns the new effective batch size resulting from adding\n that example to a batch. This is useful for dynamic batching, where\n this function would add to the current effective batch size the\n number of tokens in the new example.\n sort_key: A key to use for sorting examples in order to batch together\n examples with similar lengths and minimize padding. 
The sort_key\n provided to the Iterator constructor overrides the sort_key\n attribute of the Dataset, or defers to it if None.\n train: Whether the iterator represents a train set.\n repeat: Whether to repeat the iterator for multiple epochs.\n shuffle: Whether to shuffle examples between epochs.\n sort: Whether to sort examples according to self.sort_key.\n Note that repeat, shuffle, and sort default to train, train, and\n (not train).\n sort_within_batch: Whether to sort (in descending order according to\n self.sort_key) within each batch. If None, defaults to self.sort.\n If self.sort is True and this is False, the batch is left in the\n original (ascending) sorted order.\n device: Device to create batches on. Use -1 for CPU and None for the\n currently active GPU device.\n \"\"\"\n\n def __init__(self, dataset, batch_size, sort_key=None, device=None,\n batch_size_fn=lambda new, count, sofar: count, train=True,\n repeat=None, shuffle=None, sort=None,\n sort_within_batch=None):\n self.batch_size, self.train, self.dataset = batch_size, train, dataset\n self.batch_size_fn = batch_size_fn\n self.iterations = 0\n self.repeat = train if repeat is None else repeat\n self.shuffle = train if shuffle is None else shuffle\n self.sort = not train if sort is None else sort\n if sort_within_batch is None:\n self.sort_within_batch = self.sort\n else:\n self.sort_within_batch = sort_within_batch\n if sort_key is None:\n self.sort_key = dataset.sort_key\n else:\n self.sort_key = sort_key\n self.device = device\n\n self.random_shuffler = RandomShuffler()\n\n # For state loading/saving only\n self._iterations_this_epoch = 0\n self._random_state_this_epoch = None\n self._restored_from_state = False\n\n @classmethod\n def splits(cls, datasets, batch_sizes=None, **kwargs):\n \"\"\"Create Iterator objects for multiple splits of a dataset.\n\n Arguments:\n datasets: Tuple of Dataset objects corresponding to the splits. 
The\n first such object should be the train set.\n batch_sizes: Tuple of batch sizes to use for the different splits,\n or None to use the same batch_size for all splits.\n Remaining keyword arguments: Passed to the constructor of the\n iterator class being used.\n \"\"\"\n if batch_sizes is None:\n batch_sizes = [kwargs.pop('batch_size')] * len(datasets)\n ret = []\n for i in range(len(datasets)):\n train = i == 0\n ret.append(cls(\n datasets[i], batch_size=batch_sizes[i], train=train, **kwargs))\n return tuple(ret)\n\n def data(self):\n \"\"\"Return the examples in the dataset in order, sorted, or shuffled.\"\"\"\n if self.sort:\n xs = sorted(self.dataset, key=self.sort_key)\n elif self.shuffle:\n xs = [self.dataset[i] for i in self.random_shuffler(range(len(self.dataset)))]\n else:\n xs = self.dataset\n return xs\n\n def init_epoch(self):\n \"\"\"Set up the batch generator for a new epoch.\"\"\"\n\n if self._restored_from_state:\n self.random_shuffler.random_state = self._random_state_this_epoch\n else:\n self._random_state_this_epoch = self.random_shuffler.random_state\n\n self.create_batches()\n\n if self._restored_from_state:\n self._restored_from_state = False\n else:\n self._iterations_this_epoch = 0\n\n if not self.repeat:\n self.iterations = 0\n\n def create_batches(self):\n self.batches = batch(self.data(), self.batch_size, self.batch_size_fn)\n\n @property\n def epoch(self):\n return self.iterations / len(self)\n\n def __len__(self):\n return math.ceil(len(self.dataset) / self.batch_size)\n\n def __iter__(self):\n while True:\n self.init_epoch()\n for idx, minibatch in enumerate(self.batches):\n # fast-forward if loaded from state\n if self._iterations_this_epoch > idx:\n continue\n self.iterations += 1\n self._iterations_this_epoch += 1\n if self.sort_within_batch:\n # NOTE: `rnn.pack_padded_sequence` requires that a minibatch\n # be sorted by decreasing order, which requires reversing\n # relative to typical sort keys\n if self.sort:\n minibatch.reverse()\n else:\n minibatch.sort(key=self.sort_key, reverse=True)\n yield Batch(minibatch, self.dataset, self.device,\n self.train)\n if not self.repeat:\n raise StopIteration\n\n def state_dict(self):\n return {\n \"iterations\": self.iterations,\n \"iterations_this_epoch\": self._iterations_this_epoch,\n \"random_state_this_epoch\": self._random_state_this_epoch}\n\n def load_state_dict(self, state_dict):\n self.iterations = state_dict[\"iterations\"]\n self._iterations_this_epoch = state_dict[\"iterations_this_epoch\"]\n self._random_state_this_epoch = state_dict[\"random_state_this_epoch\"]\n self._restored_from_state = True\n\n\nclass BPTTIterator(Iterator):\n \"\"\"Defines an iterator for language modeling tasks that use BPTT.\n\n Provides contiguous streams of examples together with targets that are\n one timestep further forward, for language modeling training with\n backpropagation through time (BPTT). Expects a Dataset with a single\n example and a single field called 'text' and produces Batches with text and\n target attributes.\n\n Attributes:\n dataset: The Dataset object to load Examples from.\n batch_size: Batch size.\n bptt_len: Length of sequences for backpropagation through time.\n sort_key: A key to use for sorting examples in order to batch together\n examples with similar lengths and minimize padding. 
The sort_key\n provided to the Iterator constructor overrides the sort_key\n attribute of the Dataset, or defers to it if None.\n train: Whether the iterator represents a train set.\n repeat: Whether to repeat the iterator for multiple epochs.\n shuffle: Whether to shuffle examples between epochs.\n sort: Whether to sort examples according to self.sort_key.\n Note that repeat, shuffle, and sort default to train, train, and\n (not train).\n device: Device to create batches on. Use -1 for CPU and None for the\n currently active GPU device.\n \"\"\"\n\n def __init__(self, dataset, batch_size, bptt_len, **kwargs):\n self.bptt_len = bptt_len\n super(BPTTIterator, self).__init__(dataset, batch_size, **kwargs)\n\n def __len__(self):\n return math.ceil((len(self.dataset[0].text) / self.batch_size - 1) /\n self.bptt_len)\n\n def __iter__(self):\n text = self.dataset[0].text\n TEXT = self.dataset.fields['text']\n TEXT.eos_token = None\n text = text + ([TEXT.pad_token] * int(math.ceil(len(text) / self.batch_size) *\n self.batch_size - len(text)))\n data = TEXT.numericalize(\n [text], device=self.device, train=self.train)\n data = data.view(self.batch_size, -1).t().contiguous()\n dataset = Dataset(examples=self.dataset.examples, fields=[\n ('text', TEXT), ('target', TEXT)])\n while True:\n for i in range(0, len(self) * self.bptt_len, self.bptt_len):\n seq_len = min(self.bptt_len, len(data) - i - 1)\n yield Batch.fromvars(\n dataset, self.batch_size, train=self.train,\n text=data[i:i + seq_len],\n target=data[i + 1:i + 1 + seq_len])\n if not self.repeat:\n raise StopIteration\n\n\nclass BucketIterator(Iterator):\n \"\"\"Defines an iterator that batches examples of similar lengths together.\n\n Minimizes amount of padding needed while producing freshly shuffled\n batches for each new epoch. 
See pool for the bucketing procedure used.\n \"\"\"\n\n def create_batches(self):\n if self.sort:\n self.batches = batch(self.data(), self.batch_size,\n self.batch_size_fn)\n else:\n self.batches = pool(self.data(), self.batch_size,\n self.sort_key, self.batch_size_fn,\n random_shuffler=self.random_shuffler)\n\n\ndef batch(data, batch_size, batch_size_fn=lambda new, count, sofar: count):\n \"\"\"Yield elements from data in chunks of batch_size.\"\"\"\n minibatch, size_so_far = [], 0\n for ex in data:\n minibatch.append(ex)\n size_so_far = batch_size_fn(ex, len(minibatch), size_so_far)\n if size_so_far == batch_size:\n yield minibatch\n minibatch, size_so_far = [], 0\n elif size_so_far > batch_size:\n yield minibatch[:-1]\n minibatch, size_so_far = minibatch[-1:], batch_size_fn(ex, 1, 0)\n if minibatch:\n yield minibatch\n\n\ndef pool(data, batch_size, key, batch_size_fn=lambda new, count, sofar: count,\n random_shuffler=None):\n \"\"\"Sort within buckets, then batch, then shuffle batches.\n\n Partitions data into chunks of size 100*batch_size, sorts examples within\n each chunk using sort_key, then batch these examples and shuffle the\n batches.\n \"\"\"\n if random_shuffler is None:\n random_shuffler = random.shuffle\n for p in batch(data, batch_size * 100, batch_size_fn):\n p_batch = batch(sorted(p, key=key), batch_size, batch_size_fn)\n for b in random_shuffler(list(p_batch)):\n yield b\n", "path": "torchtext/data/iterator.py"}], "after_files": [{"content": "from __future__ import division\n\nimport math\nimport random\nfrom contextlib import contextmanager\nfrom copy import deepcopy\n\nfrom .batch import Batch\nfrom .dataset import Dataset\n\n\nclass RandomShuffler(object):\n \"\"\"Use random functions while keeping track of the random state to make it\n reproducible and deterministic.\"\"\"\n\n def __init__(self, random_state=None):\n self._random_state = random_state\n if self._random_state is None:\n self._random_state = random.getstate()\n\n @contextmanager\n def use_internal_state(self):\n \"\"\"Use a specific RNG state.\"\"\"\n old_state = random.getstate()\n random.setstate(self._random_state)\n yield\n self._random_state = random.getstate()\n random.setstate(old_state)\n\n @property\n def random_state(self):\n return deepcopy(self._random_state)\n\n @random_state.setter\n def random_state(self, s):\n self._random_state = s\n\n def __call__(self, data):\n \"\"\"Shuffle and return a new list.\"\"\"\n with self.use_internal_state():\n return random.sample(data, len(data))\n\n\nclass Iterator(object):\n \"\"\"Defines an iterator that loads batches of data from a Dataset.\n\n Attributes:\n dataset: The Dataset object to load Examples from.\n batch_size: Batch size.\n batch_size_fn: Function of three arguments (new example to add, current\n count of examples in the batch, and current effective batch size)\n that returns the new effective batch size resulting from adding\n that example to a batch. This is useful for dynamic batching, where\n this function would add to the current effective batch size the\n number of tokens in the new example.\n sort_key: A key to use for sorting examples in order to batch together\n examples with similar lengths and minimize padding. 
The sort_key\n provided to the Iterator constructor overrides the sort_key\n attribute of the Dataset, or defers to it if None.\n train: Whether the iterator represents a train set.\n repeat: Whether to repeat the iterator for multiple epochs.\n shuffle: Whether to shuffle examples between epochs.\n sort: Whether to sort examples according to self.sort_key.\n Note that repeat, shuffle, and sort default to train, train, and\n (not train).\n sort_within_batch: Whether to sort (in descending order according to\n self.sort_key) within each batch. If None, defaults to self.sort.\n If self.sort is True and this is False, the batch is left in the\n original (ascending) sorted order.\n device: Device to create batches on. Use -1 for CPU and None for the\n currently active GPU device.\n \"\"\"\n\n def __init__(self, dataset, batch_size, sort_key=None, device=None,\n batch_size_fn=None, train=True,\n repeat=None, shuffle=None, sort=None,\n sort_within_batch=None):\n self.batch_size, self.train, self.dataset = batch_size, train, dataset\n self.batch_size_fn = batch_size_fn\n self.iterations = 0\n self.repeat = train if repeat is None else repeat\n self.shuffle = train if shuffle is None else shuffle\n self.sort = not train if sort is None else sort\n if sort_within_batch is None:\n self.sort_within_batch = self.sort\n else:\n self.sort_within_batch = sort_within_batch\n if sort_key is None:\n self.sort_key = dataset.sort_key\n else:\n self.sort_key = sort_key\n self.device = device\n\n self.random_shuffler = RandomShuffler()\n\n # For state loading/saving only\n self._iterations_this_epoch = 0\n self._random_state_this_epoch = None\n self._restored_from_state = False\n\n @classmethod\n def splits(cls, datasets, batch_sizes=None, **kwargs):\n \"\"\"Create Iterator objects for multiple splits of a dataset.\n\n Arguments:\n datasets: Tuple of Dataset objects corresponding to the splits. 
The\n first such object should be the train set.\n batch_sizes: Tuple of batch sizes to use for the different splits,\n or None to use the same batch_size for all splits.\n Remaining keyword arguments: Passed to the constructor of the\n iterator class being used.\n \"\"\"\n if batch_sizes is None:\n batch_sizes = [kwargs.pop('batch_size')] * len(datasets)\n ret = []\n for i in range(len(datasets)):\n train = i == 0\n ret.append(cls(\n datasets[i], batch_size=batch_sizes[i], train=train, **kwargs))\n return tuple(ret)\n\n def data(self):\n \"\"\"Return the examples in the dataset in order, sorted, or shuffled.\"\"\"\n if self.sort:\n xs = sorted(self.dataset, key=self.sort_key)\n elif self.shuffle:\n xs = [self.dataset[i] for i in self.random_shuffler(range(len(self.dataset)))]\n else:\n xs = self.dataset\n return xs\n\n def init_epoch(self):\n \"\"\"Set up the batch generator for a new epoch.\"\"\"\n\n if self._restored_from_state:\n self.random_shuffler.random_state = self._random_state_this_epoch\n else:\n self._random_state_this_epoch = self.random_shuffler.random_state\n\n self.create_batches()\n\n if self._restored_from_state:\n self._restored_from_state = False\n else:\n self._iterations_this_epoch = 0\n\n if not self.repeat:\n self.iterations = 0\n\n def create_batches(self):\n self.batches = batch(self.data(), self.batch_size, self.batch_size_fn)\n\n @property\n def epoch(self):\n return self.iterations / len(self)\n\n def __len__(self):\n if self.batch_size_fn is not None:\n raise NotImplementedError\n return math.ceil(len(self.dataset) / self.batch_size)\n\n def __iter__(self):\n while True:\n self.init_epoch()\n for idx, minibatch in enumerate(self.batches):\n # fast-forward if loaded from state\n if self._iterations_this_epoch > idx:\n continue\n self.iterations += 1\n self._iterations_this_epoch += 1\n if self.sort_within_batch:\n # NOTE: `rnn.pack_padded_sequence` requires that a minibatch\n # be sorted by decreasing order, which requires reversing\n # relative to typical sort keys\n if self.sort:\n minibatch.reverse()\n else:\n minibatch.sort(key=self.sort_key, reverse=True)\n yield Batch(minibatch, self.dataset, self.device,\n self.train)\n if not self.repeat:\n raise StopIteration\n\n def state_dict(self):\n return {\n \"iterations\": self.iterations,\n \"iterations_this_epoch\": self._iterations_this_epoch,\n \"random_state_this_epoch\": self._random_state_this_epoch}\n\n def load_state_dict(self, state_dict):\n self.iterations = state_dict[\"iterations\"]\n self._iterations_this_epoch = state_dict[\"iterations_this_epoch\"]\n self._random_state_this_epoch = state_dict[\"random_state_this_epoch\"]\n self._restored_from_state = True\n\n\nclass BPTTIterator(Iterator):\n \"\"\"Defines an iterator for language modeling tasks that use BPTT.\n\n Provides contiguous streams of examples together with targets that are\n one timestep further forward, for language modeling training with\n backpropagation through time (BPTT). Expects a Dataset with a single\n example and a single field called 'text' and produces Batches with text and\n target attributes.\n\n Attributes:\n dataset: The Dataset object to load Examples from.\n batch_size: Batch size.\n bptt_len: Length of sequences for backpropagation through time.\n sort_key: A key to use for sorting examples in order to batch together\n examples with similar lengths and minimize padding. 
The sort_key\n provided to the Iterator constructor overrides the sort_key\n attribute of the Dataset, or defers to it if None.\n train: Whether the iterator represents a train set.\n repeat: Whether to repeat the iterator for multiple epochs.\n shuffle: Whether to shuffle examples between epochs.\n sort: Whether to sort examples according to self.sort_key.\n Note that repeat, shuffle, and sort default to train, train, and\n (not train).\n device: Device to create batches on. Use -1 for CPU and None for the\n currently active GPU device.\n \"\"\"\n\n def __init__(self, dataset, batch_size, bptt_len, **kwargs):\n self.bptt_len = bptt_len\n super(BPTTIterator, self).__init__(dataset, batch_size, **kwargs)\n\n def __len__(self):\n return math.ceil((len(self.dataset[0].text) / self.batch_size - 1) /\n self.bptt_len)\n\n def __iter__(self):\n text = self.dataset[0].text\n TEXT = self.dataset.fields['text']\n TEXT.eos_token = None\n text = text + ([TEXT.pad_token] * int(math.ceil(len(text) / self.batch_size) *\n self.batch_size - len(text)))\n data = TEXT.numericalize(\n [text], device=self.device, train=self.train)\n data = data.view(self.batch_size, -1).t().contiguous()\n dataset = Dataset(examples=self.dataset.examples, fields=[\n ('text', TEXT), ('target', TEXT)])\n while True:\n for i in range(0, len(self) * self.bptt_len, self.bptt_len):\n seq_len = min(self.bptt_len, len(data) - i - 1)\n yield Batch.fromvars(\n dataset, self.batch_size, train=self.train,\n text=data[i:i + seq_len],\n target=data[i + 1:i + 1 + seq_len])\n if not self.repeat:\n raise StopIteration\n\n\nclass BucketIterator(Iterator):\n \"\"\"Defines an iterator that batches examples of similar lengths together.\n\n Minimizes amount of padding needed while producing freshly shuffled\n batches for each new epoch. See pool for the bucketing procedure used.\n \"\"\"\n\n def create_batches(self):\n if self.sort:\n self.batches = batch(self.data(), self.batch_size,\n self.batch_size_fn)\n else:\n self.batches = pool(self.data(), self.batch_size,\n self.sort_key, self.batch_size_fn,\n random_shuffler=self.random_shuffler)\n\n\ndef batch(data, batch_size, batch_size_fn=None):\n \"\"\"Yield elements from data in chunks of batch_size.\"\"\"\n if batch_size_fn is None:\n def batch_size_fn(new, count, sofar):\n return count\n minibatch, size_so_far = [], 0\n for ex in data:\n minibatch.append(ex)\n size_so_far = batch_size_fn(ex, len(minibatch), size_so_far)\n if size_so_far == batch_size:\n yield minibatch\n minibatch, size_so_far = [], 0\n elif size_so_far > batch_size:\n yield minibatch[:-1]\n minibatch, size_so_far = minibatch[-1:], batch_size_fn(ex, 1, 0)\n if minibatch:\n yield minibatch\n\n\ndef pool(data, batch_size, key, batch_size_fn=lambda new, count, sofar: count,\n random_shuffler=None):\n \"\"\"Sort within buckets, then batch, then shuffle batches.\n\n Partitions data into chunks of size 100*batch_size, sorts examples within\n each chunk using sort_key, then batch these examples and shuffle the\n batches.\n \"\"\"\n if random_shuffler is None:\n random_shuffler = random.shuffle\n for p in batch(data, batch_size * 100, batch_size_fn):\n p_batch = batch(sorted(p, key=key), batch_size, batch_size_fn)\n for b in random_shuffler(list(p_batch)):\n yield b\n", "path": "torchtext/data/iterator.py"}]} | 3,743 | 328 |
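The torchtext entry that closes above replaces the lambda default of `batch_size_fn` in the `batch()` helper with an inner `def`; copying the patched helper out verbatim makes its chunking behaviour easy to verify (the expected output in the final comment is my own check, not part of the entry):

```python
def batch(data, batch_size, batch_size_fn=None):
    """Yield elements from data in chunks of batch_size (patched version from the entry above)."""
    if batch_size_fn is None:
        def batch_size_fn(new, count, sofar):
            return count
    minibatch, size_so_far = [], 0
    for ex in data:
        minibatch.append(ex)
        size_so_far = batch_size_fn(ex, len(minibatch), size_so_far)
        if size_so_far == batch_size:
            yield minibatch
            minibatch, size_so_far = [], 0
        elif size_so_far > batch_size:
            yield minibatch[:-1]
            minibatch, size_so_far = minibatch[-1:], batch_size_fn(ex, 1, 0)
    if minibatch:
        yield minibatch

print(list(batch(range(7), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```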
gh_patches_debug_2582 | rasdani/github-patches | git_diff | azavea__raster-vision-1586 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Same explanation for SlidingWindowGeoDataset and RandomWindowGeoDataset
## 📚 Documentation
<!-- A clear and concise description of what content in https://docs.rastervision.io/ is an issue.-->
> The SlidingWindowGeoDataset allows reading the scene by sampling random window sizes and locations.
The same description is used to explain both SlidingWindowGeoDataset and RandomWindowGeoDataset. It can be found here: https://docs.rastervision.io/en/latest/tutorials/sampling_training_data.html
--- END ISSUE ---
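For context on the distinction the issue wants the docs to draw, here is a schematic sketch of the two sampling strategies; it is illustrative pseudo-Python only and does not use Raster Vision's actual API:

```python
import random

def sliding_windows(extent_h, extent_w, size, stride):
    """Deterministically cover the scene with fixed-size windows (SlidingWindow-style)."""
    for y in range(0, extent_h - size + 1, stride):
        for x in range(0, extent_w - size + 1, stride):
            yield (x, y, size, size)

def random_windows(extent_h, extent_w, min_size, max_size, n):
    """Sample n windows with random sizes and locations (RandomWindow-style)."""
    for _ in range(n):
        size = random.randint(min_size, max_size)
        y = random.randint(0, extent_h - size)
        x = random.randint(0, extent_w - size)
        yield (x, y, size, size)
```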
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `rastervision_core/rastervision/core/data/class_config.py`
Content:
```
1 from typing import List, Optional, Tuple, Union
2
3 from rastervision.pipeline.config import (Config, register_config, ConfigError,
4 Field, validator)
5 from rastervision.core.data.utils import color_to_triple, normalize_color
6
7 DEFAULT_NULL_CLASS_NAME = 'null'
8 DEFAULT_NULL_CLASS_COLOR = 'black'
9
10
11 @register_config('class_config')
12 class ClassConfig(Config):
13 """Configures the class names that are being predicted."""
14 names: List[str] = Field(
15 ...,
16 description='Names of classes. The i-th class in this list will have '
17 'class ID = i.')
18 colors: Optional[List[Union[str, Tuple]]] = Field(
19 None,
20 description=
21 ('Colors used to visualize classes. Can be color strings accepted by '
22 'matplotlib or RGB tuples. If None, a random color will be auto-generated '
23 'for each class.'))
24 null_class: Optional[str] = Field(
25 None,
26 description='Optional name of class in `names` to use as the null '
27 'class. This is used in semantic segmentation to represent the label '
28 'for imagery pixels that are NODATA or that are missing a label. '
29 f'If None and the class names include "{DEFAULT_NULL_CLASS_NAME}", '
30 'it will automatically be used as the null class. If None, and this '
31 'Config is part of a SemanticSegmentationConfig, a null class will be '
32 'added automatically.')
33
34 @validator('colors', always=True)
35 def validate_colors(cls, v: Optional[List[Union[str, Tuple]]],
36 values: dict) -> Optional[List[Union[str, Tuple]]]:
37 """Compare length w/ names. Also auto-generate if not specified."""
38 class_names = values['names']
39 class_colors = v
40 if class_colors is None:
41 class_colors = [color_to_triple() for _ in class_names]
42 elif len(class_names) != len(class_colors):
43 raise ConfigError(f'len(class_names) ({len(class_names)}) != '
44 f'len(class_colors) ({len(class_colors)})\n'
45 f'class_names: {class_names}\n'
46 f'class_colors: {class_colors}')
47 return class_colors
48
49 @validator('null_class', always=True)
50 def validate_null_class(cls, v: Optional[str],
51 values: dict) -> Optional[str]:
52 """Check if in names. If 'null' in names, use it as null class."""
53 names = values['names']
54 if v is None:
55 if DEFAULT_NULL_CLASS_NAME in names:
56 v = DEFAULT_NULL_CLASS_NAME
57 else:
58 if v not in names:
59 raise ConfigError(
60 f'The null_class, "{v}", must be in list of class names.')
61
62 # edge case
63 default_null_class_in_names = (DEFAULT_NULL_CLASS_NAME in names)
64 null_class_neq_default = (v != DEFAULT_NULL_CLASS_NAME)
65 if default_null_class_in_names and null_class_neq_default:
66 raise ConfigError(
67 f'"{DEFAULT_NULL_CLASS_NAME}" is in names but the '
68 f'specified null_class is something else ("{v}").')
69 return v
70
71 def get_class_id(self, name: str) -> int:
72 return self.names.index(name)
73
74 def get_name(self, id: int) -> str:
75 return self.names[id]
76
77 @property
78 def null_class_id(self) -> int:
79 if self.null_class is None:
80 raise ValueError('null_class is not set')
81 return self.get_class_id(self.null_class)
82
83 def get_color_to_class_id(self) -> dict:
84 return dict([(self.colors[i], i) for i in range(len(self.colors))])
85
86 def ensure_null_class(self) -> None:
87 """Add a null class if one isn't set. This method is idempotent."""
88 if self.null_class is not None:
89 return
90
91 null_class_name = DEFAULT_NULL_CLASS_NAME
92 null_class_color = DEFAULT_NULL_CLASS_COLOR
93
94 # This might seeem redundant given the null class validator above, but
95 # is actually important. Sometimes there can be multiple ClassConfig
96 # instances that reference the same list objects for names and colors
97 # (not clear why this happens). This means that
98 # each ensure_null_class() call will add to names and colors in each
99 # copy of ClassConfig but only set its own null_class, which makes this
100 # method() non-idempotent.
101 if null_class_name in self.names:
102 self.null_class = null_class_name
103 return
104
105 # use random color if default color is already taken
106 null_class_color_triple = color_to_triple(null_class_color)
107 all_color_triples = [
108 color_to_triple(c) if isinstance(c, str) else c
109 for c in self.colors
110 ]
111 if null_class_color_triple in all_color_triples:
112 null_class_color = color_to_triple()
113
114 self.names.append(null_class_name)
115 self.colors.append(null_class_color)
116 self.null_class = null_class_name
117
118 def __len__(self) -> int:
119 return len(self.names)
120
121 @property
122 def color_triples(self) -> List[Tuple[float, float, float]]:
123 color_triples = [normalize_color(c) for c in self.colors]
124 return color_triples
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/rastervision_core/rastervision/core/data/class_config.py b/rastervision_core/rastervision/core/data/class_config.py
--- a/rastervision_core/rastervision/core/data/class_config.py
+++ b/rastervision_core/rastervision/core/data/class_config.py
@@ -120,5 +120,6 @@
@property
def color_triples(self) -> List[Tuple[float, float, float]]:
+ """Class colors in a normalized form."""
color_triples = [normalize_color(c) for c in self.colors]
return color_triples
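The golden diff above only adds a docstring to `color_triples`; for readers unfamiliar with the class it documents, here is a minimal usage sketch based solely on the `ClassConfig` listing (the import path and pydantic-style construction are assumptions, and the auto-generated colors are not shown exactly):

```python
# Usage sketch based only on the ClassConfig listing above; the import path is
# an assumption (rastervision.core.data re-exporting ClassConfig).
from rastervision.core.data import ClassConfig

cc = ClassConfig(names=['building', 'background'])
cc.ensure_null_class()           # idempotent: adds a 'null' class if none is set
print(cc.names)                  # ['building', 'background', 'null']
print(cc.null_class)             # 'null'
print(cc.get_class_id('null'))   # 2
print(len(cc))                   # 3
print(cc.color_triples)          # three normalized (r, g, b) triples
```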
| {"golden_diff": "diff --git a/rastervision_core/rastervision/core/data/class_config.py b/rastervision_core/rastervision/core/data/class_config.py\n--- a/rastervision_core/rastervision/core/data/class_config.py\n+++ b/rastervision_core/rastervision/core/data/class_config.py\n@@ -120,5 +120,6 @@\n \n @property\n def color_triples(self) -> List[Tuple[float, float, float]]:\n+ \"\"\"Class colors in a normalized form.\"\"\"\n color_triples = [normalize_color(c) for c in self.colors]\n return color_triples\n", "issue": "Same explanation for SlidingWindowGeoDataset and RandomWindowGeoDataset\n## \ud83d\udcda Documentation\r\n\r\n<!-- A clear and concise description of what content in https://docs.rastervision.io/ is an issue.-->\r\n\r\n> The SlidingWindowGeoDataset allows reading the scene by sampling random window sizes and locations.\r\n\r\nThis description is same to explained both SlidingWindowGeoDataset and RandomWindowGeoDataset. This can be found here: https://docs.rastervision.io/en/latest/tutorials/sampling_training_data.html\n", "before_files": [{"content": "from typing import List, Optional, Tuple, Union\n\nfrom rastervision.pipeline.config import (Config, register_config, ConfigError,\n Field, validator)\nfrom rastervision.core.data.utils import color_to_triple, normalize_color\n\nDEFAULT_NULL_CLASS_NAME = 'null'\nDEFAULT_NULL_CLASS_COLOR = 'black'\n\n\n@register_config('class_config')\nclass ClassConfig(Config):\n \"\"\"Configures the class names that are being predicted.\"\"\"\n names: List[str] = Field(\n ...,\n description='Names of classes. The i-th class in this list will have '\n 'class ID = i.')\n colors: Optional[List[Union[str, Tuple]]] = Field(\n None,\n description=\n ('Colors used to visualize classes. Can be color strings accepted by '\n 'matplotlib or RGB tuples. If None, a random color will be auto-generated '\n 'for each class.'))\n null_class: Optional[str] = Field(\n None,\n description='Optional name of class in `names` to use as the null '\n 'class. This is used in semantic segmentation to represent the label '\n 'for imagery pixels that are NODATA or that are missing a label. '\n f'If None and the class names include \"{DEFAULT_NULL_CLASS_NAME}\", '\n 'it will automatically be used as the null class. If None, and this '\n 'Config is part of a SemanticSegmentationConfig, a null class will be '\n 'added automatically.')\n\n @validator('colors', always=True)\n def validate_colors(cls, v: Optional[List[Union[str, Tuple]]],\n values: dict) -> Optional[List[Union[str, Tuple]]]:\n \"\"\"Compare length w/ names. Also auto-generate if not specified.\"\"\"\n class_names = values['names']\n class_colors = v\n if class_colors is None:\n class_colors = [color_to_triple() for _ in class_names]\n elif len(class_names) != len(class_colors):\n raise ConfigError(f'len(class_names) ({len(class_names)}) != '\n f'len(class_colors) ({len(class_colors)})\\n'\n f'class_names: {class_names}\\n'\n f'class_colors: {class_colors}')\n return class_colors\n\n @validator('null_class', always=True)\n def validate_null_class(cls, v: Optional[str],\n values: dict) -> Optional[str]:\n \"\"\"Check if in names. 
If 'null' in names, use it as null class.\"\"\"\n names = values['names']\n if v is None:\n if DEFAULT_NULL_CLASS_NAME in names:\n v = DEFAULT_NULL_CLASS_NAME\n else:\n if v not in names:\n raise ConfigError(\n f'The null_class, \"{v}\", must be in list of class names.')\n\n # edge case\n default_null_class_in_names = (DEFAULT_NULL_CLASS_NAME in names)\n null_class_neq_default = (v != DEFAULT_NULL_CLASS_NAME)\n if default_null_class_in_names and null_class_neq_default:\n raise ConfigError(\n f'\"{DEFAULT_NULL_CLASS_NAME}\" is in names but the '\n f'specified null_class is something else (\"{v}\").')\n return v\n\n def get_class_id(self, name: str) -> int:\n return self.names.index(name)\n\n def get_name(self, id: int) -> str:\n return self.names[id]\n\n @property\n def null_class_id(self) -> int:\n if self.null_class is None:\n raise ValueError('null_class is not set')\n return self.get_class_id(self.null_class)\n\n def get_color_to_class_id(self) -> dict:\n return dict([(self.colors[i], i) for i in range(len(self.colors))])\n\n def ensure_null_class(self) -> None:\n \"\"\"Add a null class if one isn't set. This method is idempotent.\"\"\"\n if self.null_class is not None:\n return\n\n null_class_name = DEFAULT_NULL_CLASS_NAME\n null_class_color = DEFAULT_NULL_CLASS_COLOR\n\n # This might seeem redundant given the null class validator above, but\n # is actually important. Sometimes there can be multiple ClassConfig\n # instances that reference the same list objects for names and colors\n # (not clear why this happens). This means that\n # each ensure_null_class() call will add to names and colors in each\n # copy of ClassConfig but only set its own null_class, which makes this\n # method() non-idempotent.\n if null_class_name in self.names:\n self.null_class = null_class_name\n return\n\n # use random color if default color is already taken\n null_class_color_triple = color_to_triple(null_class_color)\n all_color_triples = [\n color_to_triple(c) if isinstance(c, str) else c\n for c in self.colors\n ]\n if null_class_color_triple in all_color_triples:\n null_class_color = color_to_triple()\n\n self.names.append(null_class_name)\n self.colors.append(null_class_color)\n self.null_class = null_class_name\n\n def __len__(self) -> int:\n return len(self.names)\n\n @property\n def color_triples(self) -> List[Tuple[float, float, float]]:\n color_triples = [normalize_color(c) for c in self.colors]\n return color_triples\n", "path": "rastervision_core/rastervision/core/data/class_config.py"}], "after_files": [{"content": "from typing import List, Optional, Tuple, Union\n\nfrom rastervision.pipeline.config import (Config, register_config, ConfigError,\n Field, validator)\nfrom rastervision.core.data.utils import color_to_triple, normalize_color\n\nDEFAULT_NULL_CLASS_NAME = 'null'\nDEFAULT_NULL_CLASS_COLOR = 'black'\n\n\n@register_config('class_config')\nclass ClassConfig(Config):\n \"\"\"Configures the class names that are being predicted.\"\"\"\n names: List[str] = Field(\n ...,\n description='Names of classes. The i-th class in this list will have '\n 'class ID = i.')\n colors: Optional[List[Union[str, Tuple]]] = Field(\n None,\n description=\n ('Colors used to visualize classes. Can be color strings accepted by '\n 'matplotlib or RGB tuples. If None, a random color will be auto-generated '\n 'for each class.'))\n null_class: Optional[str] = Field(\n None,\n description='Optional name of class in `names` to use as the null '\n 'class. 
This is used in semantic segmentation to represent the label '\n 'for imagery pixels that are NODATA or that are missing a label. '\n f'If None and the class names include \"{DEFAULT_NULL_CLASS_NAME}\", '\n 'it will automatically be used as the null class. If None, and this '\n 'Config is part of a SemanticSegmentationConfig, a null class will be '\n 'added automatically.')\n\n @validator('colors', always=True)\n def validate_colors(cls, v: Optional[List[Union[str, Tuple]]],\n values: dict) -> Optional[List[Union[str, Tuple]]]:\n \"\"\"Compare length w/ names. Also auto-generate if not specified.\"\"\"\n class_names = values['names']\n class_colors = v\n if class_colors is None:\n class_colors = [color_to_triple() for _ in class_names]\n elif len(class_names) != len(class_colors):\n raise ConfigError(f'len(class_names) ({len(class_names)}) != '\n f'len(class_colors) ({len(class_colors)})\\n'\n f'class_names: {class_names}\\n'\n f'class_colors: {class_colors}')\n return class_colors\n\n @validator('null_class', always=True)\n def validate_null_class(cls, v: Optional[str],\n values: dict) -> Optional[str]:\n \"\"\"Check if in names. If 'null' in names, use it as null class.\"\"\"\n names = values['names']\n if v is None:\n if DEFAULT_NULL_CLASS_NAME in names:\n v = DEFAULT_NULL_CLASS_NAME\n else:\n if v not in names:\n raise ConfigError(\n f'The null_class, \"{v}\", must be in list of class names.')\n\n # edge case\n default_null_class_in_names = (DEFAULT_NULL_CLASS_NAME in names)\n null_class_neq_default = (v != DEFAULT_NULL_CLASS_NAME)\n if default_null_class_in_names and null_class_neq_default:\n raise ConfigError(\n f'\"{DEFAULT_NULL_CLASS_NAME}\" is in names but the '\n f'specified null_class is something else (\"{v}\").')\n return v\n\n def get_class_id(self, name: str) -> int:\n return self.names.index(name)\n\n def get_name(self, id: int) -> str:\n return self.names[id]\n\n @property\n def null_class_id(self) -> int:\n if self.null_class is None:\n raise ValueError('null_class is not set')\n return self.get_class_id(self.null_class)\n\n def get_color_to_class_id(self) -> dict:\n return dict([(self.colors[i], i) for i in range(len(self.colors))])\n\n def ensure_null_class(self) -> None:\n \"\"\"Add a null class if one isn't set. This method is idempotent.\"\"\"\n if self.null_class is not None:\n return\n\n null_class_name = DEFAULT_NULL_CLASS_NAME\n null_class_color = DEFAULT_NULL_CLASS_COLOR\n\n # This might seeem redundant given the null class validator above, but\n # is actually important. Sometimes there can be multiple ClassConfig\n # instances that reference the same list objects for names and colors\n # (not clear why this happens). 
This means that\n # each ensure_null_class() call will add to names and colors in each\n # copy of ClassConfig but only set its own null_class, which makes this\n # method() non-idempotent.\n if null_class_name in self.names:\n self.null_class = null_class_name\n return\n\n # use random color if default color is already taken\n null_class_color_triple = color_to_triple(null_class_color)\n all_color_triples = [\n color_to_triple(c) if isinstance(c, str) else c\n for c in self.colors\n ]\n if null_class_color_triple in all_color_triples:\n null_class_color = color_to_triple()\n\n self.names.append(null_class_name)\n self.colors.append(null_class_color)\n self.null_class = null_class_name\n\n def __len__(self) -> int:\n return len(self.names)\n\n @property\n def color_triples(self) -> List[Tuple[float, float, float]]:\n \"\"\"Class colors in a normalized form.\"\"\"\n color_triples = [normalize_color(c) for c in self.colors]\n return color_triples\n", "path": "rastervision_core/rastervision/core/data/class_config.py"}]} | 1,792 | 136 |
gh_patches_debug_67232 | rasdani/github-patches | git_diff | pypi__warehouse-8550 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
/pypi/{package}/{version}/json: yanking an older release updates latest release's yanked_reason field
**Describe the bug**
Yanking an older version of a package leads to an unexpected side effect in the latest version's package info provided via the JSON endpoint. In particular, the `yanked_reason` field gets updated.
**Expected behavior**
When yanking a version of a package, no other version's `yanked_reason` field should be updated.
**To Reproduce**
1. Create new package on test.pypi.org
2. Release version `0.2.0`.
3. Release version `0.3.0`.
4. Yank version `0.2.0`.
5. Check json endpoint of package version `0.3.0`.
```console
$ curl -sL https://test.pypi.org/pypi/abn-test-rss-yank/0.3.0/json | jq '.info.yanked'
false
$ curl -sL https://test.pypi.org/pypi/abn-test-rss-yank/0.3.0/json | jq '.info.yanked_reason'
"Testing Yank"
```
**My Platform**
N/A
**Additional context**
* Test package: https://test.pypi.org/project/abn-test-rss-yank/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/legacy/api/json.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 from collections import OrderedDict
14
15 from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound
16 from pyramid.view import view_config
17 from sqlalchemy.orm import Load
18 from sqlalchemy.orm.exc import NoResultFound
19
20 from warehouse.cache.http import cache_control
21 from warehouse.cache.origin import origin_cache
22 from warehouse.packaging.models import File, Project, Release
23
24 # Generate appropriate CORS headers for the JSON endpoint.
25 # We want to allow Cross-Origin requests here so that users can interact
26 # with these endpoints via XHR/Fetch APIs in the browser.
27 _CORS_HEADERS = {
28 "Access-Control-Allow-Origin": "*",
29 "Access-Control-Allow-Headers": ", ".join(
30 [
31 "Content-Type",
32 "If-Match",
33 "If-Modified-Since",
34 "If-None-Match",
35 "If-Unmodified-Since",
36 ]
37 ),
38 "Access-Control-Allow-Methods": "GET",
39 "Access-Control-Max-Age": "86400", # 1 day.
40 "Access-Control-Expose-Headers": ", ".join(["X-PyPI-Last-Serial"]),
41 }
42
43 _CACHE_DECORATOR = [
44 cache_control(15 * 60), # 15 minutes
45 origin_cache(
46 1 * 24 * 60 * 60, # 1 day
47 stale_while_revalidate=5 * 60, # 5 minutes
48 stale_if_error=1 * 24 * 60 * 60, # 1 day
49 ),
50 ]
51
52
53 @view_config(
54 route_name="legacy.api.json.project",
55 context=Project,
56 renderer="json",
57 decorator=_CACHE_DECORATOR,
58 )
59 def json_project(project, request):
60 if project.name != request.matchdict.get("name", project.name):
61 return HTTPMovedPermanently(
62 request.current_route_path(name=project.name), headers=_CORS_HEADERS
63 )
64
65 try:
66 release = (
67 request.db.query(Release)
68 .filter(Release.project == project, Release.yanked.is_(False))
69 .order_by(Release.is_prerelease.nullslast(), Release._pypi_ordering.desc())
70 .limit(1)
71 .one()
72 )
73 except NoResultFound:
74 return HTTPNotFound(headers=_CORS_HEADERS)
75
76 return json_release(release, request)
77
78
79 @view_config(
80 route_name="legacy.api.json.project_slash",
81 context=Project,
82 decorator=_CACHE_DECORATOR,
83 )
84 def json_project_slash(project, request):
85 return HTTPMovedPermanently(
86 # Respond with redirect to url without trailing slash
87 request.route_path("legacy.api.json.project", name=project.name),
88 headers=_CORS_HEADERS,
89 )
90
91
92 @view_config(
93 route_name="legacy.api.json.release",
94 context=Release,
95 renderer="json",
96 decorator=_CACHE_DECORATOR,
97 )
98 def json_release(release, request):
99 project = release.project
100
101 if project.name != request.matchdict.get("name", project.name):
102 return HTTPMovedPermanently(
103 request.current_route_path(name=project.name), headers=_CORS_HEADERS
104 )
105
106 # Apply CORS headers.
107 request.response.headers.update(_CORS_HEADERS)
108
109 # Get the latest serial number for this project.
110 request.response.headers["X-PyPI-Last-Serial"] = str(project.last_serial)
111
112 # Get all of the releases and files for this project.
113 release_files = (
114 request.db.query(Release, File)
115 .options(
116 Load(Release).load_only(
117 "version", "requires_python", "yanked", "yanked_reason"
118 )
119 )
120 .outerjoin(File)
121 .filter(Release.project == project)
122 .order_by(Release._pypi_ordering.desc(), File.filename)
123 .all()
124 )
125
126 # Map our releases + files into a dictionary that maps each release to a
127 # list of all its files.
128 releases = {}
129 for r, file_ in release_files:
130 files = releases.setdefault(r, [])
131 if file_ is not None:
132 files.append(file_)
133
134 # Serialize our database objects to match the way that PyPI legacy
135 # presented this data.
136 releases = {
137 r.version: [
138 {
139 "filename": f.filename,
140 "packagetype": f.packagetype,
141 "python_version": f.python_version,
142 "has_sig": f.has_signature,
143 "comment_text": f.comment_text,
144 "md5_digest": f.md5_digest,
145 "digests": {"md5": f.md5_digest, "sha256": f.sha256_digest},
146 "size": f.size,
147 # TODO: Remove this once we've had a long enough time with it
148 # here to consider it no longer in use.
149 "downloads": -1,
150 "upload_time": f.upload_time.strftime("%Y-%m-%dT%H:%M:%S"),
151 "upload_time_iso_8601": f.upload_time.isoformat() + "Z",
152 "url": request.route_url("packaging.file", path=f.path),
153 "requires_python": r.requires_python if r.requires_python else None,
154 "yanked": r.yanked,
155 "yanked_reason": r.yanked_reason or None,
156 }
157 for f in fs
158 ]
159 for r, fs in releases.items()
160 }
161
162 return {
163 "info": {
164 "name": project.name,
165 "version": release.version,
166 "summary": release.summary,
167 "description_content_type": release.description.content_type,
168 "description": release.description.raw,
169 "keywords": release.keywords,
170 "license": release.license,
171 "classifiers": list(release.classifiers),
172 "author": release.author,
173 "author_email": release.author_email,
174 "maintainer": release.maintainer,
175 "maintainer_email": release.maintainer_email,
176 "requires_python": release.requires_python,
177 "platform": release.platform,
178 "downloads": {"last_day": -1, "last_week": -1, "last_month": -1},
179 "package_url": request.route_url("packaging.project", name=project.name),
180 "project_url": request.route_url("packaging.project", name=project.name),
181 "project_urls": OrderedDict(release.urls) if release.urls else None,
182 "release_url": request.route_url(
183 "packaging.release", name=project.name, version=release.version
184 ),
185 "requires_dist": (
186 list(release.requires_dist) if release.requires_dist else None
187 ),
188 "docs_url": project.documentation_url,
189 "bugtrack_url": None,
190 "home_page": release.home_page,
191 "download_url": release.download_url,
192 "yanked": release.yanked,
193 "yanked_reason": r.yanked_reason or None,
194 },
195 "urls": releases[release.version],
196 "releases": releases,
197 "last_serial": project.last_serial,
198 }
199
200
201 @view_config(
202 route_name="legacy.api.json.release_slash",
203 context=Release,
204 decorator=_CACHE_DECORATOR,
205 )
206 def json_release_slash(release, request):
207 return HTTPMovedPermanently(
208 # Respond with redirect to url without trailing slash
209 request.route_path(
210 "legacy.api.json.release",
211 name=release.project.name,
212 version=release.version,
213 ),
214 headers=_CORS_HEADERS,
215 )
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/warehouse/legacy/api/json.py b/warehouse/legacy/api/json.py
--- a/warehouse/legacy/api/json.py
+++ b/warehouse/legacy/api/json.py
@@ -190,7 +190,7 @@
"home_page": release.home_page,
"download_url": release.download_url,
"yanked": release.yanked,
- "yanked_reason": r.yanked_reason or None,
+ "yanked_reason": release.yanked_reason or None,
},
"urls": releases[release.version],
"releases": releases,
| {"golden_diff": "diff --git a/warehouse/legacy/api/json.py b/warehouse/legacy/api/json.py\n--- a/warehouse/legacy/api/json.py\n+++ b/warehouse/legacy/api/json.py\n@@ -190,7 +190,7 @@\n \"home_page\": release.home_page,\n \"download_url\": release.download_url,\n \"yanked\": release.yanked,\n- \"yanked_reason\": r.yanked_reason or None,\n+ \"yanked_reason\": release.yanked_reason or None,\n },\n \"urls\": releases[release.version],\n \"releases\": releases,\n", "issue": "/pypi/{package}/{version}/json: yanking an older release updates latest release's yanked_reason field\n**Describe the bug**\r\nYanking an older version of a package leads to unexpected side-effect for latest version's package info provided via the JSON endpoint. In particular, the `yanked_reason` field gets updated.\r\n\r\n**Expected behavior**\r\nWhen yanking a version of a package, no other verision's `yanked_reason` field should be updated.\r\n\r\n**To Reproduce**\r\n1. Create new package on test.pypi.org\r\n2. Release version `0.2.0`.\r\n3. Release version `0.3.0`.\r\n4. Yank version `0.2.0`.\r\n5. Check json endpoint of package version `0.3.0`.\r\n\r\n```console\r\n$ curl -sL https://test.pypi.org/pypi/abn-test-rss-yank/0.3.0/json | jq '.info.yanked'\r\nfalse\r\n$ curl -sL https://test.pypi.org/pypi/abn-test-rss-yank/0.3.0/json | jq '.info.yanked_reason'\r\n\"Testing Yank\"\r\n```\r\n\r\n**My Platform**\r\nN/A\r\n\r\n**Additional context**\r\n* Test package: https://test.pypi.org/project/abn-test-rss-yank/\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom collections import OrderedDict\n\nfrom pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\nfrom pyramid.view import view_config\nfrom sqlalchemy.orm import Load\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.cache.http import cache_control\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.models import File, Project, Release\n\n# Generate appropriate CORS headers for the JSON endpoint.\n# We want to allow Cross-Origin requests here so that users can interact\n# with these endpoints via XHR/Fetch APIs in the browser.\n_CORS_HEADERS = {\n \"Access-Control-Allow-Origin\": \"*\",\n \"Access-Control-Allow-Headers\": \", \".join(\n [\n \"Content-Type\",\n \"If-Match\",\n \"If-Modified-Since\",\n \"If-None-Match\",\n \"If-Unmodified-Since\",\n ]\n ),\n \"Access-Control-Allow-Methods\": \"GET\",\n \"Access-Control-Max-Age\": \"86400\", # 1 day.\n \"Access-Control-Expose-Headers\": \", \".join([\"X-PyPI-Last-Serial\"]),\n}\n\n_CACHE_DECORATOR = [\n cache_control(15 * 60), # 15 minutes\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=5 * 60, # 5 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n]\n\n\n@view_config(\n route_name=\"legacy.api.json.project\",\n context=Project,\n renderer=\"json\",\n decorator=_CACHE_DECORATOR,\n)\ndef json_project(project, request):\n if project.name != request.matchdict.get(\"name\", project.name):\n return 
HTTPMovedPermanently(\n request.current_route_path(name=project.name), headers=_CORS_HEADERS\n )\n\n try:\n release = (\n request.db.query(Release)\n .filter(Release.project == project, Release.yanked.is_(False))\n .order_by(Release.is_prerelease.nullslast(), Release._pypi_ordering.desc())\n .limit(1)\n .one()\n )\n except NoResultFound:\n return HTTPNotFound(headers=_CORS_HEADERS)\n\n return json_release(release, request)\n\n\n@view_config(\n route_name=\"legacy.api.json.project_slash\",\n context=Project,\n decorator=_CACHE_DECORATOR,\n)\ndef json_project_slash(project, request):\n return HTTPMovedPermanently(\n # Respond with redirect to url without trailing slash\n request.route_path(\"legacy.api.json.project\", name=project.name),\n headers=_CORS_HEADERS,\n )\n\n\n@view_config(\n route_name=\"legacy.api.json.release\",\n context=Release,\n renderer=\"json\",\n decorator=_CACHE_DECORATOR,\n)\ndef json_release(release, request):\n project = release.project\n\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(\n request.current_route_path(name=project.name), headers=_CORS_HEADERS\n )\n\n # Apply CORS headers.\n request.response.headers.update(_CORS_HEADERS)\n\n # Get the latest serial number for this project.\n request.response.headers[\"X-PyPI-Last-Serial\"] = str(project.last_serial)\n\n # Get all of the releases and files for this project.\n release_files = (\n request.db.query(Release, File)\n .options(\n Load(Release).load_only(\n \"version\", \"requires_python\", \"yanked\", \"yanked_reason\"\n )\n )\n .outerjoin(File)\n .filter(Release.project == project)\n .order_by(Release._pypi_ordering.desc(), File.filename)\n .all()\n )\n\n # Map our releases + files into a dictionary that maps each release to a\n # list of all its files.\n releases = {}\n for r, file_ in release_files:\n files = releases.setdefault(r, [])\n if file_ is not None:\n files.append(file_)\n\n # Serialize our database objects to match the way that PyPI legacy\n # presented this data.\n releases = {\n r.version: [\n {\n \"filename\": f.filename,\n \"packagetype\": f.packagetype,\n \"python_version\": f.python_version,\n \"has_sig\": f.has_signature,\n \"comment_text\": f.comment_text,\n \"md5_digest\": f.md5_digest,\n \"digests\": {\"md5\": f.md5_digest, \"sha256\": f.sha256_digest},\n \"size\": f.size,\n # TODO: Remove this once we've had a long enough time with it\n # here to consider it no longer in use.\n \"downloads\": -1,\n \"upload_time\": f.upload_time.strftime(\"%Y-%m-%dT%H:%M:%S\"),\n \"upload_time_iso_8601\": f.upload_time.isoformat() + \"Z\",\n \"url\": request.route_url(\"packaging.file\", path=f.path),\n \"requires_python\": r.requires_python if r.requires_python else None,\n \"yanked\": r.yanked,\n \"yanked_reason\": r.yanked_reason or None,\n }\n for f in fs\n ]\n for r, fs in releases.items()\n }\n\n return {\n \"info\": {\n \"name\": project.name,\n \"version\": release.version,\n \"summary\": release.summary,\n \"description_content_type\": release.description.content_type,\n \"description\": release.description.raw,\n \"keywords\": release.keywords,\n \"license\": release.license,\n \"classifiers\": list(release.classifiers),\n \"author\": release.author,\n \"author_email\": release.author_email,\n \"maintainer\": release.maintainer,\n \"maintainer_email\": release.maintainer_email,\n \"requires_python\": release.requires_python,\n \"platform\": release.platform,\n \"downloads\": {\"last_day\": -1, \"last_week\": -1, \"last_month\": -1},\n 
\"package_url\": request.route_url(\"packaging.project\", name=project.name),\n \"project_url\": request.route_url(\"packaging.project\", name=project.name),\n \"project_urls\": OrderedDict(release.urls) if release.urls else None,\n \"release_url\": request.route_url(\n \"packaging.release\", name=project.name, version=release.version\n ),\n \"requires_dist\": (\n list(release.requires_dist) if release.requires_dist else None\n ),\n \"docs_url\": project.documentation_url,\n \"bugtrack_url\": None,\n \"home_page\": release.home_page,\n \"download_url\": release.download_url,\n \"yanked\": release.yanked,\n \"yanked_reason\": r.yanked_reason or None,\n },\n \"urls\": releases[release.version],\n \"releases\": releases,\n \"last_serial\": project.last_serial,\n }\n\n\n@view_config(\n route_name=\"legacy.api.json.release_slash\",\n context=Release,\n decorator=_CACHE_DECORATOR,\n)\ndef json_release_slash(release, request):\n return HTTPMovedPermanently(\n # Respond with redirect to url without trailing slash\n request.route_path(\n \"legacy.api.json.release\",\n name=release.project.name,\n version=release.version,\n ),\n headers=_CORS_HEADERS,\n )\n", "path": "warehouse/legacy/api/json.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom collections import OrderedDict\n\nfrom pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\nfrom pyramid.view import view_config\nfrom sqlalchemy.orm import Load\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.cache.http import cache_control\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.models import File, Project, Release\n\n# Generate appropriate CORS headers for the JSON endpoint.\n# We want to allow Cross-Origin requests here so that users can interact\n# with these endpoints via XHR/Fetch APIs in the browser.\n_CORS_HEADERS = {\n \"Access-Control-Allow-Origin\": \"*\",\n \"Access-Control-Allow-Headers\": \", \".join(\n [\n \"Content-Type\",\n \"If-Match\",\n \"If-Modified-Since\",\n \"If-None-Match\",\n \"If-Unmodified-Since\",\n ]\n ),\n \"Access-Control-Allow-Methods\": \"GET\",\n \"Access-Control-Max-Age\": \"86400\", # 1 day.\n \"Access-Control-Expose-Headers\": \", \".join([\"X-PyPI-Last-Serial\"]),\n}\n\n_CACHE_DECORATOR = [\n cache_control(15 * 60), # 15 minutes\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=5 * 60, # 5 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n]\n\n\n@view_config(\n route_name=\"legacy.api.json.project\",\n context=Project,\n renderer=\"json\",\n decorator=_CACHE_DECORATOR,\n)\ndef json_project(project, request):\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(\n request.current_route_path(name=project.name), headers=_CORS_HEADERS\n )\n\n try:\n release = (\n request.db.query(Release)\n .filter(Release.project == project, Release.yanked.is_(False))\n .order_by(Release.is_prerelease.nullslast(), Release._pypi_ordering.desc())\n .limit(1)\n .one()\n )\n 
except NoResultFound:\n return HTTPNotFound(headers=_CORS_HEADERS)\n\n return json_release(release, request)\n\n\n@view_config(\n route_name=\"legacy.api.json.project_slash\",\n context=Project,\n decorator=_CACHE_DECORATOR,\n)\ndef json_project_slash(project, request):\n return HTTPMovedPermanently(\n # Respond with redirect to url without trailing slash\n request.route_path(\"legacy.api.json.project\", name=project.name),\n headers=_CORS_HEADERS,\n )\n\n\n@view_config(\n route_name=\"legacy.api.json.release\",\n context=Release,\n renderer=\"json\",\n decorator=_CACHE_DECORATOR,\n)\ndef json_release(release, request):\n project = release.project\n\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(\n request.current_route_path(name=project.name), headers=_CORS_HEADERS\n )\n\n # Apply CORS headers.\n request.response.headers.update(_CORS_HEADERS)\n\n # Get the latest serial number for this project.\n request.response.headers[\"X-PyPI-Last-Serial\"] = str(project.last_serial)\n\n # Get all of the releases and files for this project.\n release_files = (\n request.db.query(Release, File)\n .options(\n Load(Release).load_only(\n \"version\", \"requires_python\", \"yanked\", \"yanked_reason\"\n )\n )\n .outerjoin(File)\n .filter(Release.project == project)\n .order_by(Release._pypi_ordering.desc(), File.filename)\n .all()\n )\n\n # Map our releases + files into a dictionary that maps each release to a\n # list of all its files.\n releases = {}\n for r, file_ in release_files:\n files = releases.setdefault(r, [])\n if file_ is not None:\n files.append(file_)\n\n # Serialize our database objects to match the way that PyPI legacy\n # presented this data.\n releases = {\n r.version: [\n {\n \"filename\": f.filename,\n \"packagetype\": f.packagetype,\n \"python_version\": f.python_version,\n \"has_sig\": f.has_signature,\n \"comment_text\": f.comment_text,\n \"md5_digest\": f.md5_digest,\n \"digests\": {\"md5\": f.md5_digest, \"sha256\": f.sha256_digest},\n \"size\": f.size,\n # TODO: Remove this once we've had a long enough time with it\n # here to consider it no longer in use.\n \"downloads\": -1,\n \"upload_time\": f.upload_time.strftime(\"%Y-%m-%dT%H:%M:%S\"),\n \"upload_time_iso_8601\": f.upload_time.isoformat() + \"Z\",\n \"url\": request.route_url(\"packaging.file\", path=f.path),\n \"requires_python\": r.requires_python if r.requires_python else None,\n \"yanked\": r.yanked,\n \"yanked_reason\": r.yanked_reason or None,\n }\n for f in fs\n ]\n for r, fs in releases.items()\n }\n\n return {\n \"info\": {\n \"name\": project.name,\n \"version\": release.version,\n \"summary\": release.summary,\n \"description_content_type\": release.description.content_type,\n \"description\": release.description.raw,\n \"keywords\": release.keywords,\n \"license\": release.license,\n \"classifiers\": list(release.classifiers),\n \"author\": release.author,\n \"author_email\": release.author_email,\n \"maintainer\": release.maintainer,\n \"maintainer_email\": release.maintainer_email,\n \"requires_python\": release.requires_python,\n \"platform\": release.platform,\n \"downloads\": {\"last_day\": -1, \"last_week\": -1, \"last_month\": -1},\n \"package_url\": request.route_url(\"packaging.project\", name=project.name),\n \"project_url\": request.route_url(\"packaging.project\", name=project.name),\n \"project_urls\": OrderedDict(release.urls) if release.urls else None,\n \"release_url\": request.route_url(\n \"packaging.release\", name=project.name, 
version=release.version\n ),\n \"requires_dist\": (\n list(release.requires_dist) if release.requires_dist else None\n ),\n \"docs_url\": project.documentation_url,\n \"bugtrack_url\": None,\n \"home_page\": release.home_page,\n \"download_url\": release.download_url,\n \"yanked\": release.yanked,\n \"yanked_reason\": release.yanked_reason or None,\n },\n \"urls\": releases[release.version],\n \"releases\": releases,\n \"last_serial\": project.last_serial,\n }\n\n\n@view_config(\n route_name=\"legacy.api.json.release_slash\",\n context=Release,\n decorator=_CACHE_DECORATOR,\n)\ndef json_release_slash(release, request):\n return HTTPMovedPermanently(\n # Respond with redirect to url without trailing slash\n request.route_path(\n \"legacy.api.json.release\",\n name=release.project.name,\n version=release.version,\n ),\n headers=_CORS_HEADERS,\n )\n", "path": "warehouse/legacy/api/json.py"}]} | 2,794 | 127 |
gh_patches_debug_10054 | rasdani/github-patches | git_diff | acl-org__acl-anthology-990 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Recaser bug: adding fixed-case inside tex-math markup
`<tex-math><fixed-case>O</fixed-case>(<fixed-case>M</fixed-case>(n^2))</tex-math>` caused the build to fail in #892
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bin/fixedcase/protect.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # protect.py <infile> <outfile>
4 # looks for file "truelist" in current dir
5
6 # cd data/xml
7 # for i in *xml ; do (cd ../../tools/fixedcase/ ; python3 ./protect.py ../../data/xml/$i /tmp/$i ; echo $i ); done > log
8
9
10 import lxml.etree as ET
11 import os
12 import sys
13 import copy
14 import itertools
15 import inspect
16
17 from collections import defaultdict
18
19 if __name__ == "__main__":
20 from common import *
21 else:
22 from .common import *
23
24 # recursive helper called by protect
25 # protect text of "node", including children, and tails of children
26 def protect_recurse(node, recased):
27 if node.tag == "fixed-case": # already protected
28 newnode = copy.deepcopy(node) # don't need to modify descendents
29 newnode.tail = None # tail will be protected by caller
30 return newnode
31 newnode = ET.Element(node.tag, node.attrib)
32
33 def process(text, rc):
34 i = 0
35 for upper, chars in itertools.groupby(rc[: len(text)], lambda c: c.isupper()):
36 charstr = "".join(chars)
37 if upper:
38 p = ET.Element("fixed-case")
39 p.text = charstr
40 newnode.append(p)
41 else:
42 append_text(newnode, text[i : i + len(charstr)])
43
44 assert text[i : i + len(charstr)].lower() == charstr.lower(), (
45 i,
46 text,
47 charstr,
48 )
49 i += len(charstr)
50
51 if node.text:
52 process(node.text, recased)
53 recased = recased[len(node.text) :]
54 for child in node:
55 protected_child = protect_recurse(child, recased)
56 recased = recased[len(get_text(protected_child)) :]
57 newnode.append(protected_child)
58 if child.tail:
59 process(child.tail, recased)
60 recased = recased[len(child.tail) :]
61
62 return newnode
63
64
65 def protect(node):
66 rawtext = get_text(node).strip()
67 recased = None
68 if rawtext.lower() in special_titles:
69 recased = special_titles[rawtext.lower()]
70 else:
71 text = tokenize(rawtext)
72 fixed = fixedcase_title(
73 text,
74 truelist=truelist,
75 phrase_truelist=phrase_truelist,
76 amodifiers=amodifiers,
77 ndescriptors=ndescriptors,
78 )
79 if any(fixed):
80 # Generate the recased string so we know where to look in the XML
81 # to apply fixed-case
82 recasedtoks = [(w if b else w.lower()) for w, b in zip(text, fixed)]
83 recased = TreebankWordDetokenizer().detokenize(recasedtoks)
84 # PTB (de)tokenizer doesn't think of hyphens as separate tokens,
85 # so we need to manually detokenize them.
86 # Assuming the only edits that need to be made are adding/deleting
87 # spaces, the following will work:
88 i = 0
89 while i < len(rawtext):
90 # scan rawtext from left to right and adjust recased by adding/removing
91 # spaces until it matches
92 t = rawtext[i]
93 assert i < len(recased), ((i, t), rawtext, recased)
94 c = recased[i]
95 if t.isspace() and not c.isspace(): # may be ' ' or '\n'
96 # add space to recased
97 recased = recased[:i] + t + recased[i:]
98 i += 1
99 elif c.isspace() and not t.isspace():
100 # remove space from recased
101 recased = recased[:i] + recased[i + 1 :]
102 # don't increment i
103 elif t != c and t.isspace() and c.isspace():
104 recased = recased[:i] + t + recased[i + 1 :]
105 i += 1
106 else:
107 assert t == c or t.lower() == c.lower(), (
108 (i, t, c),
109 rawtext,
110 recased,
111 text,
112 )
113 i += 1
114 if len(recased) > len(rawtext):
115 recased = recased[: len(rawtext)]
116 assert rawtext.lower() == recased.lower(), (rawtext, recased)
117
118 if recased:
119 newnode = protect_recurse(node, recased)
120 newnode.tail = node.tail # tail of top level is not protected
121 replace_node(node, newnode)
122
123
124 # Read in the truelist (list of words that should always be protected)
125 truelist, phrase_truelist, special_titles, amodifiers, ndescriptors = load_lists()
126
127 if __name__ == "__main__":
128 infile, outfile = sys.argv[1:]
129
130 tree = ET.parse(infile)
131 if not tree.getroot().tail:
132 tree.getroot().tail = "\n"
133 for paper in tree.getroot().findall(".//paper"):
134 for title in paper.xpath("./title|./booktitle"):
135 protect(title)
136 tree.write(outfile, encoding="UTF-8", xml_declaration=True)
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/bin/fixedcase/protect.py b/bin/fixedcase/protect.py
--- a/bin/fixedcase/protect.py
+++ b/bin/fixedcase/protect.py
@@ -24,7 +24,7 @@
# recursive helper called by protect
# protect text of "node", including children, and tails of children
def protect_recurse(node, recased):
- if node.tag == "fixed-case": # already protected
+ if node.tag in ("fixed-case", "tex-math"): # already protected text, or math
newnode = copy.deepcopy(node) # don't need to modify descendents
newnode.tail = None # tail will be protected by caller
return newnode
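The failure mode is easy to reproduce with just the `itertools.groupby` split that `protect_recurse` applies: inside a `<tex-math>` element, the uppercase runs of a recased formula still get `<fixed-case>` wrappers, which is exactly what the fix above short-circuits by skipping `tex-math` subtrees. A minimal sketch using only the standard library (no Anthology code):

```python
import itertools

recased = "O(M(n^2))"   # text of the <tex-math> element from the issue
runs = ["".join(chars)
        for upper, chars in itertools.groupby(recased, lambda c: c.isupper())]
print(runs)  # ['O', '(', 'M', '(n^2))'] -- 'O' and 'M' would be wrapped in <fixed-case>
```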
| {"golden_diff": "diff --git a/bin/fixedcase/protect.py b/bin/fixedcase/protect.py\n--- a/bin/fixedcase/protect.py\n+++ b/bin/fixedcase/protect.py\n@@ -24,7 +24,7 @@\n # recursive helper called by protect\n # protect text of \"node\", including children, and tails of children\n def protect_recurse(node, recased):\n- if node.tag == \"fixed-case\": # already protected\n+ if node.tag in (\"fixed-case\", \"tex-math\"): # already protected text, or math\n newnode = copy.deepcopy(node) # don't need to modify descendents\n newnode.tail = None # tail will be protected by caller\n return newnode\n", "issue": "Recaser bug: adding fixed-case inside tex-math markup\n`<tex-math><fixed-case>O</fixed-case>(<fixed-case>M</fixed-case>(n^2))</tex-math>` caused the build to fail in #892\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# protect.py <infile> <outfile>\n# looks for file \"truelist\" in current dir\n\n# cd data/xml\n# for i in *xml ; do (cd ../../tools/fixedcase/ ; python3 ./protect.py ../../data/xml/$i /tmp/$i ; echo $i ); done > log\n\n\nimport lxml.etree as ET\nimport os\nimport sys\nimport copy\nimport itertools\nimport inspect\n\nfrom collections import defaultdict\n\nif __name__ == \"__main__\":\n from common import *\nelse:\n from .common import *\n\n# recursive helper called by protect\n# protect text of \"node\", including children, and tails of children\ndef protect_recurse(node, recased):\n if node.tag == \"fixed-case\": # already protected\n newnode = copy.deepcopy(node) # don't need to modify descendents\n newnode.tail = None # tail will be protected by caller\n return newnode\n newnode = ET.Element(node.tag, node.attrib)\n\n def process(text, rc):\n i = 0\n for upper, chars in itertools.groupby(rc[: len(text)], lambda c: c.isupper()):\n charstr = \"\".join(chars)\n if upper:\n p = ET.Element(\"fixed-case\")\n p.text = charstr\n newnode.append(p)\n else:\n append_text(newnode, text[i : i + len(charstr)])\n\n assert text[i : i + len(charstr)].lower() == charstr.lower(), (\n i,\n text,\n charstr,\n )\n i += len(charstr)\n\n if node.text:\n process(node.text, recased)\n recased = recased[len(node.text) :]\n for child in node:\n protected_child = protect_recurse(child, recased)\n recased = recased[len(get_text(protected_child)) :]\n newnode.append(protected_child)\n if child.tail:\n process(child.tail, recased)\n recased = recased[len(child.tail) :]\n\n return newnode\n\n\ndef protect(node):\n rawtext = get_text(node).strip()\n recased = None\n if rawtext.lower() in special_titles:\n recased = special_titles[rawtext.lower()]\n else:\n text = tokenize(rawtext)\n fixed = fixedcase_title(\n text,\n truelist=truelist,\n phrase_truelist=phrase_truelist,\n amodifiers=amodifiers,\n ndescriptors=ndescriptors,\n )\n if any(fixed):\n # Generate the recased string so we know where to look in the XML\n # to apply fixed-case\n recasedtoks = [(w if b else w.lower()) for w, b in zip(text, fixed)]\n recased = TreebankWordDetokenizer().detokenize(recasedtoks)\n # PTB (de)tokenizer doesn't think of hyphens as separate tokens,\n # so we need to manually detokenize them.\n # Assuming the only edits that need to be made are adding/deleting\n # spaces, the following will work:\n i = 0\n while i < len(rawtext):\n # scan rawtext from left to right and adjust recased by adding/removing\n # spaces until it matches\n t = rawtext[i]\n assert i < len(recased), ((i, t), rawtext, recased)\n c = recased[i]\n if t.isspace() and not c.isspace(): # may be ' ' or '\\n'\n # add space to recased\n recased = 
recased[:i] + t + recased[i:]\n i += 1\n elif c.isspace() and not t.isspace():\n # remove space from recased\n recased = recased[:i] + recased[i + 1 :]\n # don't increment i\n elif t != c and t.isspace() and c.isspace():\n recased = recased[:i] + t + recased[i + 1 :]\n i += 1\n else:\n assert t == c or t.lower() == c.lower(), (\n (i, t, c),\n rawtext,\n recased,\n text,\n )\n i += 1\n if len(recased) > len(rawtext):\n recased = recased[: len(rawtext)]\n assert rawtext.lower() == recased.lower(), (rawtext, recased)\n\n if recased:\n newnode = protect_recurse(node, recased)\n newnode.tail = node.tail # tail of top level is not protected\n replace_node(node, newnode)\n\n\n# Read in the truelist (list of words that should always be protected)\ntruelist, phrase_truelist, special_titles, amodifiers, ndescriptors = load_lists()\n\nif __name__ == \"__main__\":\n infile, outfile = sys.argv[1:]\n\n tree = ET.parse(infile)\n if not tree.getroot().tail:\n tree.getroot().tail = \"\\n\"\n for paper in tree.getroot().findall(\".//paper\"):\n for title in paper.xpath(\"./title|./booktitle\"):\n protect(title)\n tree.write(outfile, encoding=\"UTF-8\", xml_declaration=True)\n", "path": "bin/fixedcase/protect.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# protect.py <infile> <outfile>\n# looks for file \"truelist\" in current dir\n\n# cd data/xml\n# for i in *xml ; do (cd ../../tools/fixedcase/ ; python3 ./protect.py ../../data/xml/$i /tmp/$i ; echo $i ); done > log\n\n\nimport lxml.etree as ET\nimport os\nimport sys\nimport copy\nimport itertools\nimport inspect\n\nfrom collections import defaultdict\n\nif __name__ == \"__main__\":\n from common import *\nelse:\n from .common import *\n\n# recursive helper called by protect\n# protect text of \"node\", including children, and tails of children\ndef protect_recurse(node, recased):\n if node.tag in (\"fixed-case\", \"tex-math\"): # already protected text, or math\n newnode = copy.deepcopy(node) # don't need to modify descendents\n newnode.tail = None # tail will be protected by caller\n return newnode\n newnode = ET.Element(node.tag, node.attrib)\n\n def process(text, rc):\n i = 0\n for upper, chars in itertools.groupby(rc[: len(text)], lambda c: c.isupper()):\n charstr = \"\".join(chars)\n if upper:\n p = ET.Element(\"fixed-case\")\n p.text = charstr\n newnode.append(p)\n else:\n append_text(newnode, text[i : i + len(charstr)])\n\n assert text[i : i + len(charstr)].lower() == charstr.lower(), (\n i,\n text,\n charstr,\n )\n i += len(charstr)\n\n if node.text:\n process(node.text, recased)\n recased = recased[len(node.text) :]\n for child in node:\n protected_child = protect_recurse(child, recased)\n recased = recased[len(get_text(protected_child)) :]\n newnode.append(protected_child)\n if child.tail:\n process(child.tail, recased)\n recased = recased[len(child.tail) :]\n\n return newnode\n\n\ndef protect(node):\n rawtext = get_text(node).strip()\n recased = None\n if rawtext.lower() in special_titles:\n recased = special_titles[rawtext.lower()]\n else:\n text = tokenize(rawtext)\n fixed = fixedcase_title(\n text,\n truelist=truelist,\n phrase_truelist=phrase_truelist,\n amodifiers=amodifiers,\n ndescriptors=ndescriptors,\n )\n if any(fixed):\n # Generate the recased string so we know where to look in the XML\n # to apply fixed-case\n recasedtoks = [(w if b else w.lower()) for w, b in zip(text, fixed)]\n recased = TreebankWordDetokenizer().detokenize(recasedtoks)\n # PTB (de)tokenizer doesn't think of hyphens as separate tokens,\n # so we 
need to manually detokenize them.\n # Assuming the only edits that need to be made are adding/deleting\n # spaces, the following will work:\n i = 0\n while i < len(rawtext):\n # scan rawtext from left to right and adjust recased by adding/removing\n # spaces until it matches\n t = rawtext[i]\n assert i < len(recased), ((i, t), rawtext, recased)\n c = recased[i]\n if t.isspace() and not c.isspace(): # may be ' ' or '\\n'\n # add space to recased\n recased = recased[:i] + t + recased[i:]\n i += 1\n elif c.isspace() and not t.isspace():\n # remove space from recased\n recased = recased[:i] + recased[i + 1 :]\n # don't increment i\n elif t != c and t.isspace() and c.isspace():\n recased = recased[:i] + t + recased[i + 1 :]\n i += 1\n else:\n assert t == c or t.lower() == c.lower(), (\n (i, t, c),\n rawtext,\n recased,\n text,\n )\n i += 1\n if len(recased) > len(rawtext):\n recased = recased[: len(rawtext)]\n assert rawtext.lower() == recased.lower(), (rawtext, recased)\n\n if recased:\n newnode = protect_recurse(node, recased)\n newnode.tail = node.tail # tail of top level is not protected\n replace_node(node, newnode)\n\n\n# Read in the truelist (list of words that should always be protected)\ntruelist, phrase_truelist, special_titles, amodifiers, ndescriptors = load_lists()\n\nif __name__ == \"__main__\":\n infile, outfile = sys.argv[1:]\n\n tree = ET.parse(infile)\n if not tree.getroot().tail:\n tree.getroot().tail = \"\\n\"\n for paper in tree.getroot().findall(\".//paper\"):\n for title in paper.xpath(\"./title|./booktitle\"):\n protect(title)\n tree.write(outfile, encoding=\"UTF-8\", xml_declaration=True)\n", "path": "bin/fixedcase/protect.py"}]} | 1,767 | 161 |
gh_patches_debug_47339 | rasdani/github-patches | git_diff | enthought__chaco-904 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Wx BitmapFromBufferRGBA deprecated in favor of Bitmap.FromBufferRGBA
```
c:\users\rporuri\work\github\ets\chaco\chaco\tools\toolbars\toolbar_buttons.py:190: wxPyDeprecationWarning: Call to deprecated item BitmapFromBufferRGBA. Use :meth:`wx.Bitmap.FromBufferRGBA` instead.
width + 1, height + 1, gc.bmp_array.flatten()
```
Discovered when running `examples/demo/depth.py`
--- END ISSUE ---
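For readers unfamiliar with the wxPython 4 (Phoenix) API change behind this warning, the following is a minimal editorial sketch (not part of the original report). It shows the deprecated module-level factory next to its `wx.Bitmap.FromBufferRGBA` classmethod replacement; `gc.bmp_array` is assumed to be the RGBA pixel buffer exposed by the Kiva graphics context:
```python
import wx

def kiva_buffer_to_bitmap(gc, width, height):
    # Deprecated spelling; emits wxPyDeprecationWarning under wxPython 4:
    #   bitmap = wx.BitmapFromBufferRGBA(width + 1, height + 1, gc.bmp_array.flatten())
    # Preferred spelling uses the classmethod on wx.Bitmap:
    return wx.Bitmap.FromBufferRGBA(width + 1, height + 1, gc.bmp_array.flatten())
```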
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chaco/tools/toolbars/toolbar_buttons.py`
Content:
```
1 # (C) Copyright 2005-2021 Enthought, Inc., Austin, TX
2 # All rights reserved.
3 #
4 # This software is provided without warranty under the terms of the BSD
5 # license included in LICENSE.txt and may be redistributed only under
6 # the conditions described in the aforementioned license. The license
7 # is also available online at http://www.enthought.com/licenses/BSD.txt
8 #
9 # Thanks for using Enthought open source!
10
11 import numpy
12
13 from traits.etsconfig.api import ETSConfig
14 from enable.tools.toolbars.toolbar_buttons import Button
15 from chaco.tools.zoom_tool import ZoomTool
16 from chaco.plot_graphics_context import PlotGraphicsContext
17 from kiva.image import Image
18 from pyface.image_resource import ImageResource
19 from pyface.api import FileDialog, OK, error
20 from traits.api import (
21 Instance,
22 Str,
23 Property,
24 cached_property,
25 List,
26 Int,
27 Enum,
28 )
29
30
31 class ToolbarButton(Button):
32 image = Str()
33 _image = Instance(Image)
34
35 color = "black"
36
37 width = Property(Int, observe="label, image")
38 height = Property(Int, observe="label, image")
39
40 # bounds are used for hit testing
41 bounds = Property(List, observe="label, image")
42
43 def __init__(self, *args, **kw):
44 super().__init__(*args, **kw)
45
46 image_resource = ImageResource(self.image)
47 self._image = Image(image_resource.absolute_path)
48
49 @cached_property
50 def _get_width(self):
51 gc = PlotGraphicsContext((100, 100), dpi=72)
52 gc.set_font(self.label_font)
53 (w, h, descent, leading) = gc.get_full_text_extent(self.label)
54 return max(self._image.width(), w)
55
56 @cached_property
57 def _get_height(self):
58 gc = PlotGraphicsContext((100, 100), dpi=72)
59 gc.set_font(self.label_font)
60 (w, h, descent, leading) = gc.get_full_text_extent(self.label)
61 return self._image.height() + h
62
63 @cached_property
64 def _get_bounds(self):
65 return [self.width, self.height]
66
67 def _draw_actual_button(self, gc):
68 x_offset = self.x + (self.width - self._image.width()) / 2
69 gc.draw_image(
70 self._image,
71 (x_offset, self.y + 2, self._image.width(), self._image.height()),
72 )
73
74 if self.label is not None and len(self.label) > 0:
75 gc.set_font(self.label_font)
76
77 (w, h, descent, leading) = gc.get_full_text_extent(self.label)
78 if w < self.width:
79 x_offset = self.x + (self.width - w) / 2
80 else:
81 x_offset = self.x
82
83 gc.set_text_position(x_offset, self.y - 8)
84 gc.show_text(self.label)
85
86
87 class IndexAxisLogButton(ToolbarButton):
88 label = "X Log Scale"
89 tooltip = "Change index axis scale"
90 image = "zoom-fit-width"
91
92 def perform(self, event):
93 if self.container.component.index_scale == "linear":
94 self.container.component.index_scale = "log"
95 else:
96 self.container.component.index_scale = "linear"
97 self.container.request_redraw()
98
99
100 class ValueAxisLogButton(ToolbarButton):
101 label = "Y Log Scale"
102 tooltip = "Change value axis scale"
103 image = "zoom-fit-height"
104
105 def perform(self, event):
106 if self.container.component.value_scale == "linear":
107 self.container.component.value_scale = "log"
108 else:
109 self.container.component.value_scale = "linear"
110 self.container.request_redraw()
111
112
113 class ZoomResetButton(ToolbarButton):
114 label = "Zoom Reset"
115 tooltip = "Zoom Reset"
116 image = "zoom-original"
117
118 def perform(self, event):
119 plot_component = self.container.component
120
121 for overlay in plot_component.overlays:
122 if isinstance(overlay, ZoomTool):
123 overlay._reset_state_pressed()
124
125 self.container.request_redraw()
126
127
128 class SaveAsButton(ToolbarButton):
129 label = "Save As"
130 tooltip = "Save As"
131 image = "document-save"
132
133 def perform(self, event):
134
135 plot_component = self.container.component
136
137 filter = "PNG file (*.png)|*.png|\nTIFF file (*.tiff)|*.tiff|"
138 dialog = FileDialog(action="save as", wildcard=filter)
139
140 if dialog.open() != OK:
141 return
142
143 # Remove the toolbar before saving the plot, so the output doesn't
144 # include the toolbar.
145 plot_component.remove_toolbar()
146
147 filename = dialog.path
148
149 width, height = plot_component.outer_bounds
150
151 gc = PlotGraphicsContext((width, height), dpi=72)
152 gc.render_component(plot_component)
153 try:
154 gc.save(filename)
155 except KeyError as e:
156 errmsg = (
157 "The filename must have an extension that matches "
158 "a graphics format, such as '.png' or '.tiff'."
159 )
160 if str(e.message) != "":
161 errmsg = (
162 "Unknown filename extension: '%s'\n" % str(e.message)
163 ) + errmsg
164
165 error(None, errmsg, title="Invalid Filename Extension")
166
167 # Restore the toolbar.
168 plot_component.add_toolbar()
169
170
171 class CopyToClipboardButton(ToolbarButton):
172 label = "Copy Image"
173 tooltip = "Copy to the clipboard"
174 image = "edit-copy"
175
176 def perform(self, event):
177 plot_component = self.container.component
178
179 # Remove the toolbar before saving the plot, so the output doesn't
180 # include the toolbar.
181 plot_component.remove_toolbar()
182
183 width, height = plot_component.outer_bounds
184
185 gc = PlotGraphicsContext((width, height), dpi=72)
186 gc.render_component(plot_component)
187
188 if ETSConfig.toolkit == "wx":
189 self._perform_wx(width, height, gc)
190 else:
191 pass
192
193 # Restore the toolbar.
194 plot_component.add_toolbar()
195
196 def _perform_wx(self, width, height, gc):
197 import wx
198
199 bitmap = wx.BitmapFromBufferRGBA(
200 width + 1, height + 1, gc.bmp_array.flatten()
201 )
202 data = wx.BitmapDataObject()
203 data.SetBitmap(bitmap)
204 if wx.TheClipboard.Open():
205 wx.TheClipboard.SetData(data)
206 wx.TheClipboard.Close()
207 else:
208 wx.MessageBox("Unable to open the clipboard.", "Error")
209
210
211 class ExportDataToClipboardButton(ToolbarButton):
212 label = "Copy Data"
213 tooltip = "Copy data to the clipboard"
214 image = "application-vnd-ms-excel"
215
216 orientation = Enum("v", "h")
217
218 def perform(self, event):
219 if ETSConfig.toolkit == "wx":
220 self._perform_wx()
221 elif ETSConfig.toolkit == "qt4":
222 self._perform_qt()
223 else:
224 pass
225
226 def _get_data_from_plots(self):
227 values = []
228 indices = []
229 for renderers in self.container.component.plots.values():
230 for renderer in renderers:
231 indices.append(renderer.index.get_data())
232 values.append(renderer.value.get_data())
233 return indices, values
234
235 def _serialize_data(self, indices, values):
236
237 # if all of rows are the same length, use faster algorithms,
238 # otherwise go element by element adding the necessary empty strings
239 if len(set([len(l) for l in values])) == 1:
240 data = [indices[0]] + values
241 if self.orientation == "v":
242 data = numpy.array(data).T.tolist()
243
244 data_str = ""
245 for row in data:
246 data_str += ",".join(["%f" % v for v in row]) + "\n"
247 return data_str
248
249 else:
250 # There might not be a single solution which fits all cases,
251 # so this is left to specific implementations to override
252 raise NotImplementedError()
253
254 def _perform_wx(self):
255 import wx
256
257 indices, values = self._get_data_from_plots()
258 data_str = self._serialize_data(indices, values)
259 data_obj = wx.TextDataObject(data_str)
260
261 if wx.TheClipboard.Open():
262 wx.TheClipboard.SetData(data_obj)
263 wx.TheClipboard.Close()
264 else:
265 wx.MessageBox("Unable to open the clipboard.", "Error")
266
267 def _perform_qt(self):
268 from pyface.qt import QtGui
269
270 indices, values = self._get_data_from_plots()
271 data_str = self._serialize_data(indices, values)
272
273 QtGui.QApplication.clipboard().setText(data_str)
274
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chaco/tools/toolbars/toolbar_buttons.py b/chaco/tools/toolbars/toolbar_buttons.py
--- a/chaco/tools/toolbars/toolbar_buttons.py
+++ b/chaco/tools/toolbars/toolbar_buttons.py
@@ -196,7 +196,7 @@
def _perform_wx(self, width, height, gc):
import wx
- bitmap = wx.BitmapFromBufferRGBA(
+ bitmap = wx.Bitmap.FromBufferRGBA(
width + 1, height + 1, gc.bmp_array.flatten()
)
data = wx.BitmapDataObject()
| {"golden_diff": "diff --git a/chaco/tools/toolbars/toolbar_buttons.py b/chaco/tools/toolbars/toolbar_buttons.py\n--- a/chaco/tools/toolbars/toolbar_buttons.py\n+++ b/chaco/tools/toolbars/toolbar_buttons.py\n@@ -196,7 +196,7 @@\n def _perform_wx(self, width, height, gc):\n import wx\n \n- bitmap = wx.BitmapFromBufferRGBA(\n+ bitmap = wx.Bitmap.FromBufferRGBA(\n width + 1, height + 1, gc.bmp_array.flatten()\n )\n data = wx.BitmapDataObject()\n", "issue": "Wx BitmapFromBufferRGBA deprecated in favor of Bitmap.FromBufferRGBA\n```\r\nc:\\users\\rporuri\\work\\github\\ets\\chaco\\chaco\\tools\\toolbars\\toolbar_buttons.py:190: wxPyDeprecationWarning: Call to deprecated item BitmapFromBufferRGBA. Use :meth:`wx.Bitmap.FromBufferRGBA` instead.\r\n width + 1, height + 1, gc.bmp_array.flatten()\r\n```\r\n\r\nDiscovered when running `examples/demo/depth.py`\n", "before_files": [{"content": "# (C) Copyright 2005-2021 Enthought, Inc., Austin, TX\n# All rights reserved.\n#\n# This software is provided without warranty under the terms of the BSD\n# license included in LICENSE.txt and may be redistributed only under\n# the conditions described in the aforementioned license. The license\n# is also available online at http://www.enthought.com/licenses/BSD.txt\n#\n# Thanks for using Enthought open source!\n\nimport numpy\n\nfrom traits.etsconfig.api import ETSConfig\nfrom enable.tools.toolbars.toolbar_buttons import Button\nfrom chaco.tools.zoom_tool import ZoomTool\nfrom chaco.plot_graphics_context import PlotGraphicsContext\nfrom kiva.image import Image\nfrom pyface.image_resource import ImageResource\nfrom pyface.api import FileDialog, OK, error\nfrom traits.api import (\n Instance,\n Str,\n Property,\n cached_property,\n List,\n Int,\n Enum,\n)\n\n\nclass ToolbarButton(Button):\n image = Str()\n _image = Instance(Image)\n\n color = \"black\"\n\n width = Property(Int, observe=\"label, image\")\n height = Property(Int, observe=\"label, image\")\n\n # bounds are used for hit testing\n bounds = Property(List, observe=\"label, image\")\n\n def __init__(self, *args, **kw):\n super().__init__(*args, **kw)\n\n image_resource = ImageResource(self.image)\n self._image = Image(image_resource.absolute_path)\n\n @cached_property\n def _get_width(self):\n gc = PlotGraphicsContext((100, 100), dpi=72)\n gc.set_font(self.label_font)\n (w, h, descent, leading) = gc.get_full_text_extent(self.label)\n return max(self._image.width(), w)\n\n @cached_property\n def _get_height(self):\n gc = PlotGraphicsContext((100, 100), dpi=72)\n gc.set_font(self.label_font)\n (w, h, descent, leading) = gc.get_full_text_extent(self.label)\n return self._image.height() + h\n\n @cached_property\n def _get_bounds(self):\n return [self.width, self.height]\n\n def _draw_actual_button(self, gc):\n x_offset = self.x + (self.width - self._image.width()) / 2\n gc.draw_image(\n self._image,\n (x_offset, self.y + 2, self._image.width(), self._image.height()),\n )\n\n if self.label is not None and len(self.label) > 0:\n gc.set_font(self.label_font)\n\n (w, h, descent, leading) = gc.get_full_text_extent(self.label)\n if w < self.width:\n x_offset = self.x + (self.width - w) / 2\n else:\n x_offset = self.x\n\n gc.set_text_position(x_offset, self.y - 8)\n gc.show_text(self.label)\n\n\nclass IndexAxisLogButton(ToolbarButton):\n label = \"X Log Scale\"\n tooltip = \"Change index axis scale\"\n image = \"zoom-fit-width\"\n\n def perform(self, event):\n if self.container.component.index_scale == \"linear\":\n self.container.component.index_scale = \"log\"\n 
else:\n self.container.component.index_scale = \"linear\"\n self.container.request_redraw()\n\n\nclass ValueAxisLogButton(ToolbarButton):\n label = \"Y Log Scale\"\n tooltip = \"Change value axis scale\"\n image = \"zoom-fit-height\"\n\n def perform(self, event):\n if self.container.component.value_scale == \"linear\":\n self.container.component.value_scale = \"log\"\n else:\n self.container.component.value_scale = \"linear\"\n self.container.request_redraw()\n\n\nclass ZoomResetButton(ToolbarButton):\n label = \"Zoom Reset\"\n tooltip = \"Zoom Reset\"\n image = \"zoom-original\"\n\n def perform(self, event):\n plot_component = self.container.component\n\n for overlay in plot_component.overlays:\n if isinstance(overlay, ZoomTool):\n overlay._reset_state_pressed()\n\n self.container.request_redraw()\n\n\nclass SaveAsButton(ToolbarButton):\n label = \"Save As\"\n tooltip = \"Save As\"\n image = \"document-save\"\n\n def perform(self, event):\n\n plot_component = self.container.component\n\n filter = \"PNG file (*.png)|*.png|\\nTIFF file (*.tiff)|*.tiff|\"\n dialog = FileDialog(action=\"save as\", wildcard=filter)\n\n if dialog.open() != OK:\n return\n\n # Remove the toolbar before saving the plot, so the output doesn't\n # include the toolbar.\n plot_component.remove_toolbar()\n\n filename = dialog.path\n\n width, height = plot_component.outer_bounds\n\n gc = PlotGraphicsContext((width, height), dpi=72)\n gc.render_component(plot_component)\n try:\n gc.save(filename)\n except KeyError as e:\n errmsg = (\n \"The filename must have an extension that matches \"\n \"a graphics format, such as '.png' or '.tiff'.\"\n )\n if str(e.message) != \"\":\n errmsg = (\n \"Unknown filename extension: '%s'\\n\" % str(e.message)\n ) + errmsg\n\n error(None, errmsg, title=\"Invalid Filename Extension\")\n\n # Restore the toolbar.\n plot_component.add_toolbar()\n\n\nclass CopyToClipboardButton(ToolbarButton):\n label = \"Copy Image\"\n tooltip = \"Copy to the clipboard\"\n image = \"edit-copy\"\n\n def perform(self, event):\n plot_component = self.container.component\n\n # Remove the toolbar before saving the plot, so the output doesn't\n # include the toolbar.\n plot_component.remove_toolbar()\n\n width, height = plot_component.outer_bounds\n\n gc = PlotGraphicsContext((width, height), dpi=72)\n gc.render_component(plot_component)\n\n if ETSConfig.toolkit == \"wx\":\n self._perform_wx(width, height, gc)\n else:\n pass\n\n # Restore the toolbar.\n plot_component.add_toolbar()\n\n def _perform_wx(self, width, height, gc):\n import wx\n\n bitmap = wx.BitmapFromBufferRGBA(\n width + 1, height + 1, gc.bmp_array.flatten()\n )\n data = wx.BitmapDataObject()\n data.SetBitmap(bitmap)\n if wx.TheClipboard.Open():\n wx.TheClipboard.SetData(data)\n wx.TheClipboard.Close()\n else:\n wx.MessageBox(\"Unable to open the clipboard.\", \"Error\")\n\n\nclass ExportDataToClipboardButton(ToolbarButton):\n label = \"Copy Data\"\n tooltip = \"Copy data to the clipboard\"\n image = \"application-vnd-ms-excel\"\n\n orientation = Enum(\"v\", \"h\")\n\n def perform(self, event):\n if ETSConfig.toolkit == \"wx\":\n self._perform_wx()\n elif ETSConfig.toolkit == \"qt4\":\n self._perform_qt()\n else:\n pass\n\n def _get_data_from_plots(self):\n values = []\n indices = []\n for renderers in self.container.component.plots.values():\n for renderer in renderers:\n indices.append(renderer.index.get_data())\n values.append(renderer.value.get_data())\n return indices, values\n\n def _serialize_data(self, indices, values):\n\n # if all of rows 
are the same length, use faster algorithms,\n # otherwise go element by element adding the necessary empty strings\n if len(set([len(l) for l in values])) == 1:\n data = [indices[0]] + values\n if self.orientation == \"v\":\n data = numpy.array(data).T.tolist()\n\n data_str = \"\"\n for row in data:\n data_str += \",\".join([\"%f\" % v for v in row]) + \"\\n\"\n return data_str\n\n else:\n # There might not be a single solution which fits all cases,\n # so this is left to specific implementations to override\n raise NotImplementedError()\n\n def _perform_wx(self):\n import wx\n\n indices, values = self._get_data_from_plots()\n data_str = self._serialize_data(indices, values)\n data_obj = wx.TextDataObject(data_str)\n\n if wx.TheClipboard.Open():\n wx.TheClipboard.SetData(data_obj)\n wx.TheClipboard.Close()\n else:\n wx.MessageBox(\"Unable to open the clipboard.\", \"Error\")\n\n def _perform_qt(self):\n from pyface.qt import QtGui\n\n indices, values = self._get_data_from_plots()\n data_str = self._serialize_data(indices, values)\n\n QtGui.QApplication.clipboard().setText(data_str)\n", "path": "chaco/tools/toolbars/toolbar_buttons.py"}], "after_files": [{"content": "# (C) Copyright 2005-2021 Enthought, Inc., Austin, TX\n# All rights reserved.\n#\n# This software is provided without warranty under the terms of the BSD\n# license included in LICENSE.txt and may be redistributed only under\n# the conditions described in the aforementioned license. The license\n# is also available online at http://www.enthought.com/licenses/BSD.txt\n#\n# Thanks for using Enthought open source!\n\nimport numpy\n\nfrom traits.etsconfig.api import ETSConfig\nfrom enable.tools.toolbars.toolbar_buttons import Button\nfrom chaco.tools.zoom_tool import ZoomTool\nfrom chaco.plot_graphics_context import PlotGraphicsContext\nfrom kiva.image import Image\nfrom pyface.image_resource import ImageResource\nfrom pyface.api import FileDialog, OK, error\nfrom traits.api import (\n Instance,\n Str,\n Property,\n cached_property,\n List,\n Int,\n Enum,\n)\n\n\nclass ToolbarButton(Button):\n image = Str()\n _image = Instance(Image)\n\n color = \"black\"\n\n width = Property(Int, observe=\"label, image\")\n height = Property(Int, observe=\"label, image\")\n\n # bounds are used for hit testing\n bounds = Property(List, observe=\"label, image\")\n\n def __init__(self, *args, **kw):\n super().__init__(*args, **kw)\n\n image_resource = ImageResource(self.image)\n self._image = Image(image_resource.absolute_path)\n\n @cached_property\n def _get_width(self):\n gc = PlotGraphicsContext((100, 100), dpi=72)\n gc.set_font(self.label_font)\n (w, h, descent, leading) = gc.get_full_text_extent(self.label)\n return max(self._image.width(), w)\n\n @cached_property\n def _get_height(self):\n gc = PlotGraphicsContext((100, 100), dpi=72)\n gc.set_font(self.label_font)\n (w, h, descent, leading) = gc.get_full_text_extent(self.label)\n return self._image.height() + h\n\n @cached_property\n def _get_bounds(self):\n return [self.width, self.height]\n\n def _draw_actual_button(self, gc):\n x_offset = self.x + (self.width - self._image.width()) / 2\n gc.draw_image(\n self._image,\n (x_offset, self.y + 2, self._image.width(), self._image.height()),\n )\n\n if self.label is not None and len(self.label) > 0:\n gc.set_font(self.label_font)\n\n (w, h, descent, leading) = gc.get_full_text_extent(self.label)\n if w < self.width:\n x_offset = self.x + (self.width - w) / 2\n else:\n x_offset = self.x\n\n gc.set_text_position(x_offset, self.y - 8)\n 
gc.show_text(self.label)\n\n\nclass IndexAxisLogButton(ToolbarButton):\n label = \"X Log Scale\"\n tooltip = \"Change index axis scale\"\n image = \"zoom-fit-width\"\n\n def perform(self, event):\n if self.container.component.index_scale == \"linear\":\n self.container.component.index_scale = \"log\"\n else:\n self.container.component.index_scale = \"linear\"\n self.container.request_redraw()\n\n\nclass ValueAxisLogButton(ToolbarButton):\n label = \"Y Log Scale\"\n tooltip = \"Change value axis scale\"\n image = \"zoom-fit-height\"\n\n def perform(self, event):\n if self.container.component.value_scale == \"linear\":\n self.container.component.value_scale = \"log\"\n else:\n self.container.component.value_scale = \"linear\"\n self.container.request_redraw()\n\n\nclass ZoomResetButton(ToolbarButton):\n label = \"Zoom Reset\"\n tooltip = \"Zoom Reset\"\n image = \"zoom-original\"\n\n def perform(self, event):\n plot_component = self.container.component\n\n for overlay in plot_component.overlays:\n if isinstance(overlay, ZoomTool):\n overlay._reset_state_pressed()\n\n self.container.request_redraw()\n\n\nclass SaveAsButton(ToolbarButton):\n label = \"Save As\"\n tooltip = \"Save As\"\n image = \"document-save\"\n\n def perform(self, event):\n\n plot_component = self.container.component\n\n filter = \"PNG file (*.png)|*.png|\\nTIFF file (*.tiff)|*.tiff|\"\n dialog = FileDialog(action=\"save as\", wildcard=filter)\n\n if dialog.open() != OK:\n return\n\n # Remove the toolbar before saving the plot, so the output doesn't\n # include the toolbar.\n plot_component.remove_toolbar()\n\n filename = dialog.path\n\n width, height = plot_component.outer_bounds\n\n gc = PlotGraphicsContext((width, height), dpi=72)\n gc.render_component(plot_component)\n try:\n gc.save(filename)\n except KeyError as e:\n errmsg = (\n \"The filename must have an extension that matches \"\n \"a graphics format, such as '.png' or '.tiff'.\"\n )\n if str(e.message) != \"\":\n errmsg = (\n \"Unknown filename extension: '%s'\\n\" % str(e.message)\n ) + errmsg\n\n error(None, errmsg, title=\"Invalid Filename Extension\")\n\n # Restore the toolbar.\n plot_component.add_toolbar()\n\n\nclass CopyToClipboardButton(ToolbarButton):\n label = \"Copy Image\"\n tooltip = \"Copy to the clipboard\"\n image = \"edit-copy\"\n\n def perform(self, event):\n plot_component = self.container.component\n\n # Remove the toolbar before saving the plot, so the output doesn't\n # include the toolbar.\n plot_component.remove_toolbar()\n\n width, height = plot_component.outer_bounds\n\n gc = PlotGraphicsContext((width, height), dpi=72)\n gc.render_component(plot_component)\n\n if ETSConfig.toolkit == \"wx\":\n self._perform_wx(width, height, gc)\n else:\n pass\n\n # Restore the toolbar.\n plot_component.add_toolbar()\n\n def _perform_wx(self, width, height, gc):\n import wx\n\n bitmap = wx.Bitmap.FromBufferRGBA(\n width + 1, height + 1, gc.bmp_array.flatten()\n )\n data = wx.BitmapDataObject()\n data.SetBitmap(bitmap)\n if wx.TheClipboard.Open():\n wx.TheClipboard.SetData(data)\n wx.TheClipboard.Close()\n else:\n wx.MessageBox(\"Unable to open the clipboard.\", \"Error\")\n\n\nclass ExportDataToClipboardButton(ToolbarButton):\n label = \"Copy Data\"\n tooltip = \"Copy data to the clipboard\"\n image = \"application-vnd-ms-excel\"\n\n orientation = Enum(\"v\", \"h\")\n\n def perform(self, event):\n if ETSConfig.toolkit == \"wx\":\n self._perform_wx()\n elif ETSConfig.toolkit == \"qt4\":\n self._perform_qt()\n else:\n pass\n\n def 
_get_data_from_plots(self):\n values = []\n indices = []\n for renderers in self.container.component.plots.values():\n for renderer in renderers:\n indices.append(renderer.index.get_data())\n values.append(renderer.value.get_data())\n return indices, values\n\n def _serialize_data(self, indices, values):\n\n # if all of rows are the same length, use faster algorithms,\n # otherwise go element by element adding the necessary empty strings\n if len(set([len(l) for l in values])) == 1:\n data = [indices[0]] + values\n if self.orientation == \"v\":\n data = numpy.array(data).T.tolist()\n\n data_str = \"\"\n for row in data:\n data_str += \",\".join([\"%f\" % v for v in row]) + \"\\n\"\n return data_str\n\n else:\n # There might not be a single solution which fits all cases,\n # so this is left to specific implementations to override\n raise NotImplementedError()\n\n def _perform_wx(self):\n import wx\n\n indices, values = self._get_data_from_plots()\n data_str = self._serialize_data(indices, values)\n data_obj = wx.TextDataObject(data_str)\n\n if wx.TheClipboard.Open():\n wx.TheClipboard.SetData(data_obj)\n wx.TheClipboard.Close()\n else:\n wx.MessageBox(\"Unable to open the clipboard.\", \"Error\")\n\n def _perform_qt(self):\n from pyface.qt import QtGui\n\n indices, values = self._get_data_from_plots()\n data_str = self._serialize_data(indices, values)\n\n QtGui.QApplication.clipboard().setText(data_str)\n", "path": "chaco/tools/toolbars/toolbar_buttons.py"}]} | 2,977 | 126 |
gh_patches_debug_5316 | rasdani/github-patches | git_diff | Theano__Theano-6225 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`.../c_code/dimshuffle.c` missing from setup.py?
A Lasagne user just reported that
```bash
pip install --upgrade --no-deps https://github.com/Theano/Theano/archive/master.zip
```
led to the following error (I assume after installation, when trying to import it):
```
FileNotFoundError: [Errno 2] No such file or directory: '/Users/XXXX/anaconda/lib/python3.5/site-packages/theano/tensor/c_code/dimshuffle.c
```
The file was added in #6174 and is included in the `master.zip` snapshot, but maybe it's missing in `setup.py` and thus not copied over? You wouldn't notice when doing a development/editable install. The user solved it by downloading `tensor/c_code/dimshuffle.c` and `gpuarray/c_code/dimshuffle.c` manually from github.
Disclaimer: I haven't verified this behavior myself.
Add "c_code" folders to Theano installations
To do that, I converted the current `c_code` folders to Python modules (I could not find another solution for now). This should fix #6222. I had forgotten that @abergeron warned about it in #6047.
@nouiz @lamblin
--- END ISSUE ---
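As background for the fix being discussed, here is a minimal editorial sketch (not taken from the Theano sources) of how non-Python files that live inside package directories can be declared via `package_data` so that they are copied into an installation; the `c_code/*` glob is illustrative:
```python
# Sketch of a setup.py fragment; patterns are relative to each package directory.
from setuptools import setup, find_packages

setup(
    name="example-package",
    packages=find_packages(),
    package_data={
        # The empty key applies the patterns to every package found above.
        "": ["*.c", "*.h", "*.cpp", "c_code/*"],
    },
)
```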
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 #
3 # TODO:
4 # * Figure out how to compile and install documentation automatically
5 # * Add download_url
6
7 from __future__ import absolute_import, print_function, division
8 import os
9 import subprocess
10 import codecs
11 from fnmatch import fnmatchcase
12 from distutils.util import convert_path
13 try:
14 from setuptools import setup
15 except ImportError:
16 from distutils.core import setup
17
18
19 CLASSIFIERS = """\
20 Development Status :: 4 - Beta
21 Intended Audience :: Education
22 Intended Audience :: Science/Research
23 Intended Audience :: Developers
24 License :: OSI Approved :: BSD License
25 Programming Language :: Python
26 Topic :: Software Development :: Code Generators
27 Topic :: Software Development :: Compilers
28 Topic :: Scientific/Engineering :: Mathematics
29 Operating System :: Microsoft :: Windows
30 Operating System :: POSIX
31 Operating System :: Unix
32 Operating System :: MacOS
33 Programming Language :: Python :: 2
34 Programming Language :: Python :: 2.7
35 Programming Language :: Python :: 3
36 Programming Language :: Python :: 3.4
37 Programming Language :: Python :: 3.5
38 """
39 NAME = 'Theano'
40 MAINTAINER = "LISA laboratory, University of Montreal"
41 MAINTAINER_EMAIL = "[email protected]"
42 DESCRIPTION = ('Optimizing compiler for evaluating mathematical ' +
43 'expressions on CPUs and GPUs.')
44 LONG_DESCRIPTION = (codecs.open("DESCRIPTION.txt", encoding='utf-8').read() +
45 "\n\n" + codecs.open("NEWS.txt", encoding='utf-8').read())
46 URL = "http://deeplearning.net/software/theano/"
47 DOWNLOAD_URL = ""
48 LICENSE = 'BSD'
49 CLASSIFIERS = [_f for _f in CLASSIFIERS.split('\n') if _f]
50 AUTHOR = "LISA laboratory, University of Montreal"
51 AUTHOR_EMAIL = "[email protected]"
52 PLATFORMS = ["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"]
53 MAJOR = 0
54 MINOR = 10
55 MICRO = 0
56 SUFFIX = "dev1" # Should be blank except for rc's, betas, etc.
57 ISRELEASED = False
58
59 VERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)
60
61
62 def find_packages(where='.', exclude=()):
63 out = []
64 stack = [(convert_path(where), '')]
65 while stack:
66 where, prefix = stack.pop(0)
67 for name in os.listdir(where):
68 fn = os.path.join(where, name)
69 if ('.' not in name and os.path.isdir(fn) and
70 os.path.isfile(os.path.join(fn, '__init__.py'))
71 ):
72 out.append(prefix+name)
73 stack.append((fn, prefix+name+'.'))
74 for pat in list(exclude) + ['ez_setup', 'distribute_setup']:
75 out = [item for item in out if not fnmatchcase(item, pat)]
76 return out
77
78
79 def git_version():
80 """
81 Return the sha1 of local git HEAD as a string.
82 """
83 # josharian: I doubt that the minimal environment stuff here is
84 # still needed; it is inherited. This was originally
85 # an hg_version function borrowed from NumPy's setup.py.
86 # I'm leaving it in for now because I don't have enough other
87 # environments to test in to be confident that it is safe to remove.
88 def _minimal_ext_cmd(cmd):
89 # construct minimal environment
90 env = {}
91 for k in ['SYSTEMROOT', 'PATH', 'PYTHONPATH']:
92 v = os.environ.get(k)
93 if v is not None:
94 env[k] = v
95 # LANGUAGE is used on win32
96 env['LANGUAGE'] = 'C'
97 env['LANG'] = 'C'
98 env['LC_ALL'] = 'C'
99 out = subprocess.Popen(
100 cmd,
101 stdout=subprocess.PIPE,
102 env=env
103 ).communicate()[0]
104 return out
105 try:
106 out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])
107 git_revision = out.strip().decode('ascii')
108 except OSError:
109 git_revision = "unknown-git"
110 return git_revision
111
112
113 def write_text(filename, text):
114 try:
115 with open(filename, 'w') as a:
116 a.write(text)
117 except Exception as e:
118 print(e)
119
120
121 def write_version_py(filename=os.path.join('theano', 'generated_version.py')):
122 cnt = """
123 # THIS FILE IS GENERATED FROM THEANO SETUP.PY
124 short_version = '%(version)s'
125 version = '%(version)s'
126 git_revision = '%(git_revision)s'
127 full_version = '%(version)s.dev-%%(git_revision)s' %% {
128 'git_revision': git_revision}
129 release = %(isrelease)s
130
131 if not release:
132 version = full_version
133 """
134 FULL_VERSION = VERSION
135 if os.path.isdir('.git'):
136 GIT_REVISION = git_version()
137 elif os.path.exists(filename):
138 # must be a source distribution, use existing version file
139 GIT_REVISION = "RELEASE"
140 else:
141 GIT_REVISION = "unknown-git"
142
143 FULL_VERSION += '.dev-' + GIT_REVISION
144 text = cnt % {'version': VERSION,
145 'full_version': FULL_VERSION,
146 'git_revision': GIT_REVISION,
147 'isrelease': str(ISRELEASED)}
148 write_text(filename, text)
149
150
151 def do_setup():
152 write_version_py()
153 setup(name=NAME,
154 version=VERSION,
155 description=DESCRIPTION,
156 long_description=LONG_DESCRIPTION,
157 classifiers=CLASSIFIERS,
158 author=AUTHOR,
159 author_email=AUTHOR_EMAIL,
160 url=URL,
161 license=LICENSE,
162 platforms=PLATFORMS,
163 packages=find_packages(),
164 install_requires=['numpy>=1.9.1', 'scipy>=0.14', 'six>=1.9.0'],
165 # pygments is a dependency for Sphinx code highlight
166 extras_require={
167 'test': ['nose>=1.3.0', 'parameterized', 'flake8<3'],
168 'doc': ['Sphinx>=0.5.1', 'pygments']
169 },
170 package_data={
171 '': ['*.txt', '*.rst', '*.cu', '*.cuh', '*.c', '*.sh', '*.pkl',
172 '*.h', '*.cpp', 'ChangeLog'],
173 'theano.misc': ['*.sh'],
174 'theano.d3viz' : ['html/*','css/*','js/*']
175 },
176 entry_points={
177 'console_scripts': ['theano-cache = bin.theano_cache:main',
178 'theano-nose = bin.theano_nose:main']
179 },
180 keywords=' '.join([
181 'theano', 'math', 'numerical', 'symbolic', 'blas',
182 'numpy', 'gpu', 'autodiff', 'differentiation'
183 ]),
184 )
185 if __name__ == "__main__":
186 do_setup()
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -169,7 +169,7 @@
},
package_data={
'': ['*.txt', '*.rst', '*.cu', '*.cuh', '*.c', '*.sh', '*.pkl',
- '*.h', '*.cpp', 'ChangeLog'],
+ '*.h', '*.cpp', 'ChangeLog', 'c_code/*'],
'theano.misc': ['*.sh'],
'theano.d3viz' : ['html/*','css/*','js/*']
},
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -169,7 +169,7 @@\n },\n package_data={\n '': ['*.txt', '*.rst', '*.cu', '*.cuh', '*.c', '*.sh', '*.pkl',\n- '*.h', '*.cpp', 'ChangeLog'],\n+ '*.h', '*.cpp', 'ChangeLog', 'c_code/*'],\n 'theano.misc': ['*.sh'],\n 'theano.d3viz' : ['html/*','css/*','js/*']\n },\n", "issue": "`.../c_code/dimshuffle.c` missing from setup.py?\nA Lasagne user just reported that\r\n```bash\r\npip install --upgrade --no-deps https://github.com/Theano/Theano/archive/master.zip\r\n```\r\nlead to the following error (I assume after installation, when trying to import it):\r\n```\r\nFileNotFoundError: [Errno 2] No such file or directory: '/Users/XXXX/anaconda/lib/python3.5/site-packages/theano/tensor/c_code/dimshuffle.c\r\n```\r\nThe file was added in #6174 and is included in the `master.zip` snapshot, but maybe it's missing in `setup.py` and thus not copied over? You wouldn't notice when doing a development/editable install. The user solved it by downloading `tensor/c_code/dimshuffle.c` and `gpuarray/c_code/dimshuffle.c` manually from github.\r\n\r\nDisclaimer: I haven't verified this behavior myself.\nAdd \"c_code\" folders to Theano installations\nTo do that, I converted current `c_code` folders to python modules (I don't find other solution, currently). This should fix #6222 . I had forgotten that @abergeron warned about it in #6047.\r\n\r\n@nouiz @lamblin \n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# TODO:\n# * Figure out how to compile and install documentation automatically\n# * Add download_url\n\nfrom __future__ import absolute_import, print_function, division\nimport os\nimport subprocess\nimport codecs\nfrom fnmatch import fnmatchcase\nfrom distutils.util import convert_path\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n\nCLASSIFIERS = \"\"\"\\\nDevelopment Status :: 4 - Beta\nIntended Audience :: Education\nIntended Audience :: Science/Research\nIntended Audience :: Developers\nLicense :: OSI Approved :: BSD License\nProgramming Language :: Python\nTopic :: Software Development :: Code Generators\nTopic :: Software Development :: Compilers\nTopic :: Scientific/Engineering :: Mathematics\nOperating System :: Microsoft :: Windows\nOperating System :: POSIX\nOperating System :: Unix\nOperating System :: MacOS\nProgramming Language :: Python :: 2\nProgramming Language :: Python :: 2.7\nProgramming Language :: Python :: 3\nProgramming Language :: Python :: 3.4\nProgramming Language :: Python :: 3.5\n\"\"\"\nNAME = 'Theano'\nMAINTAINER = \"LISA laboratory, University of Montreal\"\nMAINTAINER_EMAIL = \"[email protected]\"\nDESCRIPTION = ('Optimizing compiler for evaluating mathematical ' +\n 'expressions on CPUs and GPUs.')\nLONG_DESCRIPTION = (codecs.open(\"DESCRIPTION.txt\", encoding='utf-8').read() +\n \"\\n\\n\" + codecs.open(\"NEWS.txt\", encoding='utf-8').read())\nURL = \"http://deeplearning.net/software/theano/\"\nDOWNLOAD_URL = \"\"\nLICENSE = 'BSD'\nCLASSIFIERS = [_f for _f in CLASSIFIERS.split('\\n') if _f]\nAUTHOR = \"LISA laboratory, University of Montreal\"\nAUTHOR_EMAIL = \"[email protected]\"\nPLATFORMS = [\"Windows\", \"Linux\", \"Solaris\", \"Mac OS-X\", \"Unix\"]\nMAJOR = 0\nMINOR = 10\nMICRO = 0\nSUFFIX = \"dev1\" # Should be blank except for rc's, betas, etc.\nISRELEASED = False\n\nVERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)\n\n\ndef find_packages(where='.', exclude=()):\n out = []\n stack = [(convert_path(where), '')]\n while 
stack:\n where, prefix = stack.pop(0)\n for name in os.listdir(where):\n fn = os.path.join(where, name)\n if ('.' not in name and os.path.isdir(fn) and\n os.path.isfile(os.path.join(fn, '__init__.py'))\n ):\n out.append(prefix+name)\n stack.append((fn, prefix+name+'.'))\n for pat in list(exclude) + ['ez_setup', 'distribute_setup']:\n out = [item for item in out if not fnmatchcase(item, pat)]\n return out\n\n\ndef git_version():\n \"\"\"\n Return the sha1 of local git HEAD as a string.\n \"\"\"\n # josharian: I doubt that the minimal environment stuff here is\n # still needed; it is inherited. This was originally\n # an hg_version function borrowed from NumPy's setup.py.\n # I'm leaving it in for now because I don't have enough other\n # environments to test in to be confident that it is safe to remove.\n def _minimal_ext_cmd(cmd):\n # construct minimal environment\n env = {}\n for k in ['SYSTEMROOT', 'PATH', 'PYTHONPATH']:\n v = os.environ.get(k)\n if v is not None:\n env[k] = v\n # LANGUAGE is used on win32\n env['LANGUAGE'] = 'C'\n env['LANG'] = 'C'\n env['LC_ALL'] = 'C'\n out = subprocess.Popen(\n cmd,\n stdout=subprocess.PIPE,\n env=env\n ).communicate()[0]\n return out\n try:\n out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])\n git_revision = out.strip().decode('ascii')\n except OSError:\n git_revision = \"unknown-git\"\n return git_revision\n\n\ndef write_text(filename, text):\n try:\n with open(filename, 'w') as a:\n a.write(text)\n except Exception as e:\n print(e)\n\n\ndef write_version_py(filename=os.path.join('theano', 'generated_version.py')):\n cnt = \"\"\"\n# THIS FILE IS GENERATED FROM THEANO SETUP.PY\nshort_version = '%(version)s'\nversion = '%(version)s'\ngit_revision = '%(git_revision)s'\nfull_version = '%(version)s.dev-%%(git_revision)s' %% {\n 'git_revision': git_revision}\nrelease = %(isrelease)s\n\nif not release:\n version = full_version\n\"\"\"\n FULL_VERSION = VERSION\n if os.path.isdir('.git'):\n GIT_REVISION = git_version()\n elif os.path.exists(filename):\n # must be a source distribution, use existing version file\n GIT_REVISION = \"RELEASE\"\n else:\n GIT_REVISION = \"unknown-git\"\n\n FULL_VERSION += '.dev-' + GIT_REVISION\n text = cnt % {'version': VERSION,\n 'full_version': FULL_VERSION,\n 'git_revision': GIT_REVISION,\n 'isrelease': str(ISRELEASED)}\n write_text(filename, text)\n\n\ndef do_setup():\n write_version_py()\n setup(name=NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n classifiers=CLASSIFIERS,\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n url=URL,\n license=LICENSE,\n platforms=PLATFORMS,\n packages=find_packages(),\n install_requires=['numpy>=1.9.1', 'scipy>=0.14', 'six>=1.9.0'],\n # pygments is a dependency for Sphinx code highlight\n extras_require={\n 'test': ['nose>=1.3.0', 'parameterized', 'flake8<3'],\n 'doc': ['Sphinx>=0.5.1', 'pygments']\n },\n package_data={\n '': ['*.txt', '*.rst', '*.cu', '*.cuh', '*.c', '*.sh', '*.pkl',\n '*.h', '*.cpp', 'ChangeLog'],\n 'theano.misc': ['*.sh'],\n 'theano.d3viz' : ['html/*','css/*','js/*']\n },\n entry_points={\n 'console_scripts': ['theano-cache = bin.theano_cache:main',\n 'theano-nose = bin.theano_nose:main']\n },\n keywords=' '.join([\n 'theano', 'math', 'numerical', 'symbolic', 'blas',\n 'numpy', 'gpu', 'autodiff', 'differentiation'\n ]),\n )\nif __name__ == \"__main__\":\n do_setup()\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n#\n# TODO:\n# * Figure out how to compile and install documentation automatically\n# * 
Add download_url\n\nfrom __future__ import absolute_import, print_function, division\nimport os\nimport subprocess\nimport codecs\nfrom fnmatch import fnmatchcase\nfrom distutils.util import convert_path\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n\nCLASSIFIERS = \"\"\"\\\nDevelopment Status :: 4 - Beta\nIntended Audience :: Education\nIntended Audience :: Science/Research\nIntended Audience :: Developers\nLicense :: OSI Approved :: BSD License\nProgramming Language :: Python\nTopic :: Software Development :: Code Generators\nTopic :: Software Development :: Compilers\nTopic :: Scientific/Engineering :: Mathematics\nOperating System :: Microsoft :: Windows\nOperating System :: POSIX\nOperating System :: Unix\nOperating System :: MacOS\nProgramming Language :: Python :: 2\nProgramming Language :: Python :: 2.7\nProgramming Language :: Python :: 3\nProgramming Language :: Python :: 3.4\nProgramming Language :: Python :: 3.5\n\"\"\"\nNAME = 'Theano'\nMAINTAINER = \"LISA laboratory, University of Montreal\"\nMAINTAINER_EMAIL = \"[email protected]\"\nDESCRIPTION = ('Optimizing compiler for evaluating mathematical ' +\n 'expressions on CPUs and GPUs.')\nLONG_DESCRIPTION = (codecs.open(\"DESCRIPTION.txt\", encoding='utf-8').read() +\n \"\\n\\n\" + codecs.open(\"NEWS.txt\", encoding='utf-8').read())\nURL = \"http://deeplearning.net/software/theano/\"\nDOWNLOAD_URL = \"\"\nLICENSE = 'BSD'\nCLASSIFIERS = [_f for _f in CLASSIFIERS.split('\\n') if _f]\nAUTHOR = \"LISA laboratory, University of Montreal\"\nAUTHOR_EMAIL = \"[email protected]\"\nPLATFORMS = [\"Windows\", \"Linux\", \"Solaris\", \"Mac OS-X\", \"Unix\"]\nMAJOR = 0\nMINOR = 10\nMICRO = 0\nSUFFIX = \"dev1\" # Should be blank except for rc's, betas, etc.\nISRELEASED = False\n\nVERSION = '%d.%d.%d%s' % (MAJOR, MINOR, MICRO, SUFFIX)\n\n\ndef find_packages(where='.', exclude=()):\n out = []\n stack = [(convert_path(where), '')]\n while stack:\n where, prefix = stack.pop(0)\n for name in os.listdir(where):\n fn = os.path.join(where, name)\n if ('.' not in name and os.path.isdir(fn) and\n os.path.isfile(os.path.join(fn, '__init__.py'))\n ):\n out.append(prefix+name)\n stack.append((fn, prefix+name+'.'))\n for pat in list(exclude) + ['ez_setup', 'distribute_setup']:\n out = [item for item in out if not fnmatchcase(item, pat)]\n return out\n\n\ndef git_version():\n \"\"\"\n Return the sha1 of local git HEAD as a string.\n \"\"\"\n # josharian: I doubt that the minimal environment stuff here is\n # still needed; it is inherited. 
This was originally\n # an hg_version function borrowed from NumPy's setup.py.\n # I'm leaving it in for now because I don't have enough other\n # environments to test in to be confident that it is safe to remove.\n def _minimal_ext_cmd(cmd):\n # construct minimal environment\n env = {}\n for k in ['SYSTEMROOT', 'PATH', 'PYTHONPATH']:\n v = os.environ.get(k)\n if v is not None:\n env[k] = v\n # LANGUAGE is used on win32\n env['LANGUAGE'] = 'C'\n env['LANG'] = 'C'\n env['LC_ALL'] = 'C'\n out = subprocess.Popen(\n cmd,\n stdout=subprocess.PIPE,\n env=env\n ).communicate()[0]\n return out\n try:\n out = _minimal_ext_cmd(['git', 'rev-parse', 'HEAD'])\n git_revision = out.strip().decode('ascii')\n except OSError:\n git_revision = \"unknown-git\"\n return git_revision\n\n\ndef write_text(filename, text):\n try:\n with open(filename, 'w') as a:\n a.write(text)\n except Exception as e:\n print(e)\n\n\ndef write_version_py(filename=os.path.join('theano', 'generated_version.py')):\n cnt = \"\"\"\n# THIS FILE IS GENERATED FROM THEANO SETUP.PY\nshort_version = '%(version)s'\nversion = '%(version)s'\ngit_revision = '%(git_revision)s'\nfull_version = '%(version)s.dev-%%(git_revision)s' %% {\n 'git_revision': git_revision}\nrelease = %(isrelease)s\n\nif not release:\n version = full_version\n\"\"\"\n FULL_VERSION = VERSION\n if os.path.isdir('.git'):\n GIT_REVISION = git_version()\n elif os.path.exists(filename):\n # must be a source distribution, use existing version file\n GIT_REVISION = \"RELEASE\"\n else:\n GIT_REVISION = \"unknown-git\"\n\n FULL_VERSION += '.dev-' + GIT_REVISION\n text = cnt % {'version': VERSION,\n 'full_version': FULL_VERSION,\n 'git_revision': GIT_REVISION,\n 'isrelease': str(ISRELEASED)}\n write_text(filename, text)\n\n\ndef do_setup():\n write_version_py()\n setup(name=NAME,\n version=VERSION,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n classifiers=CLASSIFIERS,\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n url=URL,\n license=LICENSE,\n platforms=PLATFORMS,\n packages=find_packages(),\n install_requires=['numpy>=1.9.1', 'scipy>=0.14', 'six>=1.9.0'],\n # pygments is a dependency for Sphinx code highlight\n extras_require={\n 'test': ['nose>=1.3.0', 'parameterized', 'flake8<3'],\n 'doc': ['Sphinx>=0.5.1', 'pygments']\n },\n package_data={\n '': ['*.txt', '*.rst', '*.cu', '*.cuh', '*.c', '*.sh', '*.pkl',\n '*.h', '*.cpp', 'ChangeLog', 'c_code/*'],\n 'theano.misc': ['*.sh'],\n 'theano.d3viz' : ['html/*','css/*','js/*']\n },\n entry_points={\n 'console_scripts': ['theano-cache = bin.theano_cache:main',\n 'theano-nose = bin.theano_nose:main']\n },\n keywords=' '.join([\n 'theano', 'math', 'numerical', 'symbolic', 'blas',\n 'numpy', 'gpu', 'autodiff', 'differentiation'\n ]),\n )\nif __name__ == \"__main__\":\n do_setup()\n", "path": "setup.py"}]} | 2,493 | 127 |
gh_patches_debug_3653 | rasdani/github-patches | git_diff | pypa__pip-9569 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Set encoding for tar file and use unicode path for unpacking
When tarfile.TarFile decodes filenames on Python 2.7, it uses
sys.getfilesystemencoding() by default. On Windows this returns "mbcs", which is
lossy when converting from proper utf-8 to bytes (out-of-range characters come
out as '?').
We now pass an encoding to tarfile.open which will be used instead.
Since the encoding argument is only ever used for the PAX format, and
since the PAX format guarantees utf-8 encoded information, this should
work in all circumstances.
For filesystem APIs in Python 2, the type of the path object passed
dictates the underlying Windows API that is called. For `str` it is the
`*A` (for ANSI) APIs. For `unicode` it is the `*W` (for Wide character)
APIs. To use the second set of APIs, which properly handles unicode
filenames, we convert the byte path to utf-8.
Fixes #7667.
Filename encoding error in some environments with PAX sdist
**Environment**
* pip version: any
* Python version: 2.7
* OS: Windows, non-Windows in C locale
(pip Windows CI hits this)
**Description**
The PAX format wheel 0.34.1 sdists fail to install on Python 2.7 on Windows with a UnicodeEncodeError, or on non-Windows systems in a non-utf-8 locale: https://github.com/pypa/wheel/issues/331
**Expected behavior**
Unicode filename from the PAX tarball is correctly encoded for the local filesystem.
**How to Reproduce**
Attempt to install a PAX formatted tarball containing a file name that cannot be encoded to the default code page (Windows) or the default locale encoding (non-Windows).
In GNU tar, the affected paths are pre-mangled to something ASCII compatible, but PAX tar preserves them correctly, so the installer needs to handle them itself.
**Output**
See
https://dev.azure.com/pypa/pip/_build/results?buildId=18040&view=logs&j=404e6841-f5ba-57d9-f2c8-8c5322057572&t=0219f6bf-240d-5b08-c877-377b12af5079&l=309 for a Windows example in the pip test suite.
The wheel issue linked above has some Linux examples.
--- END ISSUE ---
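Before the code listing, here is a minimal editorial sketch (an assumption-laden illustration, not the actual pip change) of the two ideas described above for Python 2.7: give `tarfile.open` an explicit UTF-8 encoding for PAX member names, and hand the filesystem a `unicode` path so the wide-character Windows APIs are used:
```python
import tarfile

def open_pax_tarball(filename, mode="r:gz"):
    # PAX headers are defined to be UTF-8, so decode member names explicitly
    # instead of relying on sys.getfilesystemencoding() ("mbcs" on Windows).
    return tarfile.open(filename, mode, encoding="utf-8")

def as_unicode_path(path):
    # On Python 2, byte paths select the ANSI (*A) Windows APIs; a unicode path
    # selects the wide-character (*W) APIs, which preserve non-ANSI filenames.
    if isinstance(path, bytes):
        return path.decode("utf-8")
    return path
```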
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pip/_internal/utils/unpacking.py`
Content:
```
1 """Utilities related archives.
2 """
3
4 import logging
5 import os
6 import shutil
7 import stat
8 import tarfile
9 import zipfile
10 from typing import Iterable, List, Optional
11 from zipfile import ZipInfo
12
13 from pip._internal.exceptions import InstallationError
14 from pip._internal.utils.filetypes import (
15 BZ2_EXTENSIONS,
16 TAR_EXTENSIONS,
17 XZ_EXTENSIONS,
18 ZIP_EXTENSIONS,
19 )
20 from pip._internal.utils.misc import ensure_dir
21
22 logger = logging.getLogger(__name__)
23
24
25 SUPPORTED_EXTENSIONS = ZIP_EXTENSIONS + TAR_EXTENSIONS
26
27 try:
28 import bz2 # noqa
29
30 SUPPORTED_EXTENSIONS += BZ2_EXTENSIONS
31 except ImportError:
32 logger.debug("bz2 module is not available")
33
34 try:
35 # Only for Python 3.3+
36 import lzma # noqa
37
38 SUPPORTED_EXTENSIONS += XZ_EXTENSIONS
39 except ImportError:
40 logger.debug("lzma module is not available")
41
42
43 def current_umask():
44 # type: () -> int
45 """Get the current umask which involves having to set it temporarily."""
46 mask = os.umask(0)
47 os.umask(mask)
48 return mask
49
50
51 def split_leading_dir(path):
52 # type: (str) -> List[str]
53 path = path.lstrip("/").lstrip("\\")
54 if "/" in path and (
55 ("\\" in path and path.find("/") < path.find("\\")) or "\\" not in path
56 ):
57 return path.split("/", 1)
58 elif "\\" in path:
59 return path.split("\\", 1)
60 else:
61 return [path, ""]
62
63
64 def has_leading_dir(paths):
65 # type: (Iterable[str]) -> bool
66 """Returns true if all the paths have the same leading path name
67 (i.e., everything is in one subdirectory in an archive)"""
68 common_prefix = None
69 for path in paths:
70 prefix, rest = split_leading_dir(path)
71 if not prefix:
72 return False
73 elif common_prefix is None:
74 common_prefix = prefix
75 elif prefix != common_prefix:
76 return False
77 return True
78
79
80 def is_within_directory(directory, target):
81 # type: (str, str) -> bool
82 """
83 Return true if the absolute path of target is within the directory
84 """
85 abs_directory = os.path.abspath(directory)
86 abs_target = os.path.abspath(target)
87
88 prefix = os.path.commonprefix([abs_directory, abs_target])
89 return prefix == abs_directory
90
91
92 def set_extracted_file_to_default_mode_plus_executable(path):
93 # type: (str) -> None
94 """
95 Make file present at path have execute for user/group/world
96 (chmod +x) is no-op on windows per python docs
97 """
98 os.chmod(path, (0o777 & ~current_umask() | 0o111))
99
100
101 def zip_item_is_executable(info):
102 # type: (ZipInfo) -> bool
103 mode = info.external_attr >> 16
104 # if mode and regular file and any execute permissions for
105 # user/group/world?
106 return bool(mode and stat.S_ISREG(mode) and mode & 0o111)
107
108
109 def unzip_file(filename, location, flatten=True):
110 # type: (str, str, bool) -> None
111 """
112 Unzip the file (with path `filename`) to the destination `location`. All
113 files are written based on system defaults and umask (i.e. permissions are
114 not preserved), except that regular file members with any execute
115 permissions (user, group, or world) have "chmod +x" applied after being
116 written. Note that for windows, any execute changes using os.chmod are
117 no-ops per the python docs.
118 """
119 ensure_dir(location)
120 zipfp = open(filename, "rb")
121 try:
122 zip = zipfile.ZipFile(zipfp, allowZip64=True)
123 leading = has_leading_dir(zip.namelist()) and flatten
124 for info in zip.infolist():
125 name = info.filename
126 fn = name
127 if leading:
128 fn = split_leading_dir(name)[1]
129 fn = os.path.join(location, fn)
130 dir = os.path.dirname(fn)
131 if not is_within_directory(location, fn):
132 message = (
133 "The zip file ({}) has a file ({}) trying to install "
134 "outside target directory ({})"
135 )
136 raise InstallationError(message.format(filename, fn, location))
137 if fn.endswith("/") or fn.endswith("\\"):
138 # A directory
139 ensure_dir(fn)
140 else:
141 ensure_dir(dir)
142 # Don't use read() to avoid allocating an arbitrarily large
143 # chunk of memory for the file's content
144 fp = zip.open(name)
145 try:
146 with open(fn, "wb") as destfp:
147 shutil.copyfileobj(fp, destfp)
148 finally:
149 fp.close()
150 if zip_item_is_executable(info):
151 set_extracted_file_to_default_mode_plus_executable(fn)
152 finally:
153 zipfp.close()
154
155
156 def untar_file(filename, location):
157 # type: (str, str) -> None
158 """
159 Untar the file (with path `filename`) to the destination `location`.
160 All files are written based on system defaults and umask (i.e. permissions
161 are not preserved), except that regular file members with any execute
162 permissions (user, group, or world) have "chmod +x" applied after being
163 written. Note that for windows, any execute changes using os.chmod are
164 no-ops per the python docs.
165 """
166 ensure_dir(location)
167 if filename.lower().endswith(".gz") or filename.lower().endswith(".tgz"):
168 mode = "r:gz"
169 elif filename.lower().endswith(BZ2_EXTENSIONS):
170 mode = "r:bz2"
171 elif filename.lower().endswith(XZ_EXTENSIONS):
172 mode = "r:xz"
173 elif filename.lower().endswith(".tar"):
174 mode = "r"
175 else:
176 logger.warning(
177 "Cannot determine compression type for file %s",
178 filename,
179 )
180 mode = "r:*"
181 tar = tarfile.open(filename, mode)
182 try:
183 leading = has_leading_dir([member.name for member in tar.getmembers()])
184 for member in tar.getmembers():
185 fn = member.name
186 if leading:
187 fn = split_leading_dir(fn)[1]
188 path = os.path.join(location, fn)
189 if not is_within_directory(location, path):
190 message = (
191 "The tar file ({}) has a file ({}) trying to install "
192 "outside target directory ({})"
193 )
194 raise InstallationError(message.format(filename, path, location))
195 if member.isdir():
196 ensure_dir(path)
197 elif member.issym():
198 try:
199 # https://github.com/python/typeshed/issues/2673
200 tar._extract_member(member, path) # type: ignore
201 except Exception as exc:
202 # Some corrupt tar files seem to produce this
203 # (specifically bad symlinks)
204 logger.warning(
205 "In the tar file %s the member %s is invalid: %s",
206 filename,
207 member.name,
208 exc,
209 )
210 continue
211 else:
212 try:
213 fp = tar.extractfile(member)
214 except (KeyError, AttributeError) as exc:
215 # Some corrupt tar files seem to produce this
216 # (specifically bad symlinks)
217 logger.warning(
218 "In the tar file %s the member %s is invalid: %s",
219 filename,
220 member.name,
221 exc,
222 )
223 continue
224 ensure_dir(os.path.dirname(path))
225 assert fp is not None
226 with open(path, "wb") as destfp:
227 shutil.copyfileobj(fp, destfp)
228 fp.close()
229 # Update the timestamp (useful for cython compiled files)
230 tar.utime(member, path)
231 # member have any execute permissions for user/group/world?
232 if member.mode & 0o111:
233 set_extracted_file_to_default_mode_plus_executable(path)
234 finally:
235 tar.close()
236
237
238 def unpack_file(
239 filename, # type: str
240 location, # type: str
241 content_type=None, # type: Optional[str]
242 ):
243 # type: (...) -> None
244 filename = os.path.realpath(filename)
245 if (
246 content_type == "application/zip"
247 or filename.lower().endswith(ZIP_EXTENSIONS)
248 or zipfile.is_zipfile(filename)
249 ):
250 unzip_file(filename, location, flatten=not filename.endswith(".whl"))
251 elif (
252 content_type == "application/x-gzip"
253 or tarfile.is_tarfile(filename)
254 or filename.lower().endswith(TAR_EXTENSIONS + BZ2_EXTENSIONS + XZ_EXTENSIONS)
255 ):
256 untar_file(filename, location)
257 else:
258 # FIXME: handle?
259 # FIXME: magic signatures?
260 logger.critical(
261 "Cannot unpack file %s (downloaded from %s, content-type: %s); "
262 "cannot detect archive format",
263 filename,
264 location,
265 content_type,
266 )
267 raise InstallationError(f"Cannot determine archive format of {location}")
268
```
--- END FILES ---
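As a quick aside on the permission arithmetic used in `set_extracted_file_to_default_mode_plus_executable` above: the expression `0o777 & ~current_umask() | 0o111` applies the default permissions minus the umask, then forces the execute bits on. A minimal worked example, assuming a common umask of `0o022` (the real value depends on the environment):

```python
# Illustrative only: 0o022 stands in for whatever current_umask() returns.
umask = 0o022

# Default permissions masked by the umask, then execute bits forced on.
mode = 0o777 & ~umask | 0o111
print(oct(mode))  # 0o755 -> rwxr-xr-x
```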
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pip/_internal/utils/unpacking.py b/src/pip/_internal/utils/unpacking.py
--- a/src/pip/_internal/utils/unpacking.py
+++ b/src/pip/_internal/utils/unpacking.py
@@ -178,7 +178,7 @@
filename,
)
mode = "r:*"
- tar = tarfile.open(filename, mode)
+ tar = tarfile.open(filename, mode, encoding="utf-8")
try:
leading = has_leading_dir([member.name for member in tar.getmembers()])
for member in tar.getmembers():
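To illustrate the one-line fix above outside of pip: passing an explicit `encoding` to `tarfile.open` makes member names decode as UTF-8 instead of the platform default (which, per the issue, could be a lossy codec such as mbcs on Python 2/Windows). This is a minimal sketch, not the patched pip code, and `example.tar` is a hypothetical archive path:

```python
import tarfile

# Hypothetical archive used only for illustration.
archive = "example.tar"

# Mirror the patched untar_file(): decode member names as UTF-8, which is
# what the PAX format guarantees, rather than the platform default encoding.
with tarfile.open(archive, "r:*", encoding="utf-8") as tar:
    for member in tar.getmembers():
        print(member.name)
```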
| {"golden_diff": "diff --git a/src/pip/_internal/utils/unpacking.py b/src/pip/_internal/utils/unpacking.py\n--- a/src/pip/_internal/utils/unpacking.py\n+++ b/src/pip/_internal/utils/unpacking.py\n@@ -178,7 +178,7 @@\n filename,\n )\n mode = \"r:*\"\n- tar = tarfile.open(filename, mode)\n+ tar = tarfile.open(filename, mode, encoding=\"utf-8\")\n try:\n leading = has_leading_dir([member.name for member in tar.getmembers()])\n for member in tar.getmembers():\n", "issue": "Set encoding for tar file and use unicode path for unpacking\nWhen tarfile.TarFile decodes filenames in Python 2.7 by default it uses\r\nsys.getfilesystemencoding. On Windows this returns \"mbcs\", which is\r\nlossy when converting from proper utf-8 to bytes (results in '?' for out\r\nof range characters).\r\n\r\nWe now pass an encoding to tarfile.open which will be used instead.\r\nSince the encoding argument is only ever used for the PAX format, and\r\nsince the PAX format guarantees utf-8 encoded information, this should\r\nwork in all circumstances.\r\n\r\nFor filesystem APIs in Python 2, the type of the path object passed\r\ndictates the underlying Windows API that is called. For `str` it is the\r\n`*A` (for ANSI) APIs. For `unicode` it is the `*W` (for Wide character)\r\nAPIs. To use the second set of APIs, which properly handles unicode\r\nfilenames, we convert the byte path to utf-8.\r\n\r\nFixes #7667.\nFilename encoding error in some environments with PAX sdist\n**Environment**\r\n\r\n* pip version: any\r\n* Python version: 2.7\r\n* OS: Windows, non-Windows in C locale\r\n\r\n(pip Windows CI hits this)\r\n\r\n**Description**\r\nThe PAX format wheel 0.34.1 sdists fail to install on Python 2.7 on Windows with a UnicodeEncodeError, or on non-Windows systems in a non-utf-8 locale: https://github.com/pypa/wheel/issues/331\r\n\r\n**Expected behavior**\r\nUnicode filename from the PAX tarball is correctly encoded for the local filesystem.\r\n\r\n**How to Reproduce**\r\nAttempt to install a PAX formatted tarball containing a file name that cannot be encoded to the default code page (Windows) or the default locale encoding (non-Windows).\r\n\r\nIn GNU tar, the affected paths are pre-mangled to something ASCII compatible, but PAX tar preserves them correctly, so the installer needs to handle them itself.\r\n\r\n**Output**\r\n\r\nSee \r\nhttps://dev.azure.com/pypa/pip/_build/results?buildId=18040&view=logs&j=404e6841-f5ba-57d9-f2c8-8c5322057572&t=0219f6bf-240d-5b08-c877-377b12af5079&l=309 for a Windows example in the pip test suite.\r\n\r\nThe wheel issue linked above has some Linux examples.\n", "before_files": [{"content": "\"\"\"Utilities related archives.\n\"\"\"\n\nimport logging\nimport os\nimport shutil\nimport stat\nimport tarfile\nimport zipfile\nfrom typing import Iterable, List, Optional\nfrom zipfile import ZipInfo\n\nfrom pip._internal.exceptions import InstallationError\nfrom pip._internal.utils.filetypes import (\n BZ2_EXTENSIONS,\n TAR_EXTENSIONS,\n XZ_EXTENSIONS,\n ZIP_EXTENSIONS,\n)\nfrom pip._internal.utils.misc import ensure_dir\n\nlogger = logging.getLogger(__name__)\n\n\nSUPPORTED_EXTENSIONS = ZIP_EXTENSIONS + TAR_EXTENSIONS\n\ntry:\n import bz2 # noqa\n\n SUPPORTED_EXTENSIONS += BZ2_EXTENSIONS\nexcept ImportError:\n logger.debug(\"bz2 module is not available\")\n\ntry:\n # Only for Python 3.3+\n import lzma # noqa\n\n SUPPORTED_EXTENSIONS += XZ_EXTENSIONS\nexcept ImportError:\n logger.debug(\"lzma module is not available\")\n\n\ndef current_umask():\n # type: () -> int\n \"\"\"Get the current umask 
which involves having to set it temporarily.\"\"\"\n mask = os.umask(0)\n os.umask(mask)\n return mask\n\n\ndef split_leading_dir(path):\n # type: (str) -> List[str]\n path = path.lstrip(\"/\").lstrip(\"\\\\\")\n if \"/\" in path and (\n (\"\\\\\" in path and path.find(\"/\") < path.find(\"\\\\\")) or \"\\\\\" not in path\n ):\n return path.split(\"/\", 1)\n elif \"\\\\\" in path:\n return path.split(\"\\\\\", 1)\n else:\n return [path, \"\"]\n\n\ndef has_leading_dir(paths):\n # type: (Iterable[str]) -> bool\n \"\"\"Returns true if all the paths have the same leading path name\n (i.e., everything is in one subdirectory in an archive)\"\"\"\n common_prefix = None\n for path in paths:\n prefix, rest = split_leading_dir(path)\n if not prefix:\n return False\n elif common_prefix is None:\n common_prefix = prefix\n elif prefix != common_prefix:\n return False\n return True\n\n\ndef is_within_directory(directory, target):\n # type: (str, str) -> bool\n \"\"\"\n Return true if the absolute path of target is within the directory\n \"\"\"\n abs_directory = os.path.abspath(directory)\n abs_target = os.path.abspath(target)\n\n prefix = os.path.commonprefix([abs_directory, abs_target])\n return prefix == abs_directory\n\n\ndef set_extracted_file_to_default_mode_plus_executable(path):\n # type: (str) -> None\n \"\"\"\n Make file present at path have execute for user/group/world\n (chmod +x) is no-op on windows per python docs\n \"\"\"\n os.chmod(path, (0o777 & ~current_umask() | 0o111))\n\n\ndef zip_item_is_executable(info):\n # type: (ZipInfo) -> bool\n mode = info.external_attr >> 16\n # if mode and regular file and any execute permissions for\n # user/group/world?\n return bool(mode and stat.S_ISREG(mode) and mode & 0o111)\n\n\ndef unzip_file(filename, location, flatten=True):\n # type: (str, str, bool) -> None\n \"\"\"\n Unzip the file (with path `filename`) to the destination `location`. All\n files are written based on system defaults and umask (i.e. permissions are\n not preserved), except that regular file members with any execute\n permissions (user, group, or world) have \"chmod +x\" applied after being\n written. Note that for windows, any execute changes using os.chmod are\n no-ops per the python docs.\n \"\"\"\n ensure_dir(location)\n zipfp = open(filename, \"rb\")\n try:\n zip = zipfile.ZipFile(zipfp, allowZip64=True)\n leading = has_leading_dir(zip.namelist()) and flatten\n for info in zip.infolist():\n name = info.filename\n fn = name\n if leading:\n fn = split_leading_dir(name)[1]\n fn = os.path.join(location, fn)\n dir = os.path.dirname(fn)\n if not is_within_directory(location, fn):\n message = (\n \"The zip file ({}) has a file ({}) trying to install \"\n \"outside target directory ({})\"\n )\n raise InstallationError(message.format(filename, fn, location))\n if fn.endswith(\"/\") or fn.endswith(\"\\\\\"):\n # A directory\n ensure_dir(fn)\n else:\n ensure_dir(dir)\n # Don't use read() to avoid allocating an arbitrarily large\n # chunk of memory for the file's content\n fp = zip.open(name)\n try:\n with open(fn, \"wb\") as destfp:\n shutil.copyfileobj(fp, destfp)\n finally:\n fp.close()\n if zip_item_is_executable(info):\n set_extracted_file_to_default_mode_plus_executable(fn)\n finally:\n zipfp.close()\n\n\ndef untar_file(filename, location):\n # type: (str, str) -> None\n \"\"\"\n Untar the file (with path `filename`) to the destination `location`.\n All files are written based on system defaults and umask (i.e. 
permissions\n are not preserved), except that regular file members with any execute\n permissions (user, group, or world) have \"chmod +x\" applied after being\n written. Note that for windows, any execute changes using os.chmod are\n no-ops per the python docs.\n \"\"\"\n ensure_dir(location)\n if filename.lower().endswith(\".gz\") or filename.lower().endswith(\".tgz\"):\n mode = \"r:gz\"\n elif filename.lower().endswith(BZ2_EXTENSIONS):\n mode = \"r:bz2\"\n elif filename.lower().endswith(XZ_EXTENSIONS):\n mode = \"r:xz\"\n elif filename.lower().endswith(\".tar\"):\n mode = \"r\"\n else:\n logger.warning(\n \"Cannot determine compression type for file %s\",\n filename,\n )\n mode = \"r:*\"\n tar = tarfile.open(filename, mode)\n try:\n leading = has_leading_dir([member.name for member in tar.getmembers()])\n for member in tar.getmembers():\n fn = member.name\n if leading:\n fn = split_leading_dir(fn)[1]\n path = os.path.join(location, fn)\n if not is_within_directory(location, path):\n message = (\n \"The tar file ({}) has a file ({}) trying to install \"\n \"outside target directory ({})\"\n )\n raise InstallationError(message.format(filename, path, location))\n if member.isdir():\n ensure_dir(path)\n elif member.issym():\n try:\n # https://github.com/python/typeshed/issues/2673\n tar._extract_member(member, path) # type: ignore\n except Exception as exc:\n # Some corrupt tar files seem to produce this\n # (specifically bad symlinks)\n logger.warning(\n \"In the tar file %s the member %s is invalid: %s\",\n filename,\n member.name,\n exc,\n )\n continue\n else:\n try:\n fp = tar.extractfile(member)\n except (KeyError, AttributeError) as exc:\n # Some corrupt tar files seem to produce this\n # (specifically bad symlinks)\n logger.warning(\n \"In the tar file %s the member %s is invalid: %s\",\n filename,\n member.name,\n exc,\n )\n continue\n ensure_dir(os.path.dirname(path))\n assert fp is not None\n with open(path, \"wb\") as destfp:\n shutil.copyfileobj(fp, destfp)\n fp.close()\n # Update the timestamp (useful for cython compiled files)\n tar.utime(member, path)\n # member have any execute permissions for user/group/world?\n if member.mode & 0o111:\n set_extracted_file_to_default_mode_plus_executable(path)\n finally:\n tar.close()\n\n\ndef unpack_file(\n filename, # type: str\n location, # type: str\n content_type=None, # type: Optional[str]\n):\n # type: (...) 
-> None\n filename = os.path.realpath(filename)\n if (\n content_type == \"application/zip\"\n or filename.lower().endswith(ZIP_EXTENSIONS)\n or zipfile.is_zipfile(filename)\n ):\n unzip_file(filename, location, flatten=not filename.endswith(\".whl\"))\n elif (\n content_type == \"application/x-gzip\"\n or tarfile.is_tarfile(filename)\n or filename.lower().endswith(TAR_EXTENSIONS + BZ2_EXTENSIONS + XZ_EXTENSIONS)\n ):\n untar_file(filename, location)\n else:\n # FIXME: handle?\n # FIXME: magic signatures?\n logger.critical(\n \"Cannot unpack file %s (downloaded from %s, content-type: %s); \"\n \"cannot detect archive format\",\n filename,\n location,\n content_type,\n )\n raise InstallationError(f\"Cannot determine archive format of {location}\")\n", "path": "src/pip/_internal/utils/unpacking.py"}], "after_files": [{"content": "\"\"\"Utilities related archives.\n\"\"\"\n\nimport logging\nimport os\nimport shutil\nimport stat\nimport tarfile\nimport zipfile\nfrom typing import Iterable, List, Optional\nfrom zipfile import ZipInfo\n\nfrom pip._internal.exceptions import InstallationError\nfrom pip._internal.utils.filetypes import (\n BZ2_EXTENSIONS,\n TAR_EXTENSIONS,\n XZ_EXTENSIONS,\n ZIP_EXTENSIONS,\n)\nfrom pip._internal.utils.misc import ensure_dir\n\nlogger = logging.getLogger(__name__)\n\n\nSUPPORTED_EXTENSIONS = ZIP_EXTENSIONS + TAR_EXTENSIONS\n\ntry:\n import bz2 # noqa\n\n SUPPORTED_EXTENSIONS += BZ2_EXTENSIONS\nexcept ImportError:\n logger.debug(\"bz2 module is not available\")\n\ntry:\n # Only for Python 3.3+\n import lzma # noqa\n\n SUPPORTED_EXTENSIONS += XZ_EXTENSIONS\nexcept ImportError:\n logger.debug(\"lzma module is not available\")\n\n\ndef current_umask():\n # type: () -> int\n \"\"\"Get the current umask which involves having to set it temporarily.\"\"\"\n mask = os.umask(0)\n os.umask(mask)\n return mask\n\n\ndef split_leading_dir(path):\n # type: (str) -> List[str]\n path = path.lstrip(\"/\").lstrip(\"\\\\\")\n if \"/\" in path and (\n (\"\\\\\" in path and path.find(\"/\") < path.find(\"\\\\\")) or \"\\\\\" not in path\n ):\n return path.split(\"/\", 1)\n elif \"\\\\\" in path:\n return path.split(\"\\\\\", 1)\n else:\n return [path, \"\"]\n\n\ndef has_leading_dir(paths):\n # type: (Iterable[str]) -> bool\n \"\"\"Returns true if all the paths have the same leading path name\n (i.e., everything is in one subdirectory in an archive)\"\"\"\n common_prefix = None\n for path in paths:\n prefix, rest = split_leading_dir(path)\n if not prefix:\n return False\n elif common_prefix is None:\n common_prefix = prefix\n elif prefix != common_prefix:\n return False\n return True\n\n\ndef is_within_directory(directory, target):\n # type: (str, str) -> bool\n \"\"\"\n Return true if the absolute path of target is within the directory\n \"\"\"\n abs_directory = os.path.abspath(directory)\n abs_target = os.path.abspath(target)\n\n prefix = os.path.commonprefix([abs_directory, abs_target])\n return prefix == abs_directory\n\n\ndef set_extracted_file_to_default_mode_plus_executable(path):\n # type: (str) -> None\n \"\"\"\n Make file present at path have execute for user/group/world\n (chmod +x) is no-op on windows per python docs\n \"\"\"\n os.chmod(path, (0o777 & ~current_umask() | 0o111))\n\n\ndef zip_item_is_executable(info):\n # type: (ZipInfo) -> bool\n mode = info.external_attr >> 16\n # if mode and regular file and any execute permissions for\n # user/group/world?\n return bool(mode and stat.S_ISREG(mode) and mode & 0o111)\n\n\ndef unzip_file(filename, location, flatten=True):\n # 
type: (str, str, bool) -> None\n \"\"\"\n Unzip the file (with path `filename`) to the destination `location`. All\n files are written based on system defaults and umask (i.e. permissions are\n not preserved), except that regular file members with any execute\n permissions (user, group, or world) have \"chmod +x\" applied after being\n written. Note that for windows, any execute changes using os.chmod are\n no-ops per the python docs.\n \"\"\"\n ensure_dir(location)\n zipfp = open(filename, \"rb\")\n try:\n zip = zipfile.ZipFile(zipfp, allowZip64=True)\n leading = has_leading_dir(zip.namelist()) and flatten\n for info in zip.infolist():\n name = info.filename\n fn = name\n if leading:\n fn = split_leading_dir(name)[1]\n fn = os.path.join(location, fn)\n dir = os.path.dirname(fn)\n if not is_within_directory(location, fn):\n message = (\n \"The zip file ({}) has a file ({}) trying to install \"\n \"outside target directory ({})\"\n )\n raise InstallationError(message.format(filename, fn, location))\n if fn.endswith(\"/\") or fn.endswith(\"\\\\\"):\n # A directory\n ensure_dir(fn)\n else:\n ensure_dir(dir)\n # Don't use read() to avoid allocating an arbitrarily large\n # chunk of memory for the file's content\n fp = zip.open(name)\n try:\n with open(fn, \"wb\") as destfp:\n shutil.copyfileobj(fp, destfp)\n finally:\n fp.close()\n if zip_item_is_executable(info):\n set_extracted_file_to_default_mode_plus_executable(fn)\n finally:\n zipfp.close()\n\n\ndef untar_file(filename, location):\n # type: (str, str) -> None\n \"\"\"\n Untar the file (with path `filename`) to the destination `location`.\n All files are written based on system defaults and umask (i.e. permissions\n are not preserved), except that regular file members with any execute\n permissions (user, group, or world) have \"chmod +x\" applied after being\n written. 
Note that for windows, any execute changes using os.chmod are\n no-ops per the python docs.\n \"\"\"\n ensure_dir(location)\n if filename.lower().endswith(\".gz\") or filename.lower().endswith(\".tgz\"):\n mode = \"r:gz\"\n elif filename.lower().endswith(BZ2_EXTENSIONS):\n mode = \"r:bz2\"\n elif filename.lower().endswith(XZ_EXTENSIONS):\n mode = \"r:xz\"\n elif filename.lower().endswith(\".tar\"):\n mode = \"r\"\n else:\n logger.warning(\n \"Cannot determine compression type for file %s\",\n filename,\n )\n mode = \"r:*\"\n tar = tarfile.open(filename, mode, encoding=\"utf-8\")\n try:\n leading = has_leading_dir([member.name for member in tar.getmembers()])\n for member in tar.getmembers():\n fn = member.name\n if leading:\n fn = split_leading_dir(fn)[1]\n path = os.path.join(location, fn)\n if not is_within_directory(location, path):\n message = (\n \"The tar file ({}) has a file ({}) trying to install \"\n \"outside target directory ({})\"\n )\n raise InstallationError(message.format(filename, path, location))\n if member.isdir():\n ensure_dir(path)\n elif member.issym():\n try:\n # https://github.com/python/typeshed/issues/2673\n tar._extract_member(member, path) # type: ignore\n except Exception as exc:\n # Some corrupt tar files seem to produce this\n # (specifically bad symlinks)\n logger.warning(\n \"In the tar file %s the member %s is invalid: %s\",\n filename,\n member.name,\n exc,\n )\n continue\n else:\n try:\n fp = tar.extractfile(member)\n except (KeyError, AttributeError) as exc:\n # Some corrupt tar files seem to produce this\n # (specifically bad symlinks)\n logger.warning(\n \"In the tar file %s the member %s is invalid: %s\",\n filename,\n member.name,\n exc,\n )\n continue\n ensure_dir(os.path.dirname(path))\n assert fp is not None\n with open(path, \"wb\") as destfp:\n shutil.copyfileobj(fp, destfp)\n fp.close()\n # Update the timestamp (useful for cython compiled files)\n tar.utime(member, path)\n # member have any execute permissions for user/group/world?\n if member.mode & 0o111:\n set_extracted_file_to_default_mode_plus_executable(path)\n finally:\n tar.close()\n\n\ndef unpack_file(\n filename, # type: str\n location, # type: str\n content_type=None, # type: Optional[str]\n):\n # type: (...) -> None\n filename = os.path.realpath(filename)\n if (\n content_type == \"application/zip\"\n or filename.lower().endswith(ZIP_EXTENSIONS)\n or zipfile.is_zipfile(filename)\n ):\n unzip_file(filename, location, flatten=not filename.endswith(\".whl\"))\n elif (\n content_type == \"application/x-gzip\"\n or tarfile.is_tarfile(filename)\n or filename.lower().endswith(TAR_EXTENSIONS + BZ2_EXTENSIONS + XZ_EXTENSIONS)\n ):\n untar_file(filename, location)\n else:\n # FIXME: handle?\n # FIXME: magic signatures?\n logger.critical(\n \"Cannot unpack file %s (downloaded from %s, content-type: %s); \"\n \"cannot detect archive format\",\n filename,\n location,\n content_type,\n )\n raise InstallationError(f\"Cannot determine archive format of {location}\")\n", "path": "src/pip/_internal/utils/unpacking.py"}]} | 3,493 | 130 |
gh_patches_debug_13 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-1779 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ebola Page>Map: disable scroll wheel zoom
CJ - The specific property is here: https://github.com/OCHA-DAP/hdx-design/blob/gh-pages/js/country.js
line 111: map.scrollWheelZoom.disable();
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ckanext-hdx_theme/ckanext/hdx_theme/version.py`
Content:
```
1 hdx_version = 'v0.5.1'
2
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version = 'v0.5.1'
+hdx_version = 'v0.5.2'
| {"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version = 'v0.5.1'\n+hdx_version = 'v0.5.2'\n", "issue": "Ebola Page>Map: disable scroll wheel zoom\nCJ - The specific property is here: https://github.com/OCHA-DAP/hdx-design/blob/gh-pages/js/country.js\n\nline 111: map.scrollWheelZoom.disable();\n\n", "before_files": [{"content": "hdx_version = 'v0.5.1'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}], "after_files": [{"content": "hdx_version = 'v0.5.2'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]} | 335 | 106 |
gh_patches_debug_32295 | rasdani/github-patches | git_diff | deepset-ai__haystack-6301 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Load additional fields from SQUAD-format file to meta field for labels
**Is your feature request related to a problem? Please describe.**
Currently `eval_data_from_json` loads additional fields into the meta field for Documents, but it doesn't do the same for Labels.
**Describe the solution you'd like**
`eval_data_from_json` should load additional fields into the `Label.meta` dict, analogously to how it's done for Documents.
**Describe alternatives you've considered**
Currently I would have to rewrite the whole JSON data-loading code to achieve this.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `haystack/document_stores/utils.py`
Content:
```
1 import typing
2 from typing import Dict, List, Optional, Tuple, Union, Generator
3
4 import json
5 import logging
6 from datetime import datetime
7
8 from haystack.schema import Document, Label, Answer, Span
9 from haystack.nodes.preprocessor import PreProcessor
10
11 if typing.TYPE_CHECKING:
12 # This results in a circular import if we don't use typing.TYPE_CHECKING
13 from haystack.document_stores.base import BaseDocumentStore
14
15
16 logger = logging.getLogger(__name__)
17
18
19 def eval_data_from_json(
20 filename: str,
21 max_docs: Optional[Union[int, bool]] = None,
22 preprocessor: Optional[PreProcessor] = None,
23 open_domain: bool = False,
24 ) -> Tuple[List[Document], List[Label]]:
25 """
26 Read Documents + Labels from a SQuAD-style file.
27 Document and Labels can then be indexed to the DocumentStore and be used for evaluation.
28
29 :param filename: Path to file in SQuAD format
30 :param max_docs: This sets the number of documents that will be loaded. By default, this is set to None, thus reading in all available eval documents.
31 :param open_domain: Set this to True if your file is an open domain dataset where two different answers to the same question might be found in different contexts.
32 """
33 docs: List[Document] = []
34 labels = []
35 problematic_ids = []
36
37 with open(filename, "r", encoding="utf-8") as file:
38 data = json.load(file)
39 if "title" not in data["data"][0]:
40 logger.warning("No title information found for documents in QA file: %s", filename)
41
42 for squad_document in data["data"]:
43 if max_docs and len(docs) > max_docs:
44 break
45 # Extracting paragraphs and their labels from a SQuAD document dict
46 cur_docs, cur_labels, cur_problematic_ids = _extract_docs_and_labels_from_dict(
47 squad_document, preprocessor, open_domain
48 )
49 docs.extend(cur_docs)
50 labels.extend(cur_labels)
51 problematic_ids.extend(cur_problematic_ids)
52 if len(problematic_ids) > 0:
53 logger.warning(
54 "Could not convert an answer for %s questions.\nThere were conversion errors for question ids: %s",
55 len(problematic_ids),
56 problematic_ids,
57 )
58 return docs, labels
59
60
61 def eval_data_from_jsonl(
62 filename: str,
63 batch_size: Optional[int] = None,
64 max_docs: Optional[Union[int, bool]] = None,
65 preprocessor: Optional[PreProcessor] = None,
66 open_domain: bool = False,
67 ) -> Generator[Tuple[List[Document], List[Label]], None, None]:
68 """
69 Read Documents + Labels from a SQuAD-style file in jsonl format, i.e. one document per line.
70 Document and Labels can then be indexed to the DocumentStore and be used for evaluation.
71
72 This is a generator which will yield one tuple per iteration containing a list
73 of batch_size documents and a list with the documents' labels.
74 If batch_size is set to None, this method will yield all documents and labels.
75
76 :param filename: Path to file in SQuAD format
77 :param max_docs: This sets the number of documents that will be loaded. By default, this is set to None, thus reading in all available eval documents.
78 :param open_domain: Set this to True if your file is an open domain dataset where two different answers to the same question might be found in different contexts.
79 """
80 docs: List[Document] = []
81 labels = []
82 problematic_ids = []
83
84 with open(filename, "r", encoding="utf-8") as file:
85 for document in file:
86 if max_docs and len(docs) > max_docs:
87 break
88 # Extracting paragraphs and their labels from a SQuAD document dict
89 squad_document = json.loads(document)
90 cur_docs, cur_labels, cur_problematic_ids = _extract_docs_and_labels_from_dict(
91 squad_document, preprocessor, open_domain
92 )
93 docs.extend(cur_docs)
94 labels.extend(cur_labels)
95 problematic_ids.extend(cur_problematic_ids)
96
97 if batch_size is not None and len(docs) >= batch_size:
98 if len(problematic_ids) > 0:
99 logger.warning(
100 "Could not convert an answer for %s questions.\n"
101 "There were conversion errors for question ids: %s",
102 len(problematic_ids),
103 problematic_ids,
104 )
105 yield docs, labels
106 docs = []
107 labels = []
108 problematic_ids = []
109
110 yield docs, labels
111
112
113 def squad_json_to_jsonl(squad_file: str, output_file: str):
114 """
115 Converts a SQuAD-json-file into jsonl format with one document per line.
116
117 :param squad_file: SQuAD-file in json format.
118 :param output_file: Name of output file (SQuAD in jsonl format)
119 """
120 with open(squad_file, encoding="utf-8") as json_file, open(output_file, "w", encoding="utf-8") as jsonl_file:
121 squad_json = json.load(json_file)
122
123 for doc in squad_json["data"]:
124 json.dump(doc, jsonl_file)
125 jsonl_file.write("\n")
126
127
128 def _extract_docs_and_labels_from_dict(
129 document_dict: Dict, preprocessor: Optional[PreProcessor] = None, open_domain: bool = False
130 ):
131 """
132 Set open_domain to True if you are trying to load open_domain labels (i.e. labels without doc id or start idx)
133 """
134 docs = []
135 labels = []
136 problematic_ids = []
137
138 # get all extra fields from document level (e.g. title)
139 meta_doc = {k: v for k, v in document_dict.items() if k not in ("paragraphs", "title")}
140 for paragraph in document_dict["paragraphs"]:
141 ## Create Metadata
142 cur_meta = {"name": document_dict.get("title", None)}
143 # all other fields from paragraph level
144 meta_paragraph = {k: v for k, v in paragraph.items() if k not in ("qas", "context")}
145 cur_meta.update(meta_paragraph)
146 # meta from parent document
147 cur_meta.update(meta_doc)
148
149 ## Create Document
150 cur_full_doc = Document(content=paragraph["context"], meta=cur_meta)
151 if preprocessor is not None:
152 splits_docs = preprocessor.process(documents=[cur_full_doc])
153 # we need to pull in _split_id into the document id for unique reference in labels
154 splits: List[Document] = []
155 offset = 0
156 for d in splits_docs:
157 id = f"{d.id}-{d.meta['_split_id']}"
158 d.meta["_split_offset"] = offset
159 offset += len(d.content)
160 # offset correction based on splitting method
161 if preprocessor.split_by == "word":
162 offset += 1
163 elif preprocessor.split_by == "passage":
164 offset += 2
165 else:
166 raise NotImplementedError
167 mydoc = Document(content=d.content, id=id, meta=d.meta)
168 splits.append(mydoc)
169 else:
170 splits = [cur_full_doc]
171 docs.extend(splits)
172
173 ## Assign Labels to corresponding documents
174 for qa in paragraph["qas"]:
175 if not qa.get("is_impossible", False):
176 for answer in qa["answers"]:
177 ans = answer["text"]
178 # TODO The following block of code means that answer_start is never calculated
179 # and cur_id is always None for open_domain
180 # This can be rewritten so that this function could try to calculate offsets
181 # and populate id in open_domain mode
182 if open_domain:
183 # TODO check with Branden why we want to treat open_domain here differently.
184 # Shouldn't this be something configured at eval time only?
185 cur_ans_start = answer.get("answer_start", 0)
186 # cur_id = '0'
187 label = Label(
188 query=qa["question"],
189 answer=Answer(answer=ans, type="extractive", score=0.0),
190 document=None, # type: ignore
191 is_correct_answer=True,
192 is_correct_document=True,
193 origin="gold-label",
194 )
195 labels.append(label)
196 else:
197 ans_position = cur_full_doc.content[
198 answer["answer_start"] : answer["answer_start"] + len(str(ans))
199 ]
200 if ans != ans_position:
201 # do not use answer
202 problematic_ids.append(qa.get("id", "missing"))
203 break
204 # find corresponding document or split
205 if len(splits) == 1:
206 # cur_id = splits[0].id
207 cur_ans_start = answer["answer_start"]
208 cur_doc = splits[0]
209 else:
210 for s in splits:
211 # If answer start offset is contained in passage we assign the label to that passage
212 if (answer["answer_start"] >= s.meta["_split_offset"]) and (
213 answer["answer_start"] < (s.meta["_split_offset"] + len(s.content))
214 ):
215 cur_doc = s
216 cur_ans_start = answer["answer_start"] - s.meta["_split_offset"]
217 # If a document is splitting an answer we add the whole answer text to the document
218 if s.content[cur_ans_start : cur_ans_start + len(ans)] != ans:
219 s.content = s.content[:cur_ans_start] + ans
220 break
221 cur_answer = Answer(
222 answer=ans,
223 type="extractive",
224 score=0.0,
225 context=cur_doc.content,
226 offsets_in_document=[Span(start=cur_ans_start, end=cur_ans_start + len(ans))],
227 offsets_in_context=[Span(start=cur_ans_start, end=cur_ans_start + len(ans))],
228 document_ids=[cur_doc.id],
229 )
230 label = Label(
231 query=qa["question"],
232 answer=cur_answer,
233 document=cur_doc,
234 is_correct_answer=True,
235 is_correct_document=True,
236 origin="gold-label",
237 )
238 labels.append(label)
239 else:
240 # for no_answer we need to assign each split as not fitting to the question
241 for s in splits:
242 label = Label(
243 query=qa["question"],
244 answer=Answer(
245 answer="",
246 type="extractive",
247 score=0.0,
248 offsets_in_document=[Span(start=0, end=0)],
249 offsets_in_context=[Span(start=0, end=0)],
250 ),
251 document=s,
252 is_correct_answer=True,
253 is_correct_document=True,
254 origin="gold-label",
255 )
256
257 labels.append(label)
258
259 return docs, labels, problematic_ids
260
261
262 def convert_date_to_rfc3339(date: str) -> str:
263 """
264 Converts a date to RFC3339 format, as Weaviate requires dates to be in RFC3339 format including the time and
265 timezone.
266
267 If the provided date string does not contain a time and/or timezone, we use 00:00 as default time
268 and UTC as default time zone.
269
270 This method cannot be part of WeaviateDocumentStore, as this would result in a circular import between weaviate.py
271 and filter_utils.py.
272 """
273 parsed_datetime = datetime.fromisoformat(date)
274 if parsed_datetime.utcoffset() is None:
275 converted_date = parsed_datetime.isoformat() + "Z"
276 else:
277 converted_date = parsed_datetime.isoformat()
278
279 return converted_date
280
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/haystack/document_stores/utils.py b/haystack/document_stores/utils.py
--- a/haystack/document_stores/utils.py
+++ b/haystack/document_stores/utils.py
@@ -172,6 +172,9 @@
## Assign Labels to corresponding documents
for qa in paragraph["qas"]:
+ meta_qa = {
+ k: v for k, v in qa.items() if k not in ("is_impossible", "answers", "question", "id", "missing")
+ }
if not qa.get("is_impossible", False):
for answer in qa["answers"]:
ans = answer["text"]
@@ -191,6 +194,7 @@
is_correct_answer=True,
is_correct_document=True,
origin="gold-label",
+ meta=meta_qa,
)
labels.append(label)
else:
@@ -234,6 +238,7 @@
is_correct_answer=True,
is_correct_document=True,
origin="gold-label",
+ meta=meta_qa,
)
labels.append(label)
else:
@@ -252,6 +257,7 @@
is_correct_answer=True,
is_correct_document=True,
origin="gold-label",
+ meta=meta_qa,
)
labels.append(label)
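A small sketch of what the added `meta_qa` comprehension does, using a made-up SQuAD-style question entry (`difficulty` here is a hypothetical extra field, not part of the standard format):

```python
# Made-up SQuAD-style "qas" entry carrying one non-standard field.
qa = {
    "question": "Who wrote it?",
    "id": "q-0001",
    "is_impossible": False,
    "answers": [{"text": "Jane", "answer_start": 0}],
    "difficulty": "easy",  # extra field that should end up in Label.meta
}

# Same filtering the patch introduces: drop the standard SQuAD keys and keep
# everything else so it can be attached to the Label as meta.
meta_qa = {
    k: v
    for k, v in qa.items()
    if k not in ("is_impossible", "answers", "question", "id", "missing")
}

print(meta_qa)  # -> {'difficulty': 'easy'}
```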
| {"golden_diff": "diff --git a/haystack/document_stores/utils.py b/haystack/document_stores/utils.py\n--- a/haystack/document_stores/utils.py\n+++ b/haystack/document_stores/utils.py\n@@ -172,6 +172,9 @@\n \n ## Assign Labels to corresponding documents\n for qa in paragraph[\"qas\"]:\n+ meta_qa = {\n+ k: v for k, v in qa.items() if k not in (\"is_impossible\", \"answers\", \"question\", \"id\", \"missing\")\n+ }\n if not qa.get(\"is_impossible\", False):\n for answer in qa[\"answers\"]:\n ans = answer[\"text\"]\n@@ -191,6 +194,7 @@\n is_correct_answer=True,\n is_correct_document=True,\n origin=\"gold-label\",\n+ meta=meta_qa,\n )\n labels.append(label)\n else:\n@@ -234,6 +238,7 @@\n is_correct_answer=True,\n is_correct_document=True,\n origin=\"gold-label\",\n+ meta=meta_qa,\n )\n labels.append(label)\n else:\n@@ -252,6 +257,7 @@\n is_correct_answer=True,\n is_correct_document=True,\n origin=\"gold-label\",\n+ meta=meta_qa,\n )\n \n labels.append(label)\n", "issue": "Load additional fields from SQUAD-format file to meta field for labels\n**Is your feature request related to a problem? Please describe.**\r\nCurrently `eval_data_from_json` loads additional fields for Documents as meta field but it doesn't do it for Labels.\r\n\r\n**Describe the solution you'd like**\r\n`eval_data_from_json` loads additional fields to `Label.meta` dict analogically to how it's done for Documents.\r\n\r\n**Describe alternatives you've considered**\r\nCurrently I'd would have to rewrite whole loading data from json code to achieve this.\r\n\n", "before_files": [{"content": "import typing\nfrom typing import Dict, List, Optional, Tuple, Union, Generator\n\nimport json\nimport logging\nfrom datetime import datetime\n\nfrom haystack.schema import Document, Label, Answer, Span\nfrom haystack.nodes.preprocessor import PreProcessor\n\nif typing.TYPE_CHECKING:\n # This results in a circular import if we don't use typing.TYPE_CHECKING\n from haystack.document_stores.base import BaseDocumentStore\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef eval_data_from_json(\n filename: str,\n max_docs: Optional[Union[int, bool]] = None,\n preprocessor: Optional[PreProcessor] = None,\n open_domain: bool = False,\n) -> Tuple[List[Document], List[Label]]:\n \"\"\"\n Read Documents + Labels from a SQuAD-style file.\n Document and Labels can then be indexed to the DocumentStore and be used for evaluation.\n\n :param filename: Path to file in SQuAD format\n :param max_docs: This sets the number of documents that will be loaded. 
By default, this is set to None, thus reading in all available eval documents.\n :param open_domain: Set this to True if your file is an open domain dataset where two different answers to the same question might be found in different contexts.\n \"\"\"\n docs: List[Document] = []\n labels = []\n problematic_ids = []\n\n with open(filename, \"r\", encoding=\"utf-8\") as file:\n data = json.load(file)\n if \"title\" not in data[\"data\"][0]:\n logger.warning(\"No title information found for documents in QA file: %s\", filename)\n\n for squad_document in data[\"data\"]:\n if max_docs and len(docs) > max_docs:\n break\n # Extracting paragraphs and their labels from a SQuAD document dict\n cur_docs, cur_labels, cur_problematic_ids = _extract_docs_and_labels_from_dict(\n squad_document, preprocessor, open_domain\n )\n docs.extend(cur_docs)\n labels.extend(cur_labels)\n problematic_ids.extend(cur_problematic_ids)\n if len(problematic_ids) > 0:\n logger.warning(\n \"Could not convert an answer for %s questions.\\nThere were conversion errors for question ids: %s\",\n len(problematic_ids),\n problematic_ids,\n )\n return docs, labels\n\n\ndef eval_data_from_jsonl(\n filename: str,\n batch_size: Optional[int] = None,\n max_docs: Optional[Union[int, bool]] = None,\n preprocessor: Optional[PreProcessor] = None,\n open_domain: bool = False,\n) -> Generator[Tuple[List[Document], List[Label]], None, None]:\n \"\"\"\n Read Documents + Labels from a SQuAD-style file in jsonl format, i.e. one document per line.\n Document and Labels can then be indexed to the DocumentStore and be used for evaluation.\n\n This is a generator which will yield one tuple per iteration containing a list\n of batch_size documents and a list with the documents' labels.\n If batch_size is set to None, this method will yield all documents and labels.\n\n :param filename: Path to file in SQuAD format\n :param max_docs: This sets the number of documents that will be loaded. 
By default, this is set to None, thus reading in all available eval documents.\n :param open_domain: Set this to True if your file is an open domain dataset where two different answers to the same question might be found in different contexts.\n \"\"\"\n docs: List[Document] = []\n labels = []\n problematic_ids = []\n\n with open(filename, \"r\", encoding=\"utf-8\") as file:\n for document in file:\n if max_docs and len(docs) > max_docs:\n break\n # Extracting paragraphs and their labels from a SQuAD document dict\n squad_document = json.loads(document)\n cur_docs, cur_labels, cur_problematic_ids = _extract_docs_and_labels_from_dict(\n squad_document, preprocessor, open_domain\n )\n docs.extend(cur_docs)\n labels.extend(cur_labels)\n problematic_ids.extend(cur_problematic_ids)\n\n if batch_size is not None and len(docs) >= batch_size:\n if len(problematic_ids) > 0:\n logger.warning(\n \"Could not convert an answer for %s questions.\\n\"\n \"There were conversion errors for question ids: %s\",\n len(problematic_ids),\n problematic_ids,\n )\n yield docs, labels\n docs = []\n labels = []\n problematic_ids = []\n\n yield docs, labels\n\n\ndef squad_json_to_jsonl(squad_file: str, output_file: str):\n \"\"\"\n Converts a SQuAD-json-file into jsonl format with one document per line.\n\n :param squad_file: SQuAD-file in json format.\n :param output_file: Name of output file (SQuAD in jsonl format)\n \"\"\"\n with open(squad_file, encoding=\"utf-8\") as json_file, open(output_file, \"w\", encoding=\"utf-8\") as jsonl_file:\n squad_json = json.load(json_file)\n\n for doc in squad_json[\"data\"]:\n json.dump(doc, jsonl_file)\n jsonl_file.write(\"\\n\")\n\n\ndef _extract_docs_and_labels_from_dict(\n document_dict: Dict, preprocessor: Optional[PreProcessor] = None, open_domain: bool = False\n):\n \"\"\"\n Set open_domain to True if you are trying to load open_domain labels (i.e. labels without doc id or start idx)\n \"\"\"\n docs = []\n labels = []\n problematic_ids = []\n\n # get all extra fields from document level (e.g. 
title)\n meta_doc = {k: v for k, v in document_dict.items() if k not in (\"paragraphs\", \"title\")}\n for paragraph in document_dict[\"paragraphs\"]:\n ## Create Metadata\n cur_meta = {\"name\": document_dict.get(\"title\", None)}\n # all other fields from paragraph level\n meta_paragraph = {k: v for k, v in paragraph.items() if k not in (\"qas\", \"context\")}\n cur_meta.update(meta_paragraph)\n # meta from parent document\n cur_meta.update(meta_doc)\n\n ## Create Document\n cur_full_doc = Document(content=paragraph[\"context\"], meta=cur_meta)\n if preprocessor is not None:\n splits_docs = preprocessor.process(documents=[cur_full_doc])\n # we need to pull in _split_id into the document id for unique reference in labels\n splits: List[Document] = []\n offset = 0\n for d in splits_docs:\n id = f\"{d.id}-{d.meta['_split_id']}\"\n d.meta[\"_split_offset\"] = offset\n offset += len(d.content)\n # offset correction based on splitting method\n if preprocessor.split_by == \"word\":\n offset += 1\n elif preprocessor.split_by == \"passage\":\n offset += 2\n else:\n raise NotImplementedError\n mydoc = Document(content=d.content, id=id, meta=d.meta)\n splits.append(mydoc)\n else:\n splits = [cur_full_doc]\n docs.extend(splits)\n\n ## Assign Labels to corresponding documents\n for qa in paragraph[\"qas\"]:\n if not qa.get(\"is_impossible\", False):\n for answer in qa[\"answers\"]:\n ans = answer[\"text\"]\n # TODO The following block of code means that answer_start is never calculated\n # and cur_id is always None for open_domain\n # This can be rewritten so that this function could try to calculate offsets\n # and populate id in open_domain mode\n if open_domain:\n # TODO check with Branden why we want to treat open_domain here differently.\n # Shouldn't this be something configured at eval time only?\n cur_ans_start = answer.get(\"answer_start\", 0)\n # cur_id = '0'\n label = Label(\n query=qa[\"question\"],\n answer=Answer(answer=ans, type=\"extractive\", score=0.0),\n document=None, # type: ignore\n is_correct_answer=True,\n is_correct_document=True,\n origin=\"gold-label\",\n )\n labels.append(label)\n else:\n ans_position = cur_full_doc.content[\n answer[\"answer_start\"] : answer[\"answer_start\"] + len(str(ans))\n ]\n if ans != ans_position:\n # do not use answer\n problematic_ids.append(qa.get(\"id\", \"missing\"))\n break\n # find corresponding document or split\n if len(splits) == 1:\n # cur_id = splits[0].id\n cur_ans_start = answer[\"answer_start\"]\n cur_doc = splits[0]\n else:\n for s in splits:\n # If answer start offset is contained in passage we assign the label to that passage\n if (answer[\"answer_start\"] >= s.meta[\"_split_offset\"]) and (\n answer[\"answer_start\"] < (s.meta[\"_split_offset\"] + len(s.content))\n ):\n cur_doc = s\n cur_ans_start = answer[\"answer_start\"] - s.meta[\"_split_offset\"]\n # If a document is splitting an answer we add the whole answer text to the document\n if s.content[cur_ans_start : cur_ans_start + len(ans)] != ans:\n s.content = s.content[:cur_ans_start] + ans\n break\n cur_answer = Answer(\n answer=ans,\n type=\"extractive\",\n score=0.0,\n context=cur_doc.content,\n offsets_in_document=[Span(start=cur_ans_start, end=cur_ans_start + len(ans))],\n offsets_in_context=[Span(start=cur_ans_start, end=cur_ans_start + len(ans))],\n document_ids=[cur_doc.id],\n )\n label = Label(\n query=qa[\"question\"],\n answer=cur_answer,\n document=cur_doc,\n is_correct_answer=True,\n is_correct_document=True,\n origin=\"gold-label\",\n )\n labels.append(label)\n 
else:\n # for no_answer we need to assign each split as not fitting to the question\n for s in splits:\n label = Label(\n query=qa[\"question\"],\n answer=Answer(\n answer=\"\",\n type=\"extractive\",\n score=0.0,\n offsets_in_document=[Span(start=0, end=0)],\n offsets_in_context=[Span(start=0, end=0)],\n ),\n document=s,\n is_correct_answer=True,\n is_correct_document=True,\n origin=\"gold-label\",\n )\n\n labels.append(label)\n\n return docs, labels, problematic_ids\n\n\ndef convert_date_to_rfc3339(date: str) -> str:\n \"\"\"\n Converts a date to RFC3339 format, as Weaviate requires dates to be in RFC3339 format including the time and\n timezone.\n\n If the provided date string does not contain a time and/or timezone, we use 00:00 as default time\n and UTC as default time zone.\n\n This method cannot be part of WeaviateDocumentStore, as this would result in a circular import between weaviate.py\n and filter_utils.py.\n \"\"\"\n parsed_datetime = datetime.fromisoformat(date)\n if parsed_datetime.utcoffset() is None:\n converted_date = parsed_datetime.isoformat() + \"Z\"\n else:\n converted_date = parsed_datetime.isoformat()\n\n return converted_date\n", "path": "haystack/document_stores/utils.py"}], "after_files": [{"content": "import typing\nfrom typing import Dict, List, Optional, Tuple, Union, Generator\n\nimport json\nimport logging\nfrom datetime import datetime\n\nfrom haystack.schema import Document, Label, Answer, Span\nfrom haystack.nodes.preprocessor import PreProcessor\n\nif typing.TYPE_CHECKING:\n # This results in a circular import if we don't use typing.TYPE_CHECKING\n from haystack.document_stores.base import BaseDocumentStore\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef eval_data_from_json(\n filename: str,\n max_docs: Optional[Union[int, bool]] = None,\n preprocessor: Optional[PreProcessor] = None,\n open_domain: bool = False,\n) -> Tuple[List[Document], List[Label]]:\n \"\"\"\n Read Documents + Labels from a SQuAD-style file.\n Document and Labels can then be indexed to the DocumentStore and be used for evaluation.\n\n :param filename: Path to file in SQuAD format\n :param max_docs: This sets the number of documents that will be loaded. 
By default, this is set to None, thus reading in all available eval documents.\n :param open_domain: Set this to True if your file is an open domain dataset where two different answers to the same question might be found in different contexts.\n \"\"\"\n docs: List[Document] = []\n labels = []\n problematic_ids = []\n\n with open(filename, \"r\", encoding=\"utf-8\") as file:\n data = json.load(file)\n if \"title\" not in data[\"data\"][0]:\n logger.warning(\"No title information found for documents in QA file: %s\", filename)\n\n for squad_document in data[\"data\"]:\n if max_docs and len(docs) > max_docs:\n break\n # Extracting paragraphs and their labels from a SQuAD document dict\n cur_docs, cur_labels, cur_problematic_ids = _extract_docs_and_labels_from_dict(\n squad_document, preprocessor, open_domain\n )\n docs.extend(cur_docs)\n labels.extend(cur_labels)\n problematic_ids.extend(cur_problematic_ids)\n if len(problematic_ids) > 0:\n logger.warning(\n \"Could not convert an answer for %s questions.\\nThere were conversion errors for question ids: %s\",\n len(problematic_ids),\n problematic_ids,\n )\n return docs, labels\n\n\ndef eval_data_from_jsonl(\n filename: str,\n batch_size: Optional[int] = None,\n max_docs: Optional[Union[int, bool]] = None,\n preprocessor: Optional[PreProcessor] = None,\n open_domain: bool = False,\n) -> Generator[Tuple[List[Document], List[Label]], None, None]:\n \"\"\"\n Read Documents + Labels from a SQuAD-style file in jsonl format, i.e. one document per line.\n Document and Labels can then be indexed to the DocumentStore and be used for evaluation.\n\n This is a generator which will yield one tuple per iteration containing a list\n of batch_size documents and a list with the documents' labels.\n If batch_size is set to None, this method will yield all documents and labels.\n\n :param filename: Path to file in SQuAD format\n :param max_docs: This sets the number of documents that will be loaded. 
By default, this is set to None, thus reading in all available eval documents.\n :param open_domain: Set this to True if your file is an open domain dataset where two different answers to the same question might be found in different contexts.\n \"\"\"\n docs: List[Document] = []\n labels = []\n problematic_ids = []\n\n with open(filename, \"r\", encoding=\"utf-8\") as file:\n for document in file:\n if max_docs and len(docs) > max_docs:\n break\n # Extracting paragraphs and their labels from a SQuAD document dict\n squad_document = json.loads(document)\n cur_docs, cur_labels, cur_problematic_ids = _extract_docs_and_labels_from_dict(\n squad_document, preprocessor, open_domain\n )\n docs.extend(cur_docs)\n labels.extend(cur_labels)\n problematic_ids.extend(cur_problematic_ids)\n\n if batch_size is not None and len(docs) >= batch_size:\n if len(problematic_ids) > 0:\n logger.warning(\n \"Could not convert an answer for %s questions.\\n\"\n \"There were conversion errors for question ids: %s\",\n len(problematic_ids),\n problematic_ids,\n )\n yield docs, labels\n docs = []\n labels = []\n problematic_ids = []\n\n yield docs, labels\n\n\ndef squad_json_to_jsonl(squad_file: str, output_file: str):\n \"\"\"\n Converts a SQuAD-json-file into jsonl format with one document per line.\n\n :param squad_file: SQuAD-file in json format.\n :param output_file: Name of output file (SQuAD in jsonl format)\n \"\"\"\n with open(squad_file, encoding=\"utf-8\") as json_file, open(output_file, \"w\", encoding=\"utf-8\") as jsonl_file:\n squad_json = json.load(json_file)\n\n for doc in squad_json[\"data\"]:\n json.dump(doc, jsonl_file)\n jsonl_file.write(\"\\n\")\n\n\ndef _extract_docs_and_labels_from_dict(\n document_dict: Dict, preprocessor: Optional[PreProcessor] = None, open_domain: bool = False\n):\n \"\"\"\n Set open_domain to True if you are trying to load open_domain labels (i.e. labels without doc id or start idx)\n \"\"\"\n docs = []\n labels = []\n problematic_ids = []\n\n # get all extra fields from document level (e.g. 
title)\n meta_doc = {k: v for k, v in document_dict.items() if k not in (\"paragraphs\", \"title\")}\n for paragraph in document_dict[\"paragraphs\"]:\n ## Create Metadata\n cur_meta = {\"name\": document_dict.get(\"title\", None)}\n # all other fields from paragraph level\n meta_paragraph = {k: v for k, v in paragraph.items() if k not in (\"qas\", \"context\")}\n cur_meta.update(meta_paragraph)\n # meta from parent document\n cur_meta.update(meta_doc)\n\n ## Create Document\n cur_full_doc = Document(content=paragraph[\"context\"], meta=cur_meta)\n if preprocessor is not None:\n splits_docs = preprocessor.process(documents=[cur_full_doc])\n # we need to pull in _split_id into the document id for unique reference in labels\n splits: List[Document] = []\n offset = 0\n for d in splits_docs:\n id = f\"{d.id}-{d.meta['_split_id']}\"\n d.meta[\"_split_offset\"] = offset\n offset += len(d.content)\n # offset correction based on splitting method\n if preprocessor.split_by == \"word\":\n offset += 1\n elif preprocessor.split_by == \"passage\":\n offset += 2\n else:\n raise NotImplementedError\n mydoc = Document(content=d.content, id=id, meta=d.meta)\n splits.append(mydoc)\n else:\n splits = [cur_full_doc]\n docs.extend(splits)\n\n ## Assign Labels to corresponding documents\n for qa in paragraph[\"qas\"]:\n meta_qa = {\n k: v for k, v in qa.items() if k not in (\"is_impossible\", \"answers\", \"question\", \"id\", \"missing\")\n }\n if not qa.get(\"is_impossible\", False):\n for answer in qa[\"answers\"]:\n ans = answer[\"text\"]\n # TODO The following block of code means that answer_start is never calculated\n # and cur_id is always None for open_domain\n # This can be rewritten so that this function could try to calculate offsets\n # and populate id in open_domain mode\n if open_domain:\n # TODO check with Branden why we want to treat open_domain here differently.\n # Shouldn't this be something configured at eval time only?\n cur_ans_start = answer.get(\"answer_start\", 0)\n # cur_id = '0'\n label = Label(\n query=qa[\"question\"],\n answer=Answer(answer=ans, type=\"extractive\", score=0.0),\n document=None, # type: ignore\n is_correct_answer=True,\n is_correct_document=True,\n origin=\"gold-label\",\n meta=meta_qa,\n )\n labels.append(label)\n else:\n ans_position = cur_full_doc.content[\n answer[\"answer_start\"] : answer[\"answer_start\"] + len(str(ans))\n ]\n if ans != ans_position:\n # do not use answer\n problematic_ids.append(qa.get(\"id\", \"missing\"))\n break\n # find corresponding document or split\n if len(splits) == 1:\n # cur_id = splits[0].id\n cur_ans_start = answer[\"answer_start\"]\n cur_doc = splits[0]\n else:\n for s in splits:\n # If answer start offset is contained in passage we assign the label to that passage\n if (answer[\"answer_start\"] >= s.meta[\"_split_offset\"]) and (\n answer[\"answer_start\"] < (s.meta[\"_split_offset\"] + len(s.content))\n ):\n cur_doc = s\n cur_ans_start = answer[\"answer_start\"] - s.meta[\"_split_offset\"]\n # If a document is splitting an answer we add the whole answer text to the document\n if s.content[cur_ans_start : cur_ans_start + len(ans)] != ans:\n s.content = s.content[:cur_ans_start] + ans\n break\n cur_answer = Answer(\n answer=ans,\n type=\"extractive\",\n score=0.0,\n context=cur_doc.content,\n offsets_in_document=[Span(start=cur_ans_start, end=cur_ans_start + len(ans))],\n offsets_in_context=[Span(start=cur_ans_start, end=cur_ans_start + len(ans))],\n document_ids=[cur_doc.id],\n )\n label = Label(\n query=qa[\"question\"],\n 
answer=cur_answer,\n document=cur_doc,\n is_correct_answer=True,\n is_correct_document=True,\n origin=\"gold-label\",\n meta=meta_qa,\n )\n labels.append(label)\n else:\n # for no_answer we need to assign each split as not fitting to the question\n for s in splits:\n label = Label(\n query=qa[\"question\"],\n answer=Answer(\n answer=\"\",\n type=\"extractive\",\n score=0.0,\n offsets_in_document=[Span(start=0, end=0)],\n offsets_in_context=[Span(start=0, end=0)],\n ),\n document=s,\n is_correct_answer=True,\n is_correct_document=True,\n origin=\"gold-label\",\n meta=meta_qa,\n )\n\n labels.append(label)\n\n return docs, labels, problematic_ids\n\n\ndef convert_date_to_rfc3339(date: str) -> str:\n \"\"\"\n Converts a date to RFC3339 format, as Weaviate requires dates to be in RFC3339 format including the time and\n timezone.\n\n If the provided date string does not contain a time and/or timezone, we use 00:00 as default time\n and UTC as default time zone.\n\n This method cannot be part of WeaviateDocumentStore, as this would result in a circular import between weaviate.py\n and filter_utils.py.\n \"\"\"\n parsed_datetime = datetime.fromisoformat(date)\n if parsed_datetime.utcoffset() is None:\n converted_date = parsed_datetime.isoformat() + \"Z\"\n else:\n converted_date = parsed_datetime.isoformat()\n\n return converted_date\n", "path": "haystack/document_stores/utils.py"}]} | 3,572 | 295 |