problem_id (string, 18-22 chars) | source (string, 1 distinct value) | task_type (string, 1 distinct value) | in_source_id (string, 13-58 chars) | prompt (string, 1.1k-25.4k chars) | golden_diff (string, 145-5.13k chars) | verification_info (string, 582-39.1k chars) | num_tokens (int64, 271-4.1k) | num_tokens_diff (int64, 47-1.02k)
---|---|---|---|---|---|---|---|---|
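The rows below are far easier to inspect programmatically than in this flattened dump. As a sketch only, assuming these records are hosted on the Hugging Face Hub under the repository id shown in the `source` column and that a `train` split exists (neither is stated in this dump), loading them might look like:

```python
# Sketch under stated assumptions: the dataset id and split name are guesses,
# not confirmed by this dump.
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")
row = ds[0]
print(row["problem_id"], row["in_source_id"])    # e.g. gh_patches_debug_28625, ray-project__ray-1094
print(row["num_tokens"], row["num_tokens_diff"])
```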
gh_patches_debug_28625 | rasdani/github-patches | git_diff | ray-project__ray-1094 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Ray fails to compile from master branch with Python 3 only
Relevant log output:
```
+ git clone https://github.com/ray-project/catapult.git /tmp/pip-_r2ylpv0-build/src/thirdparty/catapult
Cloning into '/tmp/pip-_r2ylpv0-build/src/thirdparty/catapult'...
+ break
+ pushd /tmp/pip-_r2ylpv0-build/src/thirdparty/catapult
/tmp/pip-_r2ylpv0-build/src/thirdparty/catapult /tmp/pip-_r2ylpv0-build/python
+ git checkout 18cd334755701cf0c3b90b7172126c686d2eb787
Note: checking out '18cd334755701cf0c3b90b7172126c686d2eb787'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
HEAD is now at 18cd334... Allow timeline scroll-to-zoom without holding ALT
+ popd
/tmp/pip-_r2ylpv0-build/python
+ [[ ! -f /tmp/pip-_r2ylpv0-build/src/thirdparty/../../python/ray/core/src/catapult_files/index.html ]]
+ python2 /tmp/pip-_r2ylpv0-build/src/thirdparty/catapult/tracing/bin/vulcanize_trace_viewer --config chrome --output /tmp/pip-_r2ylpv0-build/src/thirdparty/../../python/ray/core/src/catapult_files/trace_viewer_full.html
/tmp/pip-_r2ylpv0-build/src/thirdparty/build_ui.sh: line 52: python2: command not found
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-_r2ylpv0-build/python/setup.py", line 99, in <module>
license="Apache 2.0")
File "/usr/local/lib/python3.6/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/local/lib/python3.6/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/local/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.6/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/usr/local/lib/python3.6/distutils/command/install.py", line 545, in run
self.run_command('build')
File "/usr/local/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.6/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/local/lib/python3.6/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/tmp/pip-_r2ylpv0-build/python/setup.py", line 38, in run
subprocess.check_call(["../build.sh", sys.executable])
File "/usr/local/lib/python3.6/subprocess.py", line 291, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['../build.sh', '/usr/local/bin/python3.6']' returned non-zero exit status 127.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/setup.py`
Content:
```
1 from __future__ import absolute_import
2 from __future__ import division
3 from __future__ import print_function
4
5 import os
6 import shutil
7 import subprocess
8 import sys
9
10 from setuptools import setup, find_packages, Distribution
11 import setuptools.command.build_ext as _build_ext
12
13 # Ideally, we could include these files by putting them in a
14 # MANIFEST.in or using the package_data argument to setup, but the
15 # MANIFEST.in gets applied at the very beginning when setup.py runs
16 # before these files have been created, so we have to move the files
17 # manually.
18 ray_files = [
19 "ray/core/src/common/thirdparty/redis/src/redis-server",
20 "ray/core/src/common/redis_module/libray_redis_module.so",
21 "ray/core/src/plasma/plasma_store",
22 "ray/core/src/plasma/plasma_manager",
23 "ray/core/src/local_scheduler/local_scheduler",
24 "ray/core/src/local_scheduler/liblocal_scheduler_library.so",
25 "ray/core/src/global_scheduler/global_scheduler",
26 "ray/core/src/catapult_files/index.html",
27 "ray/core/src/catapult_files/trace_viewer_full.html",
28 "ray/WebUI.ipynb"
29 ]
30
31
32 class build_ext(_build_ext.build_ext):
33 def run(self):
34 # Note: We are passing in sys.executable so that we use the same
35 # version of Python to build pyarrow inside the build.sh script. Note
36 # that certain flags will not be passed along such as --user or sudo.
37 # TODO(rkn): Fix this.
38 subprocess.check_call(["../build.sh", sys.executable])
39
40 # We also need to install pyarrow along with Ray, so make sure that the
41 # relevant non-Python pyarrow files get copied.
42 pyarrow_files = [
43 os.path.join("ray/pyarrow_files/pyarrow", filename)
44 for filename in os.listdir("./ray/pyarrow_files/pyarrow")
45 if not os.path.isdir(os.path.join("ray/pyarrow_files/pyarrow",
46 filename))]
47
48 files_to_include = ray_files + pyarrow_files
49
50 for filename in files_to_include:
51 self.move_file(filename)
52 # Copy over the autogenerated flatbuffer Python bindings.
53 generated_python_directory = "ray/core/generated"
54 for filename in os.listdir(generated_python_directory):
55 if filename[-3:] == ".py":
56 self.move_file(os.path.join(generated_python_directory,
57 filename))
58
59 def move_file(self, filename):
60 # TODO(rkn): This feels very brittle. It may not handle all cases. See
61 # https://github.com/apache/arrow/blob/master/python/setup.py for an
62 # example.
63 source = filename
64 destination = os.path.join(self.build_lib, filename)
65 # Create the target directory if it doesn't already exist.
66 parent_directory = os.path.dirname(destination)
67 if not os.path.exists(parent_directory):
68 os.makedirs(parent_directory)
69 print("Copying {} to {}.".format(source, destination))
70 shutil.copy(source, destination)
71
72
73 class BinaryDistribution(Distribution):
74 def has_ext_modules(self):
75 return True
76
77
78 setup(name="ray",
79 version="0.2.1",
80 packages=find_packages(),
81 cmdclass={"build_ext": build_ext},
82 # The BinaryDistribution argument triggers build_ext.
83 distclass=BinaryDistribution,
84 install_requires=["numpy",
85 "funcsigs",
86 "click",
87 "colorama",
88 "psutil",
89 "pytest",
90 "redis",
91 "cloudpickle >= 0.2.2",
92 # The six module is required by pyarrow.
93 "six >= 1.0.0",
94 "flatbuffers"],
95 setup_requires=["cython >= 0.23"],
96 entry_points={"console_scripts": ["ray=ray.scripts.scripts:main"]},
97 include_package_data=True,
98 zip_safe=False,
99 license="Apache 2.0")
100
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/setup.py b/python/setup.py
--- a/python/setup.py
+++ b/python/setup.py
@@ -23,11 +23,23 @@
"ray/core/src/local_scheduler/local_scheduler",
"ray/core/src/local_scheduler/liblocal_scheduler_library.so",
"ray/core/src/global_scheduler/global_scheduler",
- "ray/core/src/catapult_files/index.html",
- "ray/core/src/catapult_files/trace_viewer_full.html",
"ray/WebUI.ipynb"
]
+optional_ray_files = []
+
+ray_ui_files = [
+ "ray/core/src/catapult_files/index.html",
+ "ray/core/src/catapult_files/trace_viewer_full.html"
+]
+
+# The UI files are mandatory if the INCLUDE_UI environment variable equals 1.
+# Otherwise, they are optional.
+if "INCLUDE_UI" in os.environ and os.environ["INCLUDE_UI"] == "1":
+ ray_files += ray_ui_files
+else:
+ optional_ray_files += ray_ui_files
+
class build_ext(_build_ext.build_ext):
def run(self):
@@ -56,6 +68,14 @@
self.move_file(os.path.join(generated_python_directory,
filename))
+ # Try to copy over the optional files.
+ for filename in optional_ray_files:
+ try:
+ self.move_file(filename)
+ except Exception as e:
+ print("Failed to copy optional file {}. This is ok."
+ .format(filename))
+
def move_file(self, filename):
# TODO(rkn): This feels very brittle. It may not handle all cases. See
# https://github.com/apache/arrow/blob/master/python/setup.py for an
|
{"golden_diff": "diff --git a/python/setup.py b/python/setup.py\n--- a/python/setup.py\n+++ b/python/setup.py\n@@ -23,11 +23,23 @@\n \"ray/core/src/local_scheduler/local_scheduler\",\n \"ray/core/src/local_scheduler/liblocal_scheduler_library.so\",\n \"ray/core/src/global_scheduler/global_scheduler\",\n- \"ray/core/src/catapult_files/index.html\",\n- \"ray/core/src/catapult_files/trace_viewer_full.html\",\n \"ray/WebUI.ipynb\"\n ]\n \n+optional_ray_files = []\n+\n+ray_ui_files = [\n+ \"ray/core/src/catapult_files/index.html\",\n+ \"ray/core/src/catapult_files/trace_viewer_full.html\"\n+]\n+\n+# The UI files are mandatory if the INCLUDE_UI environment variable equals 1.\n+# Otherwise, they are optional.\n+if \"INCLUDE_UI\" in os.environ and os.environ[\"INCLUDE_UI\"] == \"1\":\n+ ray_files += ray_ui_files\n+else:\n+ optional_ray_files += ray_ui_files\n+\n \n class build_ext(_build_ext.build_ext):\n def run(self):\n@@ -56,6 +68,14 @@\n self.move_file(os.path.join(generated_python_directory,\n filename))\n \n+ # Try to copy over the optional files.\n+ for filename in optional_ray_files:\n+ try:\n+ self.move_file(filename)\n+ except Exception as e:\n+ print(\"Failed to copy optional file {}. This is ok.\"\n+ .format(filename))\n+\n def move_file(self, filename):\n # TODO(rkn): This feels very brittle. It may not handle all cases. See\n # https://github.com/apache/arrow/blob/master/python/setup.py for an\n", "issue": "Ray fails to compile from master branch with Python 3 only\nRelevant log output:\r\n\r\n```\r\n + git clone https://github.com/ray-project/catapult.git /tmp/pip-_r2ylpv0-build/src/thirdparty/catapult\r\n Cloning into '/tmp/pip-_r2ylpv0-build/src/thirdparty/catapult'...\r\n + break\r\n + pushd /tmp/pip-_r2ylpv0-build/src/thirdparty/catapult\r\n /tmp/pip-_r2ylpv0-build/src/thirdparty/catapult /tmp/pip-_r2ylpv0-build/python\r\n + git checkout 18cd334755701cf0c3b90b7172126c686d2eb787\r\n Note: checking out '18cd334755701cf0c3b90b7172126c686d2eb787'.\r\n \r\n You are in 'detached HEAD' state. You can look around, make experimental\r\n changes and commit them, and you can discard any commits you make in this\r\n state without impacting any branches by performing another checkout.\r\n \r\n If you want to create a new branch to retain commits you create, you may\r\n do so (now or later) by using -b with the checkout command again. Example:\r\n \r\n git checkout -b <new-branch-name>\r\n \r\n HEAD is now at 18cd334... Allow timeline scroll-to-zoom without holding ALT\r\n + popd\r\n /tmp/pip-_r2ylpv0-build/python\r\n + [[ ! 
-f /tmp/pip-_r2ylpv0-build/src/thirdparty/../../python/ray/core/src/catapult_files/index.html ]]\r\n + python2 /tmp/pip-_r2ylpv0-build/src/thirdparty/catapult/tracing/bin/vulcanize_trace_viewer --config chrome --output /tmp/pip-_r2ylpv0-build/src/thirdparty/../../python/ray/core/src/catapult_files/trace_viewer_full.html\r\n /tmp/pip-_r2ylpv0-build/src/thirdparty/build_ui.sh: line 52: python2: command not found\r\n Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/tmp/pip-_r2ylpv0-build/python/setup.py\", line 99, in <module>\r\n license=\"Apache 2.0\")\r\n File \"/usr/local/lib/python3.6/distutils/core.py\", line 148, in setup\r\n dist.run_commands()\r\n File \"/usr/local/lib/python3.6/distutils/dist.py\", line 955, in run_commands\r\n self.run_command(cmd)\r\n File \"/usr/local/lib/python3.6/distutils/dist.py\", line 974, in run_command\r\n cmd_obj.run()\r\n File \"/usr/local/lib/python3.6/site-packages/setuptools/command/install.py\", line 61, in run\r\n return orig.install.run(self)\r\n File \"/usr/local/lib/python3.6/distutils/command/install.py\", line 545, in run\r\n self.run_command('build')\r\n File \"/usr/local/lib/python3.6/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File \"/usr/local/lib/python3.6/distutils/dist.py\", line 974, in run_command\r\n cmd_obj.run()\r\n File \"/usr/local/lib/python3.6/distutils/command/build.py\", line 135, in run\r\n self.run_command(cmd_name)\r\n File \"/usr/local/lib/python3.6/distutils/cmd.py\", line 313, in run_command\r\n self.distribution.run_command(command)\r\n File \"/usr/local/lib/python3.6/distutils/dist.py\", line 974, in run_command\r\n cmd_obj.run()\r\n File \"/tmp/pip-_r2ylpv0-build/python/setup.py\", line 38, in run\r\n subprocess.check_call([\"../build.sh\", sys.executable])\r\n File \"/usr/local/lib/python3.6/subprocess.py\", line 291, in check_call\r\n raise CalledProcessError(retcode, cmd)\r\n subprocess.CalledProcessError: Command '['../build.sh', '/usr/local/bin/python3.6']' returned non-zero exit status 127.\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nfrom setuptools import setup, find_packages, Distribution\nimport setuptools.command.build_ext as _build_ext\n\n# Ideally, we could include these files by putting them in a\n# MANIFEST.in or using the package_data argument to setup, but the\n# MANIFEST.in gets applied at the very beginning when setup.py runs\n# before these files have been created, so we have to move the files\n# manually.\nray_files = [\n \"ray/core/src/common/thirdparty/redis/src/redis-server\",\n \"ray/core/src/common/redis_module/libray_redis_module.so\",\n \"ray/core/src/plasma/plasma_store\",\n \"ray/core/src/plasma/plasma_manager\",\n \"ray/core/src/local_scheduler/local_scheduler\",\n \"ray/core/src/local_scheduler/liblocal_scheduler_library.so\",\n \"ray/core/src/global_scheduler/global_scheduler\",\n \"ray/core/src/catapult_files/index.html\",\n \"ray/core/src/catapult_files/trace_viewer_full.html\",\n \"ray/WebUI.ipynb\"\n]\n\n\nclass build_ext(_build_ext.build_ext):\n def run(self):\n # Note: We are passing in sys.executable so that we use the same\n # version of Python to build pyarrow inside the build.sh script. 
Note\n # that certain flags will not be passed along such as --user or sudo.\n # TODO(rkn): Fix this.\n subprocess.check_call([\"../build.sh\", sys.executable])\n\n # We also need to install pyarrow along with Ray, so make sure that the\n # relevant non-Python pyarrow files get copied.\n pyarrow_files = [\n os.path.join(\"ray/pyarrow_files/pyarrow\", filename)\n for filename in os.listdir(\"./ray/pyarrow_files/pyarrow\")\n if not os.path.isdir(os.path.join(\"ray/pyarrow_files/pyarrow\",\n filename))]\n\n files_to_include = ray_files + pyarrow_files\n\n for filename in files_to_include:\n self.move_file(filename)\n # Copy over the autogenerated flatbuffer Python bindings.\n generated_python_directory = \"ray/core/generated\"\n for filename in os.listdir(generated_python_directory):\n if filename[-3:] == \".py\":\n self.move_file(os.path.join(generated_python_directory,\n filename))\n\n def move_file(self, filename):\n # TODO(rkn): This feels very brittle. It may not handle all cases. See\n # https://github.com/apache/arrow/blob/master/python/setup.py for an\n # example.\n source = filename\n destination = os.path.join(self.build_lib, filename)\n # Create the target directory if it doesn't already exist.\n parent_directory = os.path.dirname(destination)\n if not os.path.exists(parent_directory):\n os.makedirs(parent_directory)\n print(\"Copying {} to {}.\".format(source, destination))\n shutil.copy(source, destination)\n\n\nclass BinaryDistribution(Distribution):\n def has_ext_modules(self):\n return True\n\n\nsetup(name=\"ray\",\n version=\"0.2.1\",\n packages=find_packages(),\n cmdclass={\"build_ext\": build_ext},\n # The BinaryDistribution argument triggers build_ext.\n distclass=BinaryDistribution,\n install_requires=[\"numpy\",\n \"funcsigs\",\n \"click\",\n \"colorama\",\n \"psutil\",\n \"pytest\",\n \"redis\",\n \"cloudpickle >= 0.2.2\",\n # The six module is required by pyarrow.\n \"six >= 1.0.0\",\n \"flatbuffers\"],\n setup_requires=[\"cython >= 0.23\"],\n entry_points={\"console_scripts\": [\"ray=ray.scripts.scripts:main\"]},\n include_package_data=True,\n zip_safe=False,\n license=\"Apache 2.0\")\n", "path": "python/setup.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport shutil\nimport subprocess\nimport sys\n\nfrom setuptools import setup, find_packages, Distribution\nimport setuptools.command.build_ext as _build_ext\n\n# Ideally, we could include these files by putting them in a\n# MANIFEST.in or using the package_data argument to setup, but the\n# MANIFEST.in gets applied at the very beginning when setup.py runs\n# before these files have been created, so we have to move the files\n# manually.\nray_files = [\n \"ray/core/src/common/thirdparty/redis/src/redis-server\",\n \"ray/core/src/common/redis_module/libray_redis_module.so\",\n \"ray/core/src/plasma/plasma_store\",\n \"ray/core/src/plasma/plasma_manager\",\n \"ray/core/src/local_scheduler/local_scheduler\",\n \"ray/core/src/local_scheduler/liblocal_scheduler_library.so\",\n \"ray/core/src/global_scheduler/global_scheduler\",\n \"ray/WebUI.ipynb\"\n]\n\noptional_ray_files = []\n\nray_ui_files = [\n \"ray/core/src/catapult_files/index.html\",\n \"ray/core/src/catapult_files/trace_viewer_full.html\"\n]\n\n# The UI files are mandatory if the INCLUDE_UI environment variable equals 1.\n# Otherwise, they are optional.\nif \"INCLUDE_UI\" in os.environ and os.environ[\"INCLUDE_UI\"] == \"1\":\n ray_files += 
ray_ui_files\nelse:\n optional_ray_files += ray_ui_files\n\n\nclass build_ext(_build_ext.build_ext):\n def run(self):\n # Note: We are passing in sys.executable so that we use the same\n # version of Python to build pyarrow inside the build.sh script. Note\n # that certain flags will not be passed along such as --user or sudo.\n # TODO(rkn): Fix this.\n subprocess.check_call([\"../build.sh\", sys.executable])\n\n # We also need to install pyarrow along with Ray, so make sure that the\n # relevant non-Python pyarrow files get copied.\n pyarrow_files = [\n os.path.join(\"ray/pyarrow_files/pyarrow\", filename)\n for filename in os.listdir(\"./ray/pyarrow_files/pyarrow\")\n if not os.path.isdir(os.path.join(\"ray/pyarrow_files/pyarrow\",\n filename))]\n\n files_to_include = ray_files + pyarrow_files\n\n for filename in files_to_include:\n self.move_file(filename)\n # Copy over the autogenerated flatbuffer Python bindings.\n generated_python_directory = \"ray/core/generated\"\n for filename in os.listdir(generated_python_directory):\n if filename[-3:] == \".py\":\n self.move_file(os.path.join(generated_python_directory,\n filename))\n\n # Try to copy over the optional files.\n for filename in optional_ray_files:\n try:\n self.move_file(filename)\n except Exception as e:\n print(\"Failed to copy optional file {}. This is ok.\"\n .format(filename))\n\n def move_file(self, filename):\n # TODO(rkn): This feels very brittle. It may not handle all cases. See\n # https://github.com/apache/arrow/blob/master/python/setup.py for an\n # example.\n source = filename\n destination = os.path.join(self.build_lib, filename)\n # Create the target directory if it doesn't already exist.\n parent_directory = os.path.dirname(destination)\n if not os.path.exists(parent_directory):\n os.makedirs(parent_directory)\n print(\"Copying {} to {}.\".format(source, destination))\n shutil.copy(source, destination)\n\n\nclass BinaryDistribution(Distribution):\n def has_ext_modules(self):\n return True\n\n\nsetup(name=\"ray\",\n version=\"0.2.1\",\n packages=find_packages(),\n cmdclass={\"build_ext\": build_ext},\n # The BinaryDistribution argument triggers build_ext.\n distclass=BinaryDistribution,\n install_requires=[\"numpy\",\n \"funcsigs\",\n \"click\",\n \"colorama\",\n \"psutil\",\n \"pytest\",\n \"redis\",\n \"cloudpickle >= 0.2.2\",\n # The six module is required by pyarrow.\n \"six >= 1.0.0\",\n \"flatbuffers\"],\n setup_requires=[\"cython >= 0.23\"],\n entry_points={\"console_scripts\": [\"ray=ray.scripts.scripts:main\"]},\n include_package_data=True,\n zip_safe=False,\n license=\"Apache 2.0\")\n", "path": "python/setup.py"}]}
| 2,244 | 366 |
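Editorial aside (not part of the dataset row above): the golden diff for ray-project__ray-1094 resolves the Python 3-only build failure by treating the catapult UI files as optional package data unless `INCLUDE_UI=1` is set, copying them best-effort instead of failing the install. A minimal, self-contained sketch of that general pattern, with illustrative names rather than Ray's actual `setup.py`, is:

```python
# Illustrative sketch of the optional-package-files pattern; not Ray's actual setup.py.
import os
import shutil


def copy_package_files(build_lib, required_files, optional_files):
    """Copy required files (fail loudly) and optional files (warn and continue)."""
    def _copy(src):
        dst = os.path.join(build_lib, src)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy(src, dst)

    for path in required_files:
        _copy(path)  # a missing required file raises and aborts the build
    for path in optional_files:
        try:
            _copy(path)
        except OSError:
            print(f"Failed to copy optional file {path}. This is ok.")
```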
gh_patches_debug_15050 | rasdani/github-patches | git_diff | ansible__awx-8136 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support `!include` and `!import` constructors in `awx import -f yaml` command
<!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:
- http://webchat.freenode.net/?channels=ansible-awx
- https://groups.google.com/forum/#!forum/awx-project
We have to limit this because of limited volunteer time to respond to issues! -->
##### ISSUE TYPE
- Feature Idea
##### SUMMARY
<!-- Briefly describe the problem or desired enhancement. -->
Support `!import` and `!include` with `awx import -f yaml` command. This is useful when working with a large number of AWX resources. This allows breaking config into smaller files, and having a top-level file like this:
```yaml
# resources.yaml
projects:
- !include projects/project1.yaml
- !include projects/project2.yaml
job_templates:
- !include jobs/job1.yaml
# ....
```
Which allows managing resources in individual files (e.g. using a friendly directory structure), and still being able to import with `awx import -f yaml < resources.yaml`.
#### SUGGESTED IMPLEMENTATION
Use `awxkit.yaml_file.Loader` instead of `yaml.safe_load` in https://github.com/ansible/awx/blob/devel/awxkit/awxkit/cli/resource.py#L143
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awxkit/awxkit/cli/resource.py`
Content:
```
1 import yaml
2 import json
3 import os
4
5 from awxkit import api, config
6 from awxkit.exceptions import ImportExportError
7 from awxkit.utils import to_str
8 from awxkit.api.pages import Page
9 from awxkit.api.pages.api import EXPORTABLE_RESOURCES
10 from awxkit.cli.format import FORMATTERS, format_response, add_authentication_arguments
11 from awxkit.cli.utils import CustomRegistryMeta, cprint
12
13
14 CONTROL_RESOURCES = ['ping', 'config', 'me', 'metrics']
15
16 DEPRECATED_RESOURCES = {
17 'ad_hoc_commands': 'ad_hoc',
18 'applications': 'application',
19 'credentials': 'credential',
20 'credential_types': 'credential_type',
21 'groups': 'group',
22 'hosts': 'host',
23 'instances': 'instance',
24 'instance_groups': 'instance_group',
25 'inventory': 'inventories',
26 'inventory_scripts': 'inventory_script',
27 'inventory_sources': 'inventory_source',
28 'inventory_updates': 'inventory_update',
29 'jobs': 'job',
30 'job_templates': 'job_template',
31 'labels': 'label',
32 'workflow_job_template_nodes': 'node',
33 'notification_templates': 'notification_template',
34 'organizations': 'organization',
35 'projects': 'project',
36 'project_updates': 'project_update',
37 'roles': 'role',
38 'schedules': 'schedule',
39 'settings': 'setting',
40 'teams': 'team',
41 'workflow_job_templates': 'workflow',
42 'workflow_jobs': 'workflow_job',
43 'users': 'user'
44 }
45 DEPRECATED_RESOURCES_REVERSE = dict(
46 (v, k) for k, v in DEPRECATED_RESOURCES.items()
47 )
48
49
50 class CustomCommand(metaclass=CustomRegistryMeta):
51 """Base class for implementing custom commands.
52
53 Custom commands represent static code which should run - they are
54 responsible for returning and formatting their own output (which may or may
55 not be JSON/YAML).
56 """
57
58 help_text = ''
59
60 @property
61 def name(self):
62 raise NotImplementedError()
63
64 def handle(self, client, parser):
65 """To be implemented by subclasses.
66 Should return a dictionary that is JSON serializable
67 """
68 raise NotImplementedError()
69
70
71 class Login(CustomCommand):
72 name = 'login'
73 help_text = 'authenticate and retrieve an OAuth2 token'
74
75 def print_help(self, parser):
76 add_authentication_arguments(parser, os.environ)
77 parser.print_help()
78
79 def handle(self, client, parser):
80 auth = parser.add_argument_group('OAuth2.0 Options')
81 auth.add_argument('--description', help='description of the generated OAuth2.0 token', metavar='TEXT')
82 auth.add_argument('--conf.client_id', metavar='TEXT')
83 auth.add_argument('--conf.client_secret', metavar='TEXT')
84 auth.add_argument(
85 '--conf.scope', choices=['read', 'write'], default='write'
86 )
87 if client.help:
88 self.print_help(parser)
89 raise SystemExit()
90 parsed = parser.parse_known_args()[0]
91 kwargs = {
92 'client_id': getattr(parsed, 'conf.client_id', None),
93 'client_secret': getattr(parsed, 'conf.client_secret', None),
94 'scope': getattr(parsed, 'conf.scope', None),
95 }
96 if getattr(parsed, 'description', None):
97 kwargs['description'] = parsed.description
98 try:
99 token = api.Api().get_oauth2_token(**kwargs)
100 except Exception as e:
101 self.print_help(parser)
102 cprint(
103 'Error retrieving an OAuth2.0 token ({}).'.format(e.__class__),
104 'red'
105 )
106 else:
107 fmt = client.get_config('format')
108 if fmt == 'human':
109 print('export TOWER_OAUTH_TOKEN={}'.format(token))
110 else:
111 print(to_str(FORMATTERS[fmt]({'token': token}, '.')).strip())
112
113
114 class Config(CustomCommand):
115 name = 'config'
116 help_text = 'print current configuration values'
117
118 def handle(self, client, parser):
119 if client.help:
120 parser.print_help()
121 raise SystemExit()
122 return {
123 'base_url': config.base_url,
124 'token': client.get_config('token'),
125 'use_sessions': config.use_sessions,
126 'credentials': config.credentials,
127 }
128
129
130 class Import(CustomCommand):
131 name = 'import'
132 help_text = 'import resources into Tower'
133
134 def handle(self, client, parser):
135 if client.help:
136 parser.print_help()
137 raise SystemExit()
138
139 fmt = client.get_config('format')
140 if fmt == 'json':
141 data = json.load(client.stdin)
142 elif fmt == 'yaml':
143 data = yaml.safe_load(client.stdin)
144 else:
145 raise ImportExportError("Unsupported format for Import: " + fmt)
146
147 client.authenticate()
148 client.v2.import_assets(data)
149
150 return {}
151
152
153 class Export(CustomCommand):
154 name = 'export'
155 help_text = 'export resources from Tower'
156
157 def extend_parser(self, parser):
158 resources = parser.add_argument_group('resources')
159
160 for resource in EXPORTABLE_RESOURCES:
161 # This parsing pattern will result in 3 different possible outcomes:
162 # 1) the resource flag is not used at all, which will result in the attr being None
163 # 2) the resource flag is used with no argument, which will result in the attr being ''
164 # 3) the resource flag is used with an argument, and the attr will be that argument's value
165 resources.add_argument('--{}'.format(resource), nargs='?', const='')
166
167 def handle(self, client, parser):
168 self.extend_parser(parser)
169
170 if client.help:
171 parser.print_help()
172 raise SystemExit()
173
174 parsed = parser.parse_known_args()[0]
175 kwargs = {resource: getattr(parsed, resource, None) for resource in EXPORTABLE_RESOURCES}
176
177 client.authenticate()
178 return client.v2.export_assets(**kwargs)
179
180
181 def parse_resource(client, skip_deprecated=False):
182 subparsers = client.parser.add_subparsers(
183 dest='resource',
184 metavar='resource',
185 )
186
187 # check if the user is running a custom command
188 for command in CustomCommand.__subclasses__():
189 client.subparsers[command.name] = subparsers.add_parser(
190 command.name, help=command.help_text
191 )
192
193 if hasattr(client, 'v2'):
194 for k in client.v2.json.keys():
195 if k in ('dashboard',):
196 # the Dashboard API is deprecated and not supported
197 continue
198
199 # argparse aliases are *only* supported in Python3 (not 2.7)
200 kwargs = {}
201 if not skip_deprecated:
202 if k in DEPRECATED_RESOURCES:
203 kwargs['aliases'] = [DEPRECATED_RESOURCES[k]]
204
205 client.subparsers[k] = subparsers.add_parser(
206 k, help='', **kwargs
207 )
208
209 resource = client.parser.parse_known_args()[0].resource
210 if resource in DEPRECATED_RESOURCES.values():
211 client.argv[
212 client.argv.index(resource)
213 ] = DEPRECATED_RESOURCES_REVERSE[resource]
214 resource = DEPRECATED_RESOURCES_REVERSE[resource]
215
216 if resource in CustomCommand.registry:
217 parser = client.subparsers[resource]
218 command = CustomCommand.registry[resource]()
219 response = command.handle(client, parser)
220 if response:
221 _filter = client.get_config('filter')
222 if (
223 resource == 'config' and
224 client.get_config('format') == 'human'
225 ):
226 response = {
227 'count': len(response),
228 'results': [
229 {'key': k, 'value': v}
230 for k, v in response.items()
231 ]
232 }
233 _filter = 'key, value'
234 try:
235 connection = client.root.connection
236 except AttributeError:
237 connection = None
238 formatted = format_response(
239 Page.from_json(response, connection=connection),
240 fmt=client.get_config('format'),
241 filter=_filter
242 )
243 print(formatted)
244 raise SystemExit()
245 else:
246 return resource
247
248
249 def is_control_resource(resource):
250 # special root level resources that don't don't represent database
251 # entities that follow the list/detail semantic
252 return resource in CONTROL_RESOURCES
253
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/awxkit/awxkit/cli/resource.py b/awxkit/awxkit/cli/resource.py
--- a/awxkit/awxkit/cli/resource.py
+++ b/awxkit/awxkit/cli/resource.py
@@ -2,7 +2,7 @@
import json
import os
-from awxkit import api, config
+from awxkit import api, config, yaml_file
from awxkit.exceptions import ImportExportError
from awxkit.utils import to_str
from awxkit.api.pages import Page
@@ -140,7 +140,7 @@
if fmt == 'json':
data = json.load(client.stdin)
elif fmt == 'yaml':
- data = yaml.safe_load(client.stdin)
+ data = yaml.load(client.stdin, Loader=yaml_file.Loader)
else:
raise ImportExportError("Unsupported format for Import: " + fmt)
|
{"golden_diff": "diff --git a/awxkit/awxkit/cli/resource.py b/awxkit/awxkit/cli/resource.py\n--- a/awxkit/awxkit/cli/resource.py\n+++ b/awxkit/awxkit/cli/resource.py\n@@ -2,7 +2,7 @@\n import json\n import os\n \n-from awxkit import api, config\n+from awxkit import api, config, yaml_file\n from awxkit.exceptions import ImportExportError\n from awxkit.utils import to_str\n from awxkit.api.pages import Page\n@@ -140,7 +140,7 @@\n if fmt == 'json':\n data = json.load(client.stdin)\n elif fmt == 'yaml':\n- data = yaml.safe_load(client.stdin)\n+ data = yaml.load(client.stdin, Loader=yaml_file.Loader)\n else:\n raise ImportExportError(\"Unsupported format for Import: \" + fmt)\n", "issue": "Support `!include` and `!import` constructors in `awx import -f yaml` command\n<!-- Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support, please use:\r\n\r\n- http://webchat.freenode.net/?channels=ansible-awx\r\n- https://groups.google.com/forum/#!forum/awx-project\r\n\r\nWe have to limit this because of limited volunteer time to respond to issues! -->\r\n\r\n##### ISSUE TYPE\r\n - Feature Idea\r\n\r\n##### SUMMARY\r\n<!-- Briefly describe the problem or desired enhancement. -->\r\n\r\nSupport `!import` and `!include` with `awx import -f yaml` command. This is useful when working with a large number of AWX resources. This allows breaking config into smaller files, and having a top-level file like this:\r\n\r\n```yaml\r\n# resources.yaml\r\nprojects:\r\n- !include projects/project1.yaml\r\n- !include projects/project2.yaml\r\njob_templates:\r\n- !include jobs/job1.yaml\r\n\r\n# ....\r\n```\r\n\r\nWhich allows managing resources in individual files (e.g. using a friendly directory structure), and still being able to import with `awx import -f yaml < resources.yaml`.\r\n\r\n#### SUGGESTED IMPLEMENTATION\r\n\r\nUse `awxkit.yaml_file.Loader` instead of `yaml.safe_load` in https://github.com/ansible/awx/blob/devel/awxkit/awxkit/cli/resource.py#L143\r\n\r\n\n", "before_files": [{"content": "import yaml\nimport json\nimport os\n\nfrom awxkit import api, config\nfrom awxkit.exceptions import ImportExportError\nfrom awxkit.utils import to_str\nfrom awxkit.api.pages import Page\nfrom awxkit.api.pages.api import EXPORTABLE_RESOURCES\nfrom awxkit.cli.format import FORMATTERS, format_response, add_authentication_arguments\nfrom awxkit.cli.utils import CustomRegistryMeta, cprint\n\n\nCONTROL_RESOURCES = ['ping', 'config', 'me', 'metrics']\n\nDEPRECATED_RESOURCES = {\n 'ad_hoc_commands': 'ad_hoc',\n 'applications': 'application',\n 'credentials': 'credential',\n 'credential_types': 'credential_type',\n 'groups': 'group',\n 'hosts': 'host',\n 'instances': 'instance',\n 'instance_groups': 'instance_group',\n 'inventory': 'inventories',\n 'inventory_scripts': 'inventory_script',\n 'inventory_sources': 'inventory_source',\n 'inventory_updates': 'inventory_update',\n 'jobs': 'job',\n 'job_templates': 'job_template',\n 'labels': 'label',\n 'workflow_job_template_nodes': 'node',\n 'notification_templates': 'notification_template',\n 'organizations': 'organization',\n 'projects': 'project',\n 'project_updates': 'project_update',\n 'roles': 'role',\n 'schedules': 'schedule',\n 'settings': 'setting',\n 'teams': 'team',\n 'workflow_job_templates': 'workflow',\n 'workflow_jobs': 'workflow_job',\n 'users': 'user'\n}\nDEPRECATED_RESOURCES_REVERSE = dict(\n (v, k) for k, v in DEPRECATED_RESOURCES.items()\n)\n\n\nclass 
CustomCommand(metaclass=CustomRegistryMeta):\n \"\"\"Base class for implementing custom commands.\n\n Custom commands represent static code which should run - they are\n responsible for returning and formatting their own output (which may or may\n not be JSON/YAML).\n \"\"\"\n\n help_text = ''\n\n @property\n def name(self):\n raise NotImplementedError()\n\n def handle(self, client, parser):\n \"\"\"To be implemented by subclasses.\n Should return a dictionary that is JSON serializable\n \"\"\"\n raise NotImplementedError()\n\n\nclass Login(CustomCommand):\n name = 'login'\n help_text = 'authenticate and retrieve an OAuth2 token'\n\n def print_help(self, parser):\n add_authentication_arguments(parser, os.environ)\n parser.print_help()\n\n def handle(self, client, parser):\n auth = parser.add_argument_group('OAuth2.0 Options')\n auth.add_argument('--description', help='description of the generated OAuth2.0 token', metavar='TEXT')\n auth.add_argument('--conf.client_id', metavar='TEXT')\n auth.add_argument('--conf.client_secret', metavar='TEXT')\n auth.add_argument(\n '--conf.scope', choices=['read', 'write'], default='write'\n )\n if client.help:\n self.print_help(parser)\n raise SystemExit()\n parsed = parser.parse_known_args()[0]\n kwargs = {\n 'client_id': getattr(parsed, 'conf.client_id', None),\n 'client_secret': getattr(parsed, 'conf.client_secret', None),\n 'scope': getattr(parsed, 'conf.scope', None),\n }\n if getattr(parsed, 'description', None):\n kwargs['description'] = parsed.description\n try:\n token = api.Api().get_oauth2_token(**kwargs)\n except Exception as e:\n self.print_help(parser)\n cprint(\n 'Error retrieving an OAuth2.0 token ({}).'.format(e.__class__),\n 'red'\n )\n else:\n fmt = client.get_config('format')\n if fmt == 'human':\n print('export TOWER_OAUTH_TOKEN={}'.format(token))\n else:\n print(to_str(FORMATTERS[fmt]({'token': token}, '.')).strip())\n\n\nclass Config(CustomCommand):\n name = 'config'\n help_text = 'print current configuration values'\n\n def handle(self, client, parser):\n if client.help:\n parser.print_help()\n raise SystemExit()\n return {\n 'base_url': config.base_url,\n 'token': client.get_config('token'),\n 'use_sessions': config.use_sessions,\n 'credentials': config.credentials,\n }\n\n\nclass Import(CustomCommand):\n name = 'import'\n help_text = 'import resources into Tower'\n\n def handle(self, client, parser):\n if client.help:\n parser.print_help()\n raise SystemExit()\n\n fmt = client.get_config('format')\n if fmt == 'json':\n data = json.load(client.stdin)\n elif fmt == 'yaml':\n data = yaml.safe_load(client.stdin)\n else:\n raise ImportExportError(\"Unsupported format for Import: \" + fmt)\n\n client.authenticate()\n client.v2.import_assets(data)\n\n return {}\n\n\nclass Export(CustomCommand):\n name = 'export'\n help_text = 'export resources from Tower'\n\n def extend_parser(self, parser):\n resources = parser.add_argument_group('resources')\n\n for resource in EXPORTABLE_RESOURCES:\n # This parsing pattern will result in 3 different possible outcomes:\n # 1) the resource flag is not used at all, which will result in the attr being None\n # 2) the resource flag is used with no argument, which will result in the attr being ''\n # 3) the resource flag is used with an argument, and the attr will be that argument's value\n resources.add_argument('--{}'.format(resource), nargs='?', const='')\n\n def handle(self, client, parser):\n self.extend_parser(parser)\n\n if client.help:\n parser.print_help()\n raise SystemExit()\n\n parsed = 
parser.parse_known_args()[0]\n kwargs = {resource: getattr(parsed, resource, None) for resource in EXPORTABLE_RESOURCES}\n\n client.authenticate()\n return client.v2.export_assets(**kwargs)\n\n\ndef parse_resource(client, skip_deprecated=False):\n subparsers = client.parser.add_subparsers(\n dest='resource',\n metavar='resource',\n )\n\n # check if the user is running a custom command\n for command in CustomCommand.__subclasses__():\n client.subparsers[command.name] = subparsers.add_parser(\n command.name, help=command.help_text\n )\n\n if hasattr(client, 'v2'):\n for k in client.v2.json.keys():\n if k in ('dashboard',):\n # the Dashboard API is deprecated and not supported\n continue\n\n # argparse aliases are *only* supported in Python3 (not 2.7)\n kwargs = {}\n if not skip_deprecated:\n if k in DEPRECATED_RESOURCES:\n kwargs['aliases'] = [DEPRECATED_RESOURCES[k]]\n\n client.subparsers[k] = subparsers.add_parser(\n k, help='', **kwargs\n )\n\n resource = client.parser.parse_known_args()[0].resource\n if resource in DEPRECATED_RESOURCES.values():\n client.argv[\n client.argv.index(resource)\n ] = DEPRECATED_RESOURCES_REVERSE[resource]\n resource = DEPRECATED_RESOURCES_REVERSE[resource]\n\n if resource in CustomCommand.registry:\n parser = client.subparsers[resource]\n command = CustomCommand.registry[resource]()\n response = command.handle(client, parser)\n if response:\n _filter = client.get_config('filter')\n if (\n resource == 'config' and\n client.get_config('format') == 'human'\n ):\n response = {\n 'count': len(response),\n 'results': [\n {'key': k, 'value': v}\n for k, v in response.items()\n ]\n }\n _filter = 'key, value'\n try:\n connection = client.root.connection\n except AttributeError:\n connection = None\n formatted = format_response(\n Page.from_json(response, connection=connection),\n fmt=client.get_config('format'),\n filter=_filter\n )\n print(formatted)\n raise SystemExit()\n else:\n return resource\n\n\ndef is_control_resource(resource):\n # special root level resources that don't don't represent database\n # entities that follow the list/detail semantic\n return resource in CONTROL_RESOURCES\n", "path": "awxkit/awxkit/cli/resource.py"}], "after_files": [{"content": "import yaml\nimport json\nimport os\n\nfrom awxkit import api, config, yaml_file\nfrom awxkit.exceptions import ImportExportError\nfrom awxkit.utils import to_str\nfrom awxkit.api.pages import Page\nfrom awxkit.api.pages.api import EXPORTABLE_RESOURCES\nfrom awxkit.cli.format import FORMATTERS, format_response, add_authentication_arguments\nfrom awxkit.cli.utils import CustomRegistryMeta, cprint\n\n\nCONTROL_RESOURCES = ['ping', 'config', 'me', 'metrics']\n\nDEPRECATED_RESOURCES = {\n 'ad_hoc_commands': 'ad_hoc',\n 'applications': 'application',\n 'credentials': 'credential',\n 'credential_types': 'credential_type',\n 'groups': 'group',\n 'hosts': 'host',\n 'instances': 'instance',\n 'instance_groups': 'instance_group',\n 'inventory': 'inventories',\n 'inventory_scripts': 'inventory_script',\n 'inventory_sources': 'inventory_source',\n 'inventory_updates': 'inventory_update',\n 'jobs': 'job',\n 'job_templates': 'job_template',\n 'labels': 'label',\n 'workflow_job_template_nodes': 'node',\n 'notification_templates': 'notification_template',\n 'organizations': 'organization',\n 'projects': 'project',\n 'project_updates': 'project_update',\n 'roles': 'role',\n 'schedules': 'schedule',\n 'settings': 'setting',\n 'teams': 'team',\n 'workflow_job_templates': 'workflow',\n 'workflow_jobs': 'workflow_job',\n 
'users': 'user'\n}\nDEPRECATED_RESOURCES_REVERSE = dict(\n (v, k) for k, v in DEPRECATED_RESOURCES.items()\n)\n\n\nclass CustomCommand(metaclass=CustomRegistryMeta):\n \"\"\"Base class for implementing custom commands.\n\n Custom commands represent static code which should run - they are\n responsible for returning and formatting their own output (which may or may\n not be JSON/YAML).\n \"\"\"\n\n help_text = ''\n\n @property\n def name(self):\n raise NotImplementedError()\n\n def handle(self, client, parser):\n \"\"\"To be implemented by subclasses.\n Should return a dictionary that is JSON serializable\n \"\"\"\n raise NotImplementedError()\n\n\nclass Login(CustomCommand):\n name = 'login'\n help_text = 'authenticate and retrieve an OAuth2 token'\n\n def print_help(self, parser):\n add_authentication_arguments(parser, os.environ)\n parser.print_help()\n\n def handle(self, client, parser):\n auth = parser.add_argument_group('OAuth2.0 Options')\n auth.add_argument('--description', help='description of the generated OAuth2.0 token', metavar='TEXT')\n auth.add_argument('--conf.client_id', metavar='TEXT')\n auth.add_argument('--conf.client_secret', metavar='TEXT')\n auth.add_argument(\n '--conf.scope', choices=['read', 'write'], default='write'\n )\n if client.help:\n self.print_help(parser)\n raise SystemExit()\n parsed = parser.parse_known_args()[0]\n kwargs = {\n 'client_id': getattr(parsed, 'conf.client_id', None),\n 'client_secret': getattr(parsed, 'conf.client_secret', None),\n 'scope': getattr(parsed, 'conf.scope', None),\n }\n if getattr(parsed, 'description', None):\n kwargs['description'] = parsed.description\n try:\n token = api.Api().get_oauth2_token(**kwargs)\n except Exception as e:\n self.print_help(parser)\n cprint(\n 'Error retrieving an OAuth2.0 token ({}).'.format(e.__class__),\n 'red'\n )\n else:\n fmt = client.get_config('format')\n if fmt == 'human':\n print('export TOWER_OAUTH_TOKEN={}'.format(token))\n else:\n print(to_str(FORMATTERS[fmt]({'token': token}, '.')).strip())\n\n\nclass Config(CustomCommand):\n name = 'config'\n help_text = 'print current configuration values'\n\n def handle(self, client, parser):\n if client.help:\n parser.print_help()\n raise SystemExit()\n return {\n 'base_url': config.base_url,\n 'token': client.get_config('token'),\n 'use_sessions': config.use_sessions,\n 'credentials': config.credentials,\n }\n\n\nclass Import(CustomCommand):\n name = 'import'\n help_text = 'import resources into Tower'\n\n def handle(self, client, parser):\n if client.help:\n parser.print_help()\n raise SystemExit()\n\n fmt = client.get_config('format')\n if fmt == 'json':\n data = json.load(client.stdin)\n elif fmt == 'yaml':\n data = yaml.load(client.stdin, Loader=yaml_file.Loader)\n else:\n raise ImportExportError(\"Unsupported format for Import: \" + fmt)\n\n client.authenticate()\n client.v2.import_assets(data)\n\n return {}\n\n\nclass Export(CustomCommand):\n name = 'export'\n help_text = 'export resources from Tower'\n\n def extend_parser(self, parser):\n resources = parser.add_argument_group('resources')\n\n for resource in EXPORTABLE_RESOURCES:\n # This parsing pattern will result in 3 different possible outcomes:\n # 1) the resource flag is not used at all, which will result in the attr being None\n # 2) the resource flag is used with no argument, which will result in the attr being ''\n # 3) the resource flag is used with an argument, and the attr will be that argument's value\n resources.add_argument('--{}'.format(resource), nargs='?', const='')\n\n def 
handle(self, client, parser):\n self.extend_parser(parser)\n\n if client.help:\n parser.print_help()\n raise SystemExit()\n\n parsed = parser.parse_known_args()[0]\n kwargs = {resource: getattr(parsed, resource, None) for resource in EXPORTABLE_RESOURCES}\n\n client.authenticate()\n return client.v2.export_assets(**kwargs)\n\n\ndef parse_resource(client, skip_deprecated=False):\n subparsers = client.parser.add_subparsers(\n dest='resource',\n metavar='resource',\n )\n\n # check if the user is running a custom command\n for command in CustomCommand.__subclasses__():\n client.subparsers[command.name] = subparsers.add_parser(\n command.name, help=command.help_text\n )\n\n if hasattr(client, 'v2'):\n for k in client.v2.json.keys():\n if k in ('dashboard',):\n # the Dashboard API is deprecated and not supported\n continue\n\n # argparse aliases are *only* supported in Python3 (not 2.7)\n kwargs = {}\n if not skip_deprecated:\n if k in DEPRECATED_RESOURCES:\n kwargs['aliases'] = [DEPRECATED_RESOURCES[k]]\n\n client.subparsers[k] = subparsers.add_parser(\n k, help='', **kwargs\n )\n\n resource = client.parser.parse_known_args()[0].resource\n if resource in DEPRECATED_RESOURCES.values():\n client.argv[\n client.argv.index(resource)\n ] = DEPRECATED_RESOURCES_REVERSE[resource]\n resource = DEPRECATED_RESOURCES_REVERSE[resource]\n\n if resource in CustomCommand.registry:\n parser = client.subparsers[resource]\n command = CustomCommand.registry[resource]()\n response = command.handle(client, parser)\n if response:\n _filter = client.get_config('filter')\n if (\n resource == 'config' and\n client.get_config('format') == 'human'\n ):\n response = {\n 'count': len(response),\n 'results': [\n {'key': k, 'value': v}\n for k, v in response.items()\n ]\n }\n _filter = 'key, value'\n try:\n connection = client.root.connection\n except AttributeError:\n connection = None\n formatted = format_response(\n Page.from_json(response, connection=connection),\n fmt=client.get_config('format'),\n filter=_filter\n )\n print(formatted)\n raise SystemExit()\n else:\n return resource\n\n\ndef is_control_resource(resource):\n # special root level resources that don't don't represent database\n # entities that follow the list/detail semantic\n return resource in CONTROL_RESOURCES\n", "path": "awxkit/awxkit/cli/resource.py"}]}
| 2,978 | 201 |
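A note from the editor, separate from the row data above: the golden diff for ansible__awx-8136 simply swaps `yaml.safe_load` for awxkit's `yaml_file.Loader`, which understands `!include`. For readers unfamiliar with how such a loader is built, here is a hedged, general-purpose sketch of the underlying PyYAML mechanism; it is not awxkit's actual implementation, and the class and function names are illustrative:

```python
# General PyYAML '!include' mechanism, for illustration only; not awxkit.yaml_file.Loader.
import os
import yaml


class IncludeLoader(yaml.SafeLoader):
    """SafeLoader subclass that resolves !include tags into the contents of other YAML files."""


def _construct_include(loader: IncludeLoader, node: yaml.Node):
    # Resolve the included path relative to the including file when one is known.
    base = os.path.dirname(loader.name) if os.path.exists(loader.name) else "."
    path = os.path.join(base, loader.construct_scalar(node))
    with open(path) as handle:
        return yaml.load(handle, IncludeLoader)


IncludeLoader.add_constructor("!include", _construct_include)

# Usage sketch:
# with open("resources.yaml") as f:
#     data = yaml.load(f, Loader=IncludeLoader)
```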
gh_patches_debug_19284 | rasdani/github-patches | git_diff | openai__evals-1395 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`Failed to open: ../registry/data/social_iqa/few_shot.jsonl` with custom registry
### Describe the bug
I have created a new eval within a custom registry and installed evals from pypi (using a recent checkout from github). Calling `oaievals` works fine if I do not use few_show samples, but it does not if you include few shot examples.
As can be seen in the error thread below, the issue is caused by calling `self._prefix_registry_path(self.few_shot_jsonl)` (see [_prefix_registry_path](https://github.com/openai/evals/blob/4b7a66bd45f06156656e021e170e7574f6cde3f5/evals/eval.py#L159C6-L159C6)) which transforms the path string into a `Path` object. Then, `open_by_file_pattern` ([here](https://github.com/openai/evals/blob/4b7a66bd45f06156656e021e170e7574f6cde3f5/evals/data.py#L47)) still treats it as a string, unaware of the fact that it is actually a `Path`
### To Reproduce
1. Create a new eval with few-shot samples.
2. Call `oaieval`, for instance:
```
oaieval text-ada-001 social_iqa_few_shot --registry_path=../registry
```
3. You'll get an error thread as follows:
```
[2023-11-03 14:40:52,931] [registry.py:254] Loading registry from /home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/registry/evals
[2023-11-03 14:40:54,382] [registry.py:254] Loading registry from /home/lorenzo/.evals/evals
[2023-11-03 14:40:54,382] [registry.py:254] Loading registry from ../registry/evals
[2023-11-03 14:40:55,007] [oaieval.py:189] [1;35mRun started: 2311031440552KUYNQGH[0m
[2023-11-03 14:40:55,010] [data.py:90] Fetching ../registry/data/social_iqa/few_shot.jsonl
Traceback (most recent call last):
File "/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/data.py", line 54, in open_by_file_pattern
if filename.endswith(".gz"):
AttributeError: 'PosixPath' object has no attribute 'endswith'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/lorenzo/venv/recog-LLM_capabilities/bin/oaieval", line 8, in <module>
sys.exit(main())
File "/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/cli/oaieval.py", line 274, in main
run(args)
File "/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/cli/oaieval.py", line 215, in run
eval: Eval = eval_class(
File "/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/elsuite/basic/match.py", line 28, in __init__
self.few_shot = evals.get_jsonl(self._prefix_registry_path(self.few_shot_jsonl))
File "/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/data.py", line 129, in get_jsonl
return _get_jsonl_file(path)
File "/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/data.py", line 91, in _get_jsonl_file
with open_by_file_pattern(path, mode="r") as f:
File "/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/data.py", line 75, in open_by_file_pattern
raise RuntimeError(f"Failed to open: {filename}") from e
RuntimeError: Failed to open: ../registry/data/social_iqa/few_shot.jsonl
```
### Code snippets
_No response_
### OS
Ubuntu 20.04
### Python version
python 3.9
### Library version
git+https://github.com/openai/evals.git@dd96814dd96bd64f3098afca8dc873aa8d8ce4c8
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evals/data.py`
Content:
```
1 """
2 This file defines utilities for working with data and files of various types.
3 """
4 import csv
5 import dataclasses
6 import gzip
7 import itertools
8 import json
9 import logging
10 import os
11 import urllib
12 from collections.abc import Iterator
13 from functools import partial
14 from pathlib import Path
15 from typing import Any, List, Optional, Sequence, Text, Union
16
17 import blobfile as bf
18 import lz4.frame
19 import pydantic
20 import pyzstd
21
22 logger = logging.getLogger(__name__)
23
24
25 def gzip_open(filename: str, mode: str = "rb", openhook: Any = open) -> gzip.GzipFile:
26 """Wrap the given openhook in gzip."""
27 if mode and "b" not in mode:
28 mode += "b"
29
30 return gzip.GzipFile(fileobj=openhook(filename, mode), mode=mode)
31
32
33 def lz4_open(filename: str, mode: str = "rb", openhook: Any = open) -> lz4.frame.LZ4FrameFile:
34 if mode and "b" not in mode:
35 mode += "b"
36
37 return lz4.frame.LZ4FrameFile(openhook(filename, mode), mode=mode)
38
39
40 def zstd_open(filename: str, mode: str = "rb", openhook: Any = open) -> pyzstd.ZstdFile:
41 if mode and "b" not in mode:
42 mode += "b"
43
44 return pyzstd.ZstdFile(openhook(filename, mode), mode=mode)
45
46
47 def open_by_file_pattern(filename: str, mode: str = "r", **kwargs: Any) -> Any:
48 """Can read/write to files on gcs/local with or without gzipping. If file
49 is stored on gcs, streams with blobfile. Otherwise use vanilla python open. If
50 filename endswith gz, then zip/unzip contents on the fly (note that gcs paths and
51 gzip are compatible)"""
52 open_fn = partial(bf.BlobFile, **kwargs)
53 try:
54 if filename.endswith(".gz"):
55 return gzip_open(filename, openhook=open_fn, mode=mode)
56 elif filename.endswith(".lz4"):
57 return lz4_open(filename, openhook=open_fn, mode=mode)
58 elif filename.endswith(".zst"):
59 return zstd_open(filename, openhook=open_fn, mode=mode)
60 else:
61 scheme = urllib.parse.urlparse(filename).scheme
62 if (not os.path.exists(filename)) and (scheme == "" or scheme == "file"):
63 return open_fn(
64 os.path.join(
65 os.path.dirname(os.path.abspath(__file__)),
66 "registry",
67 "data",
68 filename,
69 ),
70 mode=mode,
71 )
72 else:
73 return open_fn(filename, mode=mode)
74 except Exception as e:
75 raise RuntimeError(f"Failed to open: {filename}") from e
76
77
78 def _decode_json(line, path, line_number):
79 try:
80 return json.loads(line)
81 except json.JSONDecodeError as e:
82 custom_error_message = (
83 f"Error parsing JSON on line {line_number}: {e.msg} at {path}:{line_number}:{e.colno}"
84 )
85 logger.error(custom_error_message)
86 raise ValueError(custom_error_message) from None
87
88
89 def _get_jsonl_file(path):
90 logger.info(f"Fetching {path}")
91 with open_by_file_pattern(path, mode="r") as f:
92 return [_decode_json(line, path, i + 1) for i, line in enumerate(f)]
93
94
95 def _get_json_file(path):
96 logger.info(f"Fetching {path}")
97 with open_by_file_pattern(path, mode="r") as f:
98 return json.loads(f.read())
99
100
101 def _stream_jsonl_file(path) -> Iterator:
102 logger.info(f"Streaming {path}")
103 with bf.BlobFile(path, "r", streaming=True) as f:
104 for line in f:
105 yield json.loads(line)
106
107
108 def get_lines(path) -> list[dict]:
109 """
110 Get a list of lines from a file.
111 """
112 with open_by_file_pattern(path, mode="r") as f:
113 return f.readlines()
114
115
116 def get_jsonl(path: str) -> list[dict]:
117 """
118 Extract json lines from the given path.
119 If the path is a directory, look in subpaths recursively.
120
121 Return all lines from all jsonl files as a single list.
122 """
123 if bf.isdir(path):
124 result = []
125 for filename in bf.listdir(path):
126 if filename.endswith(".jsonl"):
127 result += get_jsonl(os.path.join(path, filename))
128 return result
129 return _get_jsonl_file(path)
130
131
132 def get_jsonls(paths: Sequence[str], line_limit=None) -> list[dict]:
133 return list(iter_jsonls(paths, line_limit))
134
135
136 def get_json(path) -> dict:
137 if bf.isdir(path):
138 raise ValueError("Path is a directory, only files are supported")
139 return _get_json_file(path)
140
141
142 def iter_jsonls(paths: Union[str, list[str]], line_limit=None) -> Iterator[dict]:
143 """
144 For each path in the input, iterate over the jsonl files in that path.
145 Look in subdirectories recursively.
146
147 Use an iterator to conserve memory.
148 """
149 if type(paths) == str:
150 paths = [paths]
151
152 def _iter():
153 for path in paths:
154 if bf.isdir(path):
155 for filename in bf.listdir(path):
156 if filename.endswith(".jsonl"):
157 yield from iter_jsonls([os.path.join(path, filename)])
158 else:
159 yield from _stream_jsonl_file(path)
160
161 return itertools.islice(_iter(), line_limit)
162
163
164 def get_csv(path, fieldnames=None):
165 with bf.BlobFile(path, "r", cache_dir="/tmp/bf_cache", streaming=False) as f:
166 reader = csv.DictReader(f, fieldnames=fieldnames)
167 return [row for row in reader]
168
169
170 def _to_py_types(o: Any, exclude_keys: List[Text]) -> Any:
171 if isinstance(o, dict):
172 return {
173 k: _to_py_types(v, exclude_keys=exclude_keys)
174 for k, v in o.items()
175 if k not in exclude_keys
176 }
177
178 if isinstance(o, list):
179 return [_to_py_types(v, exclude_keys=exclude_keys) for v in o]
180
181 if isinstance(o, Path):
182 return o.as_posix()
183
184 if dataclasses.is_dataclass(o):
185 return _to_py_types(dataclasses.asdict(o), exclude_keys=exclude_keys)
186
187 # pydantic data classes
188 if isinstance(o, pydantic.BaseModel):
189 return {
190 k: _to_py_types(v, exclude_keys=exclude_keys)
191 for k, v in json.loads(o.json()).items()
192 if k not in exclude_keys
193 }
194
195 return o
196
197
198 class EnhancedJSONEncoder(json.JSONEncoder):
199 def __init__(self, exclude_keys: Optional[List[Text]] = None, **kwargs: Any):
200 super().__init__(**kwargs)
201 self.exclude_keys = exclude_keys if exclude_keys else []
202
203 def default(self, o: Any) -> str:
204 return _to_py_types(o, self.exclude_keys)
205
206
207 def jsondumps(o: Any, ensure_ascii: bool = False, **kwargs: Any) -> str:
208 # The JSONEncoder class's .default method is only applied to dictionary values,
209 # not keys. In order to exclude keys from the output of this jsondumps method
210 # we need to exclude them outside the encoder.
211 if isinstance(o, dict) and "exclude_keys" in kwargs:
212 for key in kwargs["exclude_keys"]:
213 del o[key]
214 return json.dumps(o, cls=EnhancedJSONEncoder, ensure_ascii=ensure_ascii, **kwargs)
215
216
217 def jsondump(o: Any, fp: Any, ensure_ascii: bool = False, **kwargs: Any) -> None:
218 json.dump(o, fp, cls=EnhancedJSONEncoder, ensure_ascii=ensure_ascii, **kwargs)
219
220
221 def jsonloads(s: str, **kwargs: Any) -> Any:
222 return json.loads(s, **kwargs)
223
224
225 def jsonload(fp: Any, **kwargs: Any) -> Any:
226 return json.load(fp, **kwargs)
227
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/evals/data.py b/evals/data.py
--- a/evals/data.py
+++ b/evals/data.py
@@ -43,13 +43,16 @@
return pyzstd.ZstdFile(openhook(filename, mode), mode=mode)
-
-def open_by_file_pattern(filename: str, mode: str = "r", **kwargs: Any) -> Any:
+def open_by_file_pattern(filename: Union[str, Path], mode: str = "r", **kwargs: Any) -> Any:
"""Can read/write to files on gcs/local with or without gzipping. If file
is stored on gcs, streams with blobfile. Otherwise use vanilla python open. If
filename endswith gz, then zip/unzip contents on the fly (note that gcs paths and
gzip are compatible)"""
open_fn = partial(bf.BlobFile, **kwargs)
+
+ if isinstance(filename, Path):
+ filename = filename.as_posix()
+
try:
if filename.endswith(".gz"):
return gzip_open(filename, openhook=open_fn, mode=mode)
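
The crux of the fix above is that `open_by_file_pattern` now coerces `pathlib.Path` arguments to a plain string before any `.endswith()` dispatch. The following is a minimal sketch of just that coercion step, not the real evals function; the registry-relative filename is a made-up example.

```python
from pathlib import Path

def _normalize_filename(filename):
    # Mirrors the coercion step added in the patch: accept str or Path,
    # then work with a plain string from that point on.
    if isinstance(filename, Path):
        filename = filename.as_posix()
    return filename

normalized = _normalize_filename(Path("../registry/data/social_iqa/few_shot.jsonl"))
assert isinstance(normalized, str)
assert normalized.endswith(".jsonl")  # no AttributeError on PosixPath anymore
```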
|
{"golden_diff": "diff --git a/evals/data.py b/evals/data.py\n--- a/evals/data.py\n+++ b/evals/data.py\n@@ -43,13 +43,16 @@\n \n return pyzstd.ZstdFile(openhook(filename, mode), mode=mode)\n \n-\n-def open_by_file_pattern(filename: str, mode: str = \"r\", **kwargs: Any) -> Any:\n+def open_by_file_pattern(filename: Union[str, Path], mode: str = \"r\", **kwargs: Any) -> Any:\n \"\"\"Can read/write to files on gcs/local with or without gzipping. If file\n is stored on gcs, streams with blobfile. Otherwise use vanilla python open. If\n filename endswith gz, then zip/unzip contents on the fly (note that gcs paths and\n gzip are compatible)\"\"\"\n open_fn = partial(bf.BlobFile, **kwargs)\n+\n+ if isinstance(filename, Path):\n+ filename = filename.as_posix()\n+\n try:\n if filename.endswith(\".gz\"):\n return gzip_open(filename, openhook=open_fn, mode=mode)\n", "issue": "`Failed to open: ../registry/data/social_iqa/few_shot.jsonl` with custom registry\n### Describe the bug\n\nI have created a new eval within a custom registry and installed evals from pypi (using a recent checkout from github). Calling `oaievals` works fine if I do not use few_show samples, but it does not if you include few shot examples. \r\n\r\nAs can be seen in the error thread below, the issue is caused by calling `self._prefix_registry_path(self.few_shot_jsonl)` (see [_prefix_registry_path](https://github.com/openai/evals/blob/4b7a66bd45f06156656e021e170e7574f6cde3f5/evals/eval.py#L159C6-L159C6)) which transforms the path string into a `Path` object. Then, `open_by_file_pattern` ([here](https://github.com/openai/evals/blob/4b7a66bd45f06156656e021e170e7574f6cde3f5/evals/data.py#L47)) still treats it as a string, unaware of the fact that it is actually a `Path`\n\n### To Reproduce\n\n1. Create a new eval with few-shot samples. \r\n2. Call `oaieval`, for instance: \r\n```\r\noaieval text-ada-001 social_iqa_few_shot --registry_path=../registry\r\n```\r\n3. 
You'll get an error thread as follows: \r\n```\r\n[2023-11-03 14:40:52,931] [registry.py:254] Loading registry from /home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/registry/evals\r\n[2023-11-03 14:40:54,382] [registry.py:254] Loading registry from /home/lorenzo/.evals/evals\r\n[2023-11-03 14:40:54,382] [registry.py:254] Loading registry from ../registry/evals\r\n[2023-11-03 14:40:55,007] [oaieval.py:189] \u001b[1;35mRun started: 2311031440552KUYNQGH\u001b[0m\r\n[2023-11-03 14:40:55,010] [data.py:90] Fetching ../registry/data/social_iqa/few_shot.jsonl\r\nTraceback (most recent call last):\r\n File \"/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/data.py\", line 54, in open_by_file_pattern\r\n if filename.endswith(\".gz\"):\r\nAttributeError: 'PosixPath' object has no attribute 'endswith'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/lorenzo/venv/recog-LLM_capabilities/bin/oaieval\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/cli/oaieval.py\", line 274, in main\r\n run(args)\r\n File \"/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/cli/oaieval.py\", line 215, in run\r\n eval: Eval = eval_class(\r\n File \"/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/elsuite/basic/match.py\", line 28, in __init__\r\n self.few_shot = evals.get_jsonl(self._prefix_registry_path(self.few_shot_jsonl))\r\n File \"/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/data.py\", line 129, in get_jsonl\r\n return _get_jsonl_file(path)\r\n File \"/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/data.py\", line 91, in _get_jsonl_file\r\n with open_by_file_pattern(path, mode=\"r\") as f:\r\n File \"/home/lorenzo/venv/recog-LLM_capabilities/lib/python3.9/site-packages/evals/data.py\", line 75, in open_by_file_pattern\r\n raise RuntimeError(f\"Failed to open: {filename}\") from e\r\nRuntimeError: Failed to open: ../registry/data/social_iqa/few_shot.jsonl\r\n```\r\n\r\n\n\n### Code snippets\n\n_No response_\n\n### OS\n\nUbuntu 20.04\n\n### Python version\n\npython 3.9\n\n### Library version\n\ngit+https://github.com/openai/evals.git@dd96814dd96bd64f3098afca8dc873aa8d8ce4c8\n", "before_files": [{"content": "\"\"\"\nThis file defines utilities for working with data and files of various types.\n\"\"\"\nimport csv\nimport dataclasses\nimport gzip\nimport itertools\nimport json\nimport logging\nimport os\nimport urllib\nfrom collections.abc import Iterator\nfrom functools import partial\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Sequence, Text, Union\n\nimport blobfile as bf\nimport lz4.frame\nimport pydantic\nimport pyzstd\n\nlogger = logging.getLogger(__name__)\n\n\ndef gzip_open(filename: str, mode: str = \"rb\", openhook: Any = open) -> gzip.GzipFile:\n \"\"\"Wrap the given openhook in gzip.\"\"\"\n if mode and \"b\" not in mode:\n mode += \"b\"\n\n return gzip.GzipFile(fileobj=openhook(filename, mode), mode=mode)\n\n\ndef lz4_open(filename: str, mode: str = \"rb\", openhook: Any = open) -> lz4.frame.LZ4FrameFile:\n if mode and \"b\" not in mode:\n mode += \"b\"\n\n return lz4.frame.LZ4FrameFile(openhook(filename, mode), mode=mode)\n\n\ndef zstd_open(filename: str, mode: str = \"rb\", openhook: Any = open) -> pyzstd.ZstdFile:\n if mode and \"b\" not in mode:\n mode += 
\"b\"\n\n return pyzstd.ZstdFile(openhook(filename, mode), mode=mode)\n\n\ndef open_by_file_pattern(filename: str, mode: str = \"r\", **kwargs: Any) -> Any:\n \"\"\"Can read/write to files on gcs/local with or without gzipping. If file\n is stored on gcs, streams with blobfile. Otherwise use vanilla python open. If\n filename endswith gz, then zip/unzip contents on the fly (note that gcs paths and\n gzip are compatible)\"\"\"\n open_fn = partial(bf.BlobFile, **kwargs)\n try:\n if filename.endswith(\".gz\"):\n return gzip_open(filename, openhook=open_fn, mode=mode)\n elif filename.endswith(\".lz4\"):\n return lz4_open(filename, openhook=open_fn, mode=mode)\n elif filename.endswith(\".zst\"):\n return zstd_open(filename, openhook=open_fn, mode=mode)\n else:\n scheme = urllib.parse.urlparse(filename).scheme\n if (not os.path.exists(filename)) and (scheme == \"\" or scheme == \"file\"):\n return open_fn(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n \"registry\",\n \"data\",\n filename,\n ),\n mode=mode,\n )\n else:\n return open_fn(filename, mode=mode)\n except Exception as e:\n raise RuntimeError(f\"Failed to open: {filename}\") from e\n\n\ndef _decode_json(line, path, line_number):\n try:\n return json.loads(line)\n except json.JSONDecodeError as e:\n custom_error_message = (\n f\"Error parsing JSON on line {line_number}: {e.msg} at {path}:{line_number}:{e.colno}\"\n )\n logger.error(custom_error_message)\n raise ValueError(custom_error_message) from None\n\n\ndef _get_jsonl_file(path):\n logger.info(f\"Fetching {path}\")\n with open_by_file_pattern(path, mode=\"r\") as f:\n return [_decode_json(line, path, i + 1) for i, line in enumerate(f)]\n\n\ndef _get_json_file(path):\n logger.info(f\"Fetching {path}\")\n with open_by_file_pattern(path, mode=\"r\") as f:\n return json.loads(f.read())\n\n\ndef _stream_jsonl_file(path) -> Iterator:\n logger.info(f\"Streaming {path}\")\n with bf.BlobFile(path, \"r\", streaming=True) as f:\n for line in f:\n yield json.loads(line)\n\n\ndef get_lines(path) -> list[dict]:\n \"\"\"\n Get a list of lines from a file.\n \"\"\"\n with open_by_file_pattern(path, mode=\"r\") as f:\n return f.readlines()\n\n\ndef get_jsonl(path: str) -> list[dict]:\n \"\"\"\n Extract json lines from the given path.\n If the path is a directory, look in subpaths recursively.\n\n Return all lines from all jsonl files as a single list.\n \"\"\"\n if bf.isdir(path):\n result = []\n for filename in bf.listdir(path):\n if filename.endswith(\".jsonl\"):\n result += get_jsonl(os.path.join(path, filename))\n return result\n return _get_jsonl_file(path)\n\n\ndef get_jsonls(paths: Sequence[str], line_limit=None) -> list[dict]:\n return list(iter_jsonls(paths, line_limit))\n\n\ndef get_json(path) -> dict:\n if bf.isdir(path):\n raise ValueError(\"Path is a directory, only files are supported\")\n return _get_json_file(path)\n\n\ndef iter_jsonls(paths: Union[str, list[str]], line_limit=None) -> Iterator[dict]:\n \"\"\"\n For each path in the input, iterate over the jsonl files in that path.\n Look in subdirectories recursively.\n\n Use an iterator to conserve memory.\n \"\"\"\n if type(paths) == str:\n paths = [paths]\n\n def _iter():\n for path in paths:\n if bf.isdir(path):\n for filename in bf.listdir(path):\n if filename.endswith(\".jsonl\"):\n yield from iter_jsonls([os.path.join(path, filename)])\n else:\n yield from _stream_jsonl_file(path)\n\n return itertools.islice(_iter(), line_limit)\n\n\ndef get_csv(path, fieldnames=None):\n with bf.BlobFile(path, \"r\", 
cache_dir=\"/tmp/bf_cache\", streaming=False) as f:\n reader = csv.DictReader(f, fieldnames=fieldnames)\n return [row for row in reader]\n\n\ndef _to_py_types(o: Any, exclude_keys: List[Text]) -> Any:\n if isinstance(o, dict):\n return {\n k: _to_py_types(v, exclude_keys=exclude_keys)\n for k, v in o.items()\n if k not in exclude_keys\n }\n\n if isinstance(o, list):\n return [_to_py_types(v, exclude_keys=exclude_keys) for v in o]\n\n if isinstance(o, Path):\n return o.as_posix()\n\n if dataclasses.is_dataclass(o):\n return _to_py_types(dataclasses.asdict(o), exclude_keys=exclude_keys)\n\n # pydantic data classes\n if isinstance(o, pydantic.BaseModel):\n return {\n k: _to_py_types(v, exclude_keys=exclude_keys)\n for k, v in json.loads(o.json()).items()\n if k not in exclude_keys\n }\n\n return o\n\n\nclass EnhancedJSONEncoder(json.JSONEncoder):\n def __init__(self, exclude_keys: Optional[List[Text]] = None, **kwargs: Any):\n super().__init__(**kwargs)\n self.exclude_keys = exclude_keys if exclude_keys else []\n\n def default(self, o: Any) -> str:\n return _to_py_types(o, self.exclude_keys)\n\n\ndef jsondumps(o: Any, ensure_ascii: bool = False, **kwargs: Any) -> str:\n # The JSONEncoder class's .default method is only applied to dictionary values,\n # not keys. In order to exclude keys from the output of this jsondumps method\n # we need to exclude them outside the encoder.\n if isinstance(o, dict) and \"exclude_keys\" in kwargs:\n for key in kwargs[\"exclude_keys\"]:\n del o[key]\n return json.dumps(o, cls=EnhancedJSONEncoder, ensure_ascii=ensure_ascii, **kwargs)\n\n\ndef jsondump(o: Any, fp: Any, ensure_ascii: bool = False, **kwargs: Any) -> None:\n json.dump(o, fp, cls=EnhancedJSONEncoder, ensure_ascii=ensure_ascii, **kwargs)\n\n\ndef jsonloads(s: str, **kwargs: Any) -> Any:\n return json.loads(s, **kwargs)\n\n\ndef jsonload(fp: Any, **kwargs: Any) -> Any:\n return json.load(fp, **kwargs)\n", "path": "evals/data.py"}], "after_files": [{"content": "\"\"\"\nThis file defines utilities for working with data and files of various types.\n\"\"\"\nimport csv\nimport dataclasses\nimport gzip\nimport itertools\nimport json\nimport logging\nimport os\nimport urllib\nfrom collections.abc import Iterator\nfrom functools import partial\nfrom pathlib import Path\nfrom typing import Any, List, Optional, Sequence, Text, Union\n\nimport blobfile as bf\nimport lz4.frame\nimport pydantic\nimport pyzstd\n\nlogger = logging.getLogger(__name__)\n\n\ndef gzip_open(filename: str, mode: str = \"rb\", openhook: Any = open) -> gzip.GzipFile:\n \"\"\"Wrap the given openhook in gzip.\"\"\"\n if mode and \"b\" not in mode:\n mode += \"b\"\n\n return gzip.GzipFile(fileobj=openhook(filename, mode), mode=mode)\n\n\ndef lz4_open(filename: str, mode: str = \"rb\", openhook: Any = open) -> lz4.frame.LZ4FrameFile:\n if mode and \"b\" not in mode:\n mode += \"b\"\n\n return lz4.frame.LZ4FrameFile(openhook(filename, mode), mode=mode)\n\n\ndef zstd_open(filename: str, mode: str = \"rb\", openhook: Any = open) -> pyzstd.ZstdFile:\n if mode and \"b\" not in mode:\n mode += \"b\"\n\n return pyzstd.ZstdFile(openhook(filename, mode), mode=mode)\n\ndef open_by_file_pattern(filename: Union[str, Path], mode: str = \"r\", **kwargs: Any) -> Any:\n \"\"\"Can read/write to files on gcs/local with or without gzipping. If file\n is stored on gcs, streams with blobfile. Otherwise use vanilla python open. 
If\n filename endswith gz, then zip/unzip contents on the fly (note that gcs paths and\n gzip are compatible)\"\"\"\n open_fn = partial(bf.BlobFile, **kwargs)\n\n if isinstance(filename, Path):\n filename = filename.as_posix()\n\n try:\n if filename.endswith(\".gz\"):\n return gzip_open(filename, openhook=open_fn, mode=mode)\n elif filename.endswith(\".lz4\"):\n return lz4_open(filename, openhook=open_fn, mode=mode)\n elif filename.endswith(\".zst\"):\n return zstd_open(filename, openhook=open_fn, mode=mode)\n else:\n scheme = urllib.parse.urlparse(filename).scheme\n if (not os.path.exists(filename)) and (scheme == \"\" or scheme == \"file\"):\n return open_fn(\n os.path.join(\n os.path.dirname(os.path.abspath(__file__)),\n \"registry\",\n \"data\",\n filename,\n ),\n mode=mode,\n )\n else:\n return open_fn(filename, mode=mode)\n except Exception as e:\n raise RuntimeError(f\"Failed to open: {filename}\") from e\n\n\ndef _decode_json(line, path, line_number):\n try:\n return json.loads(line)\n except json.JSONDecodeError as e:\n custom_error_message = (\n f\"Error parsing JSON on line {line_number}: {e.msg} at {path}:{line_number}:{e.colno}\"\n )\n logger.error(custom_error_message)\n raise ValueError(custom_error_message) from None\n\n\ndef _get_jsonl_file(path):\n logger.info(f\"Fetching {path}\")\n with open_by_file_pattern(path, mode=\"r\") as f:\n return [_decode_json(line, path, i + 1) for i, line in enumerate(f)]\n\n\ndef _get_json_file(path):\n logger.info(f\"Fetching {path}\")\n with open_by_file_pattern(path, mode=\"r\") as f:\n return json.loads(f.read())\n\n\ndef _stream_jsonl_file(path) -> Iterator:\n logger.info(f\"Streaming {path}\")\n with bf.BlobFile(path, \"r\", streaming=True) as f:\n for line in f:\n yield json.loads(line)\n\n\ndef get_lines(path) -> list[dict]:\n \"\"\"\n Get a list of lines from a file.\n \"\"\"\n with open_by_file_pattern(path, mode=\"r\") as f:\n return f.readlines()\n\n\ndef get_jsonl(path: str) -> list[dict]:\n \"\"\"\n Extract json lines from the given path.\n If the path is a directory, look in subpaths recursively.\n\n Return all lines from all jsonl files as a single list.\n \"\"\"\n if bf.isdir(path):\n result = []\n for filename in bf.listdir(path):\n if filename.endswith(\".jsonl\"):\n result += get_jsonl(os.path.join(path, filename))\n return result\n return _get_jsonl_file(path)\n\n\ndef get_jsonls(paths: Sequence[str], line_limit=None) -> list[dict]:\n return list(iter_jsonls(paths, line_limit))\n\n\ndef get_json(path) -> dict:\n if bf.isdir(path):\n raise ValueError(\"Path is a directory, only files are supported\")\n return _get_json_file(path)\n\n\ndef iter_jsonls(paths: Union[str, list[str]], line_limit=None) -> Iterator[dict]:\n \"\"\"\n For each path in the input, iterate over the jsonl files in that path.\n Look in subdirectories recursively.\n\n Use an iterator to conserve memory.\n \"\"\"\n if type(paths) == str:\n paths = [paths]\n\n def _iter():\n for path in paths:\n if bf.isdir(path):\n for filename in bf.listdir(path):\n if filename.endswith(\".jsonl\"):\n yield from iter_jsonls([os.path.join(path, filename)])\n else:\n yield from _stream_jsonl_file(path)\n\n return itertools.islice(_iter(), line_limit)\n\n\ndef get_csv(path, fieldnames=None):\n with bf.BlobFile(path, \"r\", cache_dir=\"/tmp/bf_cache\", streaming=False) as f:\n reader = csv.DictReader(f, fieldnames=fieldnames)\n return [row for row in reader]\n\n\ndef _to_py_types(o: Any, exclude_keys: List[Text]) -> Any:\n if isinstance(o, dict):\n return {\n k: 
_to_py_types(v, exclude_keys=exclude_keys)\n for k, v in o.items()\n if k not in exclude_keys\n }\n\n if isinstance(o, list):\n return [_to_py_types(v, exclude_keys=exclude_keys) for v in o]\n\n if isinstance(o, Path):\n return o.as_posix()\n\n if dataclasses.is_dataclass(o):\n return _to_py_types(dataclasses.asdict(o), exclude_keys=exclude_keys)\n\n # pydantic data classes\n if isinstance(o, pydantic.BaseModel):\n return {\n k: _to_py_types(v, exclude_keys=exclude_keys)\n for k, v in json.loads(o.json()).items()\n if k not in exclude_keys\n }\n\n return o\n\n\nclass EnhancedJSONEncoder(json.JSONEncoder):\n def __init__(self, exclude_keys: Optional[List[Text]] = None, **kwargs: Any):\n super().__init__(**kwargs)\n self.exclude_keys = exclude_keys if exclude_keys else []\n\n def default(self, o: Any) -> str:\n return _to_py_types(o, self.exclude_keys)\n\n\ndef jsondumps(o: Any, ensure_ascii: bool = False, **kwargs: Any) -> str:\n # The JSONEncoder class's .default method is only applied to dictionary values,\n # not keys. In order to exclude keys from the output of this jsondumps method\n # we need to exclude them outside the encoder.\n if isinstance(o, dict) and \"exclude_keys\" in kwargs:\n for key in kwargs[\"exclude_keys\"]:\n del o[key]\n return json.dumps(o, cls=EnhancedJSONEncoder, ensure_ascii=ensure_ascii, **kwargs)\n\n\ndef jsondump(o: Any, fp: Any, ensure_ascii: bool = False, **kwargs: Any) -> None:\n json.dump(o, fp, cls=EnhancedJSONEncoder, ensure_ascii=ensure_ascii, **kwargs)\n\n\ndef jsonloads(s: str, **kwargs: Any) -> Any:\n return json.loads(s, **kwargs)\n\n\ndef jsonload(fp: Any, **kwargs: Any) -> Any:\n return json.load(fp, **kwargs)\n", "path": "evals/data.py"}]}
| 3,786 | 245 |
gh_patches_debug_28651
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-1808
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reflection no longer runs on a schedule
## Description
With some recent performance improvements, we introduced a known regression: DB objects are no longer reflected on a schedule under all circumstances.
In particular, if you add a table (or tables) via some other client, it will never appear in the UI unless you've also done some other DDL operation _through_ the UI.
## Expected behavior
It should be possible to reflect DB objects when needed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/urls.py`
Content:
```
1 from django.urls import include, path, re_path
2 from rest_framework_nested import routers
3
4 from mathesar import views
5 from mathesar.api.db import viewsets as db_viewsets
6 from mathesar.api.ui import viewsets as ui_viewsets
7
8 db_router = routers.DefaultRouter()
9 db_router.register(r'tables', db_viewsets.TableViewSet, basename='table')
10 db_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')
11 db_router.register(r'links', db_viewsets.LinkViewSet, basename='links')
12 db_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')
13 db_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')
14 db_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')
15
16 db_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')
17 db_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')
18 db_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')
19 db_table_router.register(r'columns', db_viewsets.ColumnViewSet, basename='table-column')
20 db_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')
21
22 ui_router = routers.DefaultRouter()
23 ui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')
24 ui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')
25 ui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')
26 ui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')
27
28 urlpatterns = [
29 path('api/db/v0/', include(db_router.urls)),
30 path('api/db/v0/', include(db_table_router.urls)),
31 path('api/ui/v0/', include(ui_router.urls)),
32 path('auth/', include('django.contrib.auth.urls')),
33
34 path('', views.home, name='home'),
35 path('<db_name>/', views.schemas, name='schemas'),
36 re_path(
37 r'^(?P<db_name>\w+)/(?P<schema_id>\w+)/',
38 views.schema_home,
39 name='schema_home'
40 ),
41 ]
42
```
Path: `mathesar/views.py`
Content:
```
1 from django.shortcuts import render, redirect, get_object_or_404
2 from django.contrib.auth.decorators import login_required
3
4 from mathesar.models.base import Database, Schema, Table
5 from mathesar.api.serializers.databases import DatabaseSerializer, TypeSerializer
6 from mathesar.api.serializers.schemas import SchemaSerializer
7 from mathesar.api.serializers.tables import TableSerializer
8 from mathesar.api.serializers.queries import QuerySerializer
9 from mathesar.database.types import UIType
10 from mathesar.models.query import UIQuery
11
12
13 def get_schema_list(request, database):
14 schema_serializer = SchemaSerializer(
15 Schema.objects.filter(database=database),
16 many=True,
17 context={'request': request}
18 )
19 return schema_serializer.data
20
21
22 def get_database_list(request):
23 database_serializer = DatabaseSerializer(
24 Database.objects.all(),
25 many=True,
26 context={'request': request}
27 )
28 return database_serializer.data
29
30
31 def get_table_list(request, schema):
32 if schema is None:
33 return []
34 table_serializer = TableSerializer(
35 Table.objects.filter(schema=schema),
36 many=True,
37 context={'request': request}
38 )
39 return table_serializer.data
40
41
42 def get_queries_list(request, schema):
43 if schema is None:
44 return []
45 query_serializer = QuerySerializer(
46 UIQuery.objects.filter(base_table__schema=schema),
47 many=True,
48 context={'request': request}
49 )
50 return query_serializer.data
51
52
53 def get_ui_type_list(request, database):
54 if database is None:
55 return []
56 type_serializer = TypeSerializer(
57 UIType,
58 many=True,
59 context={'request': request}
60 )
61 return type_serializer.data
62
63
64 def get_common_data(request, database, schema=None):
65 return {
66 'current_db': database.name if database else None,
67 'current_schema': schema.id if schema else None,
68 'schemas': get_schema_list(request, database),
69 'databases': get_database_list(request),
70 'tables': get_table_list(request, schema),
71 'queries': get_queries_list(request, schema),
72 'abstract_types': get_ui_type_list(request, database)
73 }
74
75
76 def get_current_database(request, db_name):
77 # if there's a DB name passed in, try to retrieve the database, or return a 404 error.
78 if db_name is not None:
79 return get_object_or_404(Database, name=db_name)
80 else:
81 try:
82 # Try to get the first database available
83 return Database.objects.order_by('id').first()
84 except Database.DoesNotExist:
85 return None
86
87
88 def get_current_schema(request, schema_id, database):
89 # if there's a schema ID passed in, try to retrieve the schema, or return a 404 error.
90 if schema_id is not None:
91 return get_object_or_404(Schema, id=schema_id)
92 else:
93 try:
94 # Try to get the first schema in the DB
95 return Schema.objects.filter(database=database).order_by('id').first()
96 except Schema.DoesNotExist:
97 return None
98
99
100 def render_schema(request, database, schema):
101 # if there's no schema available, redirect to the schemas page.
102 if not schema:
103 return redirect('schemas', db_name=database.name)
104 else:
105 # We are redirecting so that the correct URL is passed to the frontend.
106 return redirect('schema_home', db_name=database.name, schema_id=schema.id)
107
108
109 @login_required
110 def home(request):
111 database = get_current_database(request, None)
112 return redirect('schemas', db_name=database.name)
113
114
115 @login_required
116 def schema_home(request, db_name, schema_id, **kwargs):
117 database = get_current_database(request, db_name)
118 schema = get_current_schema(request, schema_id, database)
119 return render(request, 'mathesar/index.html', {
120 'common_data': get_common_data(request, database, schema)
121 })
122
123
124 @login_required
125 def schemas(request, db_name):
126 database = get_current_database(request, db_name)
127 return render(request, 'mathesar/index.html', {
128 'common_data': get_common_data(request, database, None)
129 })
130
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mathesar/urls.py b/mathesar/urls.py
--- a/mathesar/urls.py
+++ b/mathesar/urls.py
@@ -29,6 +29,7 @@
path('api/db/v0/', include(db_router.urls)),
path('api/db/v0/', include(db_table_router.urls)),
path('api/ui/v0/', include(ui_router.urls)),
+ path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),
path('auth/', include('django.contrib.auth.urls')),
path('', views.home, name='home'),
diff --git a/mathesar/views.py b/mathesar/views.py
--- a/mathesar/views.py
+++ b/mathesar/views.py
@@ -1,6 +1,10 @@
from django.shortcuts import render, redirect, get_object_or_404
from django.contrib.auth.decorators import login_required
+from rest_framework import status
+from rest_framework.decorators import api_view
+from rest_framework.response import Response
+from mathesar.state import reset_reflection
from mathesar.models.base import Database, Schema, Table
from mathesar.api.serializers.databases import DatabaseSerializer, TypeSerializer
from mathesar.api.serializers.schemas import SchemaSerializer
@@ -106,6 +110,13 @@
return redirect('schema_home', db_name=database.name, schema_id=schema.id)
+@login_required
+@api_view(['POST'])
+def reflect_all(_):
+ reset_reflection()
+ return Response(status=status.HTTP_200_OK)
+
+
@login_required
def home(request):
database = get_current_database(request, None)
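
With this patch applied, a client can force a reflection pass on demand by POSTing to the new `api/ui/v0/reflect/` endpoint added in the diff. The sketch below is a rough client-side illustration only: the base URL, port, and authentication handling are assumptions, not part of the patch (the view is login-protected).

```python
import requests

session = requests.Session()
# ... authenticate here however your deployment handles login; the view requires it ...

# Trigger reset_reflection() on the server; the view responds with HTTP 200 on success.
response = session.post("http://localhost:8000/api/ui/v0/reflect/")
response.raise_for_status()
```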
|
{"golden_diff": "diff --git a/mathesar/urls.py b/mathesar/urls.py\n--- a/mathesar/urls.py\n+++ b/mathesar/urls.py\n@@ -29,6 +29,7 @@\n path('api/db/v0/', include(db_router.urls)),\n path('api/db/v0/', include(db_table_router.urls)),\n path('api/ui/v0/', include(ui_router.urls)),\n+ path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),\n path('auth/', include('django.contrib.auth.urls')),\n \n path('', views.home, name='home'),\ndiff --git a/mathesar/views.py b/mathesar/views.py\n--- a/mathesar/views.py\n+++ b/mathesar/views.py\n@@ -1,6 +1,10 @@\n from django.shortcuts import render, redirect, get_object_or_404\n from django.contrib.auth.decorators import login_required\n+from rest_framework import status\n+from rest_framework.decorators import api_view\n+from rest_framework.response import Response\n \n+from mathesar.state import reset_reflection\n from mathesar.models.base import Database, Schema, Table\n from mathesar.api.serializers.databases import DatabaseSerializer, TypeSerializer\n from mathesar.api.serializers.schemas import SchemaSerializer\n@@ -106,6 +110,13 @@\n return redirect('schema_home', db_name=database.name, schema_id=schema.id)\n \n \n+@login_required\n+@api_view(['POST'])\n+def reflect_all(_):\n+ reset_reflection()\n+ return Response(status=status.HTTP_200_OK)\n+\n+\n @login_required\n def home(request):\n database = get_current_database(request, None)\n", "issue": "Reflection no longer runs on a schedule\n## Description\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nWith some recent performance improvements, we introduced a known regression in that we no longer reflect DB objects on some schedule under all circumstances.\r\n\r\nIn particular, if you add a table (or tables) via some other client, it will never appear in the UI unless you've also done some other DDL operation _through_ the UI.\r\n\r\n## Expected behavior\r\n<!-- A clear and concise description of what you expected to happen. 
-->\r\n\r\nIt should be possible to reflect DB objects when needed.\n", "before_files": [{"content": "from django.urls import include, path, re_path\nfrom rest_framework_nested import routers\n\nfrom mathesar import views\nfrom mathesar.api.db import viewsets as db_viewsets\nfrom mathesar.api.ui import viewsets as ui_viewsets\n\ndb_router = routers.DefaultRouter()\ndb_router.register(r'tables', db_viewsets.TableViewSet, basename='table')\ndb_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')\ndb_router.register(r'links', db_viewsets.LinkViewSet, basename='links')\ndb_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')\ndb_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')\ndb_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')\n\ndb_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')\ndb_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')\ndb_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')\ndb_table_router.register(r'columns', db_viewsets.ColumnViewSet, basename='table-column')\ndb_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')\n\nui_router = routers.DefaultRouter()\nui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')\nui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')\nui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')\nui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')\n\nurlpatterns = [\n path('api/db/v0/', include(db_router.urls)),\n path('api/db/v0/', include(db_table_router.urls)),\n path('api/ui/v0/', include(ui_router.urls)),\n path('auth/', include('django.contrib.auth.urls')),\n\n path('', views.home, name='home'),\n path('<db_name>/', views.schemas, name='schemas'),\n re_path(\n r'^(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n views.schema_home,\n name='schema_home'\n ),\n]\n", "path": "mathesar/urls.py"}, {"content": "from django.shortcuts import render, redirect, get_object_or_404\nfrom django.contrib.auth.decorators import login_required\n\nfrom mathesar.models.base import Database, Schema, Table\nfrom mathesar.api.serializers.databases import DatabaseSerializer, TypeSerializer\nfrom mathesar.api.serializers.schemas import SchemaSerializer\nfrom mathesar.api.serializers.tables import TableSerializer\nfrom mathesar.api.serializers.queries import QuerySerializer\nfrom mathesar.database.types import UIType\nfrom mathesar.models.query import UIQuery\n\n\ndef get_schema_list(request, database):\n schema_serializer = SchemaSerializer(\n Schema.objects.filter(database=database),\n many=True,\n context={'request': request}\n )\n return schema_serializer.data\n\n\ndef get_database_list(request):\n database_serializer = DatabaseSerializer(\n Database.objects.all(),\n many=True,\n context={'request': request}\n )\n return database_serializer.data\n\n\ndef get_table_list(request, schema):\n if schema is None:\n return []\n table_serializer = TableSerializer(\n Table.objects.filter(schema=schema),\n many=True,\n context={'request': request}\n )\n return table_serializer.data\n\n\ndef get_queries_list(request, schema):\n if schema is None:\n return []\n query_serializer = QuerySerializer(\n UIQuery.objects.filter(base_table__schema=schema),\n many=True,\n context={'request': request}\n )\n return 
query_serializer.data\n\n\ndef get_ui_type_list(request, database):\n if database is None:\n return []\n type_serializer = TypeSerializer(\n UIType,\n many=True,\n context={'request': request}\n )\n return type_serializer.data\n\n\ndef get_common_data(request, database, schema=None):\n return {\n 'current_db': database.name if database else None,\n 'current_schema': schema.id if schema else None,\n 'schemas': get_schema_list(request, database),\n 'databases': get_database_list(request),\n 'tables': get_table_list(request, schema),\n 'queries': get_queries_list(request, schema),\n 'abstract_types': get_ui_type_list(request, database)\n }\n\n\ndef get_current_database(request, db_name):\n # if there's a DB name passed in, try to retrieve the database, or return a 404 error.\n if db_name is not None:\n return get_object_or_404(Database, name=db_name)\n else:\n try:\n # Try to get the first database available\n return Database.objects.order_by('id').first()\n except Database.DoesNotExist:\n return None\n\n\ndef get_current_schema(request, schema_id, database):\n # if there's a schema ID passed in, try to retrieve the schema, or return a 404 error.\n if schema_id is not None:\n return get_object_or_404(Schema, id=schema_id)\n else:\n try:\n # Try to get the first schema in the DB\n return Schema.objects.filter(database=database).order_by('id').first()\n except Schema.DoesNotExist:\n return None\n\n\ndef render_schema(request, database, schema):\n # if there's no schema available, redirect to the schemas page.\n if not schema:\n return redirect('schemas', db_name=database.name)\n else:\n # We are redirecting so that the correct URL is passed to the frontend.\n return redirect('schema_home', db_name=database.name, schema_id=schema.id)\n\n\n@login_required\ndef home(request):\n database = get_current_database(request, None)\n return redirect('schemas', db_name=database.name)\n\n\n@login_required\ndef schema_home(request, db_name, schema_id, **kwargs):\n database = get_current_database(request, db_name)\n schema = get_current_schema(request, schema_id, database)\n return render(request, 'mathesar/index.html', {\n 'common_data': get_common_data(request, database, schema)\n })\n\n\n@login_required\ndef schemas(request, db_name):\n database = get_current_database(request, db_name)\n return render(request, 'mathesar/index.html', {\n 'common_data': get_common_data(request, database, None)\n })\n", "path": "mathesar/views.py"}], "after_files": [{"content": "from django.urls import include, path, re_path\nfrom rest_framework_nested import routers\n\nfrom mathesar import views\nfrom mathesar.api.db import viewsets as db_viewsets\nfrom mathesar.api.ui import viewsets as ui_viewsets\n\ndb_router = routers.DefaultRouter()\ndb_router.register(r'tables', db_viewsets.TableViewSet, basename='table')\ndb_router.register(r'queries', db_viewsets.QueryViewSet, basename='query')\ndb_router.register(r'links', db_viewsets.LinkViewSet, basename='links')\ndb_router.register(r'schemas', db_viewsets.SchemaViewSet, basename='schema')\ndb_router.register(r'databases', db_viewsets.DatabaseViewSet, basename='database')\ndb_router.register(r'data_files', db_viewsets.DataFileViewSet, basename='data-file')\n\ndb_table_router = routers.NestedSimpleRouter(db_router, r'tables', lookup='table')\ndb_table_router.register(r'records', db_viewsets.RecordViewSet, basename='table-record')\ndb_table_router.register(r'settings', db_viewsets.TableSettingsViewSet, basename='table-setting')\ndb_table_router.register(r'columns', 
db_viewsets.ColumnViewSet, basename='table-column')\ndb_table_router.register(r'constraints', db_viewsets.ConstraintViewSet, basename='table-constraint')\n\nui_router = routers.DefaultRouter()\nui_router.register(r'databases', ui_viewsets.DatabaseViewSet, basename='database')\nui_router.register(r'users', ui_viewsets.UserViewSet, basename='user')\nui_router.register(r'database_roles', ui_viewsets.DatabaseRoleViewSet, basename='database_role')\nui_router.register(r'schema_roles', ui_viewsets.SchemaRoleViewSet, basename='schema_role')\n\nurlpatterns = [\n path('api/db/v0/', include(db_router.urls)),\n path('api/db/v0/', include(db_table_router.urls)),\n path('api/ui/v0/', include(ui_router.urls)),\n path('api/ui/v0/reflect/', views.reflect_all, name='reflect_all'),\n path('auth/', include('django.contrib.auth.urls')),\n\n path('', views.home, name='home'),\n path('<db_name>/', views.schemas, name='schemas'),\n re_path(\n r'^(?P<db_name>\\w+)/(?P<schema_id>\\w+)/',\n views.schema_home,\n name='schema_home'\n ),\n]\n", "path": "mathesar/urls.py"}, {"content": "from django.shortcuts import render, redirect, get_object_or_404\nfrom django.contrib.auth.decorators import login_required\nfrom rest_framework import status\nfrom rest_framework.decorators import api_view\nfrom rest_framework.response import Response\n\nfrom mathesar.state import reset_reflection\nfrom mathesar.models.base import Database, Schema, Table\nfrom mathesar.api.serializers.databases import DatabaseSerializer, TypeSerializer\nfrom mathesar.api.serializers.schemas import SchemaSerializer\nfrom mathesar.api.serializers.tables import TableSerializer\nfrom mathesar.api.serializers.queries import QuerySerializer\nfrom mathesar.database.types import UIType\nfrom mathesar.models.query import UIQuery\n\n\ndef get_schema_list(request, database):\n schema_serializer = SchemaSerializer(\n Schema.objects.filter(database=database),\n many=True,\n context={'request': request}\n )\n return schema_serializer.data\n\n\ndef get_database_list(request):\n database_serializer = DatabaseSerializer(\n Database.objects.all(),\n many=True,\n context={'request': request}\n )\n return database_serializer.data\n\n\ndef get_table_list(request, schema):\n if schema is None:\n return []\n table_serializer = TableSerializer(\n Table.objects.filter(schema=schema),\n many=True,\n context={'request': request}\n )\n return table_serializer.data\n\n\ndef get_queries_list(request, schema):\n if schema is None:\n return []\n query_serializer = QuerySerializer(\n UIQuery.objects.filter(base_table__schema=schema),\n many=True,\n context={'request': request}\n )\n return query_serializer.data\n\n\ndef get_ui_type_list(request, database):\n if database is None:\n return []\n type_serializer = TypeSerializer(\n UIType,\n many=True,\n context={'request': request}\n )\n return type_serializer.data\n\n\ndef get_common_data(request, database, schema=None):\n return {\n 'current_db': database.name if database else None,\n 'current_schema': schema.id if schema else None,\n 'schemas': get_schema_list(request, database),\n 'databases': get_database_list(request),\n 'tables': get_table_list(request, schema),\n 'queries': get_queries_list(request, schema),\n 'abstract_types': get_ui_type_list(request, database)\n }\n\n\ndef get_current_database(request, db_name):\n # if there's a DB name passed in, try to retrieve the database, or return a 404 error.\n if db_name is not None:\n return get_object_or_404(Database, name=db_name)\n else:\n try:\n # Try to get the first database 
available\n return Database.objects.order_by('id').first()\n except Database.DoesNotExist:\n return None\n\n\ndef get_current_schema(request, schema_id, database):\n # if there's a schema ID passed in, try to retrieve the schema, or return a 404 error.\n if schema_id is not None:\n return get_object_or_404(Schema, id=schema_id)\n else:\n try:\n # Try to get the first schema in the DB\n return Schema.objects.filter(database=database).order_by('id').first()\n except Schema.DoesNotExist:\n return None\n\n\ndef render_schema(request, database, schema):\n # if there's no schema available, redirect to the schemas page.\n if not schema:\n return redirect('schemas', db_name=database.name)\n else:\n # We are redirecting so that the correct URL is passed to the frontend.\n return redirect('schema_home', db_name=database.name, schema_id=schema.id)\n\n\n@login_required\n@api_view(['POST'])\ndef reflect_all(_):\n reset_reflection()\n return Response(status=status.HTTP_200_OK)\n\n\n@login_required\ndef home(request):\n database = get_current_database(request, None)\n return redirect('schemas', db_name=database.name)\n\n\n@login_required\ndef schema_home(request, db_name, schema_id, **kwargs):\n database = get_current_database(request, db_name)\n schema = get_current_schema(request, schema_id, database)\n return render(request, 'mathesar/index.html', {\n 'common_data': get_common_data(request, database, schema)\n })\n\n\n@login_required\ndef schemas(request, db_name):\n database = get_current_database(request, db_name)\n return render(request, 'mathesar/index.html', {\n 'common_data': get_common_data(request, database, None)\n })\n", "path": "mathesar/views.py"}]}
| 2,106 | 356 |
gh_patches_debug_46346
|
rasdani/github-patches
|
git_diff
|
vaexio__vaex-1113
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG-REPORT] Overflowing memory using a custom selection in df.count() in Vaex 4.0.0a5
Using `vaex: 4.0.0a5`, I am having the issue that memory overflows when I count a selection that combines a `df.select()` selection with an expression. The MWE below crashes for me: the machine I am running it on has 251 GB of memory, reaches 100% usage, and then crashes.
I am fairly sure that the bug arises from combining the named selection and the `(x>y)` expression.
```python
import vaex
import numpy as np
x,y,z = np.random.normal(size=(3,10_000_000))
df = vaex.from_arrays(x=x,y=y,z=z)
df.select('x*y<z*z',name='tmp')
df.count(selection='(tmp)&(x>y)')
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `packages/vaex-core/vaex/scopes.py`
Content:
```
1 from __future__ import division, print_function
2 import logging
3 import numpy as np
4 import pyarrow as pa
5 import vaex.array_types
6 import vaex.arrow.numpy_dispatch
7
8
9 from .utils import (_ensure_strings_from_expressions,
10 _ensure_string_from_expression,
11 _ensure_list,
12 _is_limit,
13 _isnumber,
14 _issequence,
15 _is_string,
16 _parse_reduction,
17 _parse_n,
18 _normalize_selection_name,
19 _normalize,
20 _parse_f,
21 _expand,
22 _expand_shape,
23 _expand_limits,
24 as_flat_float,
25 as_flat_array,
26 _split_and_combine_mask)
27 from .expression import expression_namespace
28 from vaex.arrow.numpy_dispatch import wrap, unwrap
29 import vaex.expression
30
31 logger = logging.getLogger('vaex.scopes')
32
33
34 class ScopeBase(object):
35 def get(self, attr, default=None): # otherwise pdb crashes during pytest
36 if attr == "__tracebackhide__":
37 return False
38 return default
39
40
41 class UnitScope(ScopeBase):
42 def __init__(self, df, value=None):
43 self.df = df
44 self.value = value
45
46 def __getitem__(self, variable):
47 import astropy.units
48 if variable in self.df.units:
49 unit = self.df.units[variable]
50 return (self.value * unit) if self.value is not None else unit
51 elif variable in self.df.virtual_columns:
52 return eval(self.df.virtual_columns[variable], expression_namespace, self)
53 elif variable in self.df.variables:
54 return astropy.units.dimensionless_unscaled # TODO units for variables?
55 else:
56 raise KeyError("unkown variable %s" % variable)
57
58
59 class _BlockScope(ScopeBase):
60 def __init__(self, df, i1, i2, mask=None, **variables):
61 """
62
63 :param DataFrameLocal DataFrame: the *local* DataFrame
64 :param i1: start index
65 :param i2: end index
66 :param values:
67 :return:
68 """
69 self.df = df
70 self.i1 = int(i1)
71 self.i2 = int(i2)
72 self.variables = variables
73 self.values = dict(self.variables)
74 self.buffers = {}
75 self.mask = mask if mask is not None else None
76
77 def move(self, i1, i2):
78 length_new = i2 - i1
79 length_old = self.i2 - self.i1
80 if length_new > length_old: # old buffers are too small, discard them
81 self.buffers = {}
82 else:
83 for name in list(self.buffers.keys()):
84 self.buffers[name] = self.buffers[name][:length_new]
85 self.i1 = int(i1)
86 self.i2 = int(i2)
87 self.values = dict(self.variables)
88
89 def __contains__(self, name): # otherwise pdb crashes during pytest
90 return name in self.buffers # not sure this should also include varibles, columns and virtual columns
91
92 def _ensure_buffer(self, column):
93 if column not in self.buffers:
94 logger.debug("creating column for: %s", column)
95 self.buffers[column] = np.zeros(self.i2 - self.i1)
96
97 def evaluate(self, expression, out=None):
98 if isinstance(expression, vaex.expression.Expression):
99 expression = expression.expression
100 try:
101 # logger.debug("try avoid evaluating: %s", expression)
102 result = self[expression]
103 except KeyError:
104 # logger.debug("no luck, eval: %s", expression)
105 # result = ne.evaluate(expression, local_dict=self, out=out)
106 # logger.debug("in eval")
107 # eval("def f(")
108 result = eval(expression, expression_namespace, self)
109 self.values[expression] = result
110 # if out is not None:
111 # out[:] = result
112 # result = out
113 # logger.debug("out eval")
114 # logger.debug("done with eval of %s", expression)
115 result = unwrap(result)
116 return result
117
118 def __getitem__(self, variable):
119 # logger.debug("get " + variable)
120 # return self.df.columns[variable][self.i1:self.i2]
121 if variable == 'df':
122 return self # to support df['no!identifier']
123 if variable in expression_namespace:
124 return expression_namespace[variable]
125 try:
126 if variable in self.values:
127 return self.values[variable]
128 elif variable in self.df.columns:
129 offset = self.df._index_start
130 # if self.df._needs_copy(variable):
131 # self._ensure_buffer(variable)
132 # self.values[variable] = self.buffers[variable] = self.df.columns[variable][self.i1:self.i2].astype(np.float64)
133 # Previously we casted anything to .astype(np.float64), this led to rounding off of int64, when exporting
134 # self.values[variable] = self.df.columns[variable][offset+self.i1:offset+self.i2][:]
135 # else:
136 values = self.df.columns[variable][offset+self.i1:offset+self.i2]
137 if self.mask is not None:
138 # TODO: we may want to put this in array_types
139 if isinstance(values, (pa.Array, pa.ChunkedArray)):
140 values = values.filter(vaex.array_types.to_arrow(self.mask))
141 else:
142 values = values[self.mask]
143 values = wrap(values)
144 self.values[variable] = values
145 elif variable in list(self.df.virtual_columns.keys()):
146 expression = self.df.virtual_columns[variable]
147 if isinstance(expression, dict):
148 function = expression['function']
149 arguments = [self.evaluate(k) for k in expression['arguments']]
150 self.values[variable] = function(*arguments)
151 else:
152 # self._ensure_buffer(variable)
153 values = self.evaluate(expression)
154 values = wrap(values)
155 self.values[variable] = values
156 # self.values[variable] = self.buffers[variable]
157 elif variable in self.df.functions:
158 f = self.df.functions[variable].f
159 return vaex.arrow.numpy_dispatch.autowrapper(f)
160 if variable not in self.values:
161 raise KeyError("Unknown variables or column: %r" % (variable,))
162
163 return self.values[variable]
164 except:
165 # logger.exception("error in evaluating: %r" % variable)
166 raise
167
168
169 class _BlockScopeSelection(ScopeBase):
170 def __init__(self, df, i1, i2, selection=None, cache=False, filter_mask=None):
171 self.df = df
172 self.i1 = i1
173 self.i2 = i2
174 self.selection = selection
175 self.store_in_cache = cache
176 self.filter_mask = filter_mask
177
178 def evaluate(self, expression):
179 if expression is True:
180 expression = "default"
181 try:
182 expression = _ensure_string_from_expression(expression)
183 result = eval(expression, expression_namespace, self)
184 except:
185 import traceback as tb
186 tb.print_stack()
187 raise
188 result = unwrap(result)
189 return result
190
191 def __contains__(self, name): # otherwise pdb crashes during pytest
192 return False
193
194 def __getitem__(self, variable):
195 if variable == "__tracebackhide__": # required for tracebacks
196 return False
197 # logger.debug("getitem for selection: %s", variable)
198 try:
199 selection = self.selection
200 if selection is None and self.df.has_selection(variable):
201 selection = self.df.get_selection(variable)
202 # logger.debug("selection for %r: %s %r", variable, selection, self.df.selection_histories)
203 key = (self.i1, self.i2)
204 if selection:
205 assert variable in self.df._selection_masks, "%s mask not found" % (variable, )
206 cache = self.df._selection_mask_caches[variable]
207 # logger.debug("selection cache: %r" % cache)
208 full_mask = self.df._selection_masks[variable]
209 selection_in_cache, mask = cache.get(key, (None, None))
210
211 # logger.debug("mask for %r is %r", variable, mask)
212 if selection_in_cache == selection:
213 if self.filter_mask is not None:
214 return mask[self.filter_mask]
215 return mask
216 # logger.debug("was not cached")
217 if variable in self.df.variables:
218 return self.df.variables[variable]
219 mask_values = selection.evaluate(self.df, variable, self.i1, self.i2, self.filter_mask)
220
221 # get a view on a subset of the mask
222 sub_mask = full_mask.view(self.i1, self.i2)
223 sub_mask_array = np.asarray(sub_mask)
224 # and update it
225 if self.filter_mask is not None: # if we have a mask, the selection we evaluated is also filtered
226 sub_mask_array[:] = 0
227 sub_mask_array[:][self.filter_mask] = mask_values
228 else:
229 sub_mask_array[:] = mask_values
230 # logger.debug("put selection in mask with key %r" % (key,))
231 if self.store_in_cache:
232 cache[key] = selection, sub_mask_array
233 # cache[key] = selection, mask_values
234 if self.filter_mask is not None:
235 return sub_mask_array[self.filter_mask]
236 else:
237 return sub_mask_array
238 # return mask_values
239 else:
240 offset = self.df._index_start
241 if variable in expression_namespace:
242 return expression_namespace[variable]
243 elif variable in self.df.columns:
244 values = self.df.columns[variable][offset+self.i1:offset+self.i2]
245 # TODO: we may want to put this in array_types
246 if self.filter_mask is not None:
247 if isinstance(values, (pa.Array, pa.ChunkedArray)):
248 values = values.filter(vaex.array_types.to_arrow(self.filter_mask))
249 else:
250 values = values[self.filter_mask]
251 values = wrap(values)
252 return values
253 elif variable in self.df.variables:
254 return self.df.variables[variable]
255 elif variable in self.df.virtual_columns:
256 expression = self.df.virtual_columns[variable]
257 # self._ensure_buffer(variable)
258 if expression == variable:
259 raise ValueError(f'Recursion protection: virtual column {variable} refers to itself')
260 values = self.evaluate(expression) # , out=self.buffers[variable])
261 values = wrap(values)
262 return values
263 elif variable in self.df.functions:
264 f = self.df.functions[variable].f
265 return vaex.arrow.numpy_dispatch.autowrapper(f)
266 raise KeyError("Unknown variables or column: %r" % (variable,))
267 except:
268 import traceback as tb
269 tb.print_exc()
270 logger.exception("error in evaluating: %r" % variable)
271 raise
272
273
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/packages/vaex-core/vaex/scopes.py b/packages/vaex-core/vaex/scopes.py
--- a/packages/vaex-core/vaex/scopes.py
+++ b/packages/vaex-core/vaex/scopes.py
@@ -106,7 +106,7 @@
# logger.debug("in eval")
# eval("def f(")
result = eval(expression, expression_namespace, self)
- self.values[expression] = result
+ self.values[expression] = wrap(result)
# if out is not None:
# out[:] = result
# result = out
@@ -140,19 +140,17 @@
values = values.filter(vaex.array_types.to_arrow(self.mask))
else:
values = values[self.mask]
- values = wrap(values)
- self.values[variable] = values
+ self.values[variable] = wrap(values)
elif variable in list(self.df.virtual_columns.keys()):
expression = self.df.virtual_columns[variable]
if isinstance(expression, dict):
function = expression['function']
arguments = [self.evaluate(k) for k in expression['arguments']]
- self.values[variable] = function(*arguments)
+ self.values[variable] = wrap(function(*arguments))
else:
# self._ensure_buffer(variable)
values = self.evaluate(expression)
- values = wrap(values)
- self.values[variable] = values
+ self.values[variable] = wrap(values)
# self.values[variable] = self.buffers[variable]
elif variable in self.df.functions:
f = self.df.functions[variable].f
@@ -211,11 +209,11 @@
# logger.debug("mask for %r is %r", variable, mask)
if selection_in_cache == selection:
if self.filter_mask is not None:
- return mask[self.filter_mask]
- return mask
+ return wrap(mask[self.filter_mask])
+ return wrap(mask)
# logger.debug("was not cached")
if variable in self.df.variables:
- return self.df.variables[variable]
+ return wrap(self.df.variables[variable])
mask_values = selection.evaluate(self.df, variable, self.i1, self.i2, self.filter_mask)
# get a view on a subset of the mask
@@ -232,14 +230,14 @@
cache[key] = selection, sub_mask_array
# cache[key] = selection, mask_values
if self.filter_mask is not None:
- return sub_mask_array[self.filter_mask]
+ return wrap(sub_mask_array[self.filter_mask])
else:
- return sub_mask_array
+ return wrap(sub_mask_array)
# return mask_values
else:
offset = self.df._index_start
if variable in expression_namespace:
- return expression_namespace[variable]
+ return wrap(expression_namespace[variable])
elif variable in self.df.columns:
values = self.df.columns[variable][offset+self.i1:offset+self.i2]
# TODO: we may want to put this in array_types
@@ -248,8 +246,7 @@
values = values.filter(vaex.array_types.to_arrow(self.filter_mask))
else:
values = values[self.filter_mask]
- values = wrap(values)
- return values
+ return wrap(values)
elif variable in self.df.variables:
return self.df.variables[variable]
elif variable in self.df.virtual_columns:
@@ -258,8 +255,7 @@
if expression == variable:
raise ValueError(f'Recursion protection: virtual column {variable} refers to itself')
values = self.evaluate(expression) # , out=self.buffers[variable])
- values = wrap(values)
- return values
+ return wrap(values)
elif variable in self.df.functions:
f = self.df.functions[variable].f
return vaex.arrow.numpy_dispatch.autowrapper(f)
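
The common thread in the hunks above is that every value handed back from the selection scope now passes through `wrap()` from `vaex.arrow.numpy_dispatch`, so numpy and Arrow arrays are dispatched uniformly when a named selection is combined with an extra expression. Below is a minimal sketch of that pattern only — `wrap`/`unwrap` are the real helpers imported in scopes.py, while `lookup_value` and `finish` are made-up stand-ins for the scope's `__getitem__` and `evaluate` logic:

```python
# Illustrative sketch, not part of the patch itself.
from vaex.arrow.numpy_dispatch import wrap, unwrap

def lookup_value(store, name):
    # Whatever backs the value (numpy array, Arrow array, scalar variable),
    # return it wrapped so expression evaluation sees one consistent interface.
    return wrap(store[name])

def finish(result):
    # Unwrap once at the boundary, mirroring how evaluate() returns results.
    return unwrap(result)
```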
|
{"golden_diff": "diff --git a/packages/vaex-core/vaex/scopes.py b/packages/vaex-core/vaex/scopes.py\n--- a/packages/vaex-core/vaex/scopes.py\n+++ b/packages/vaex-core/vaex/scopes.py\n@@ -106,7 +106,7 @@\n # logger.debug(\"in eval\")\n # eval(\"def f(\")\n result = eval(expression, expression_namespace, self)\n- self.values[expression] = result\n+ self.values[expression] = wrap(result)\n # if out is not None:\n # out[:] = result\n # result = out\n@@ -140,19 +140,17 @@\n values = values.filter(vaex.array_types.to_arrow(self.mask))\n else:\n values = values[self.mask]\n- values = wrap(values)\n- self.values[variable] = values\n+ self.values[variable] = wrap(values)\n elif variable in list(self.df.virtual_columns.keys()):\n expression = self.df.virtual_columns[variable]\n if isinstance(expression, dict):\n function = expression['function']\n arguments = [self.evaluate(k) for k in expression['arguments']]\n- self.values[variable] = function(*arguments)\n+ self.values[variable] = wrap(function(*arguments))\n else:\n # self._ensure_buffer(variable)\n values = self.evaluate(expression)\n- values = wrap(values)\n- self.values[variable] = values\n+ self.values[variable] = wrap(values)\n # self.values[variable] = self.buffers[variable]\n elif variable in self.df.functions:\n f = self.df.functions[variable].f\n@@ -211,11 +209,11 @@\n # logger.debug(\"mask for %r is %r\", variable, mask)\n if selection_in_cache == selection:\n if self.filter_mask is not None:\n- return mask[self.filter_mask]\n- return mask\n+ return wrap(mask[self.filter_mask])\n+ return wrap(mask)\n # logger.debug(\"was not cached\")\n if variable in self.df.variables:\n- return self.df.variables[variable]\n+ return wrap(self.df.variables[variable])\n mask_values = selection.evaluate(self.df, variable, self.i1, self.i2, self.filter_mask)\n \n # get a view on a subset of the mask\n@@ -232,14 +230,14 @@\n cache[key] = selection, sub_mask_array\n # cache[key] = selection, mask_values\n if self.filter_mask is not None:\n- return sub_mask_array[self.filter_mask]\n+ return wrap(sub_mask_array[self.filter_mask])\n else:\n- return sub_mask_array\n+ return wrap(sub_mask_array)\n # return mask_values\n else:\n offset = self.df._index_start\n if variable in expression_namespace:\n- return expression_namespace[variable]\n+ return wrap(expression_namespace[variable])\n elif variable in self.df.columns:\n values = self.df.columns[variable][offset+self.i1:offset+self.i2]\n # TODO: we may want to put this in array_types\n@@ -248,8 +246,7 @@\n values = values.filter(vaex.array_types.to_arrow(self.filter_mask))\n else:\n values = values[self.filter_mask]\n- values = wrap(values)\n- return values\n+ return wrap(values)\n elif variable in self.df.variables:\n return self.df.variables[variable]\n elif variable in self.df.virtual_columns:\n@@ -258,8 +255,7 @@\n if expression == variable:\n raise ValueError(f'Recursion protection: virtual column {variable} refers to itself')\n values = self.evaluate(expression) # , out=self.buffers[variable])\n- values = wrap(values)\n- return values\n+ return wrap(values)\n elif variable in self.df.functions:\n f = self.df.functions[variable].f\n return vaex.arrow.numpy_dispatch.autowrapper(f)\n", "issue": "[BUG-REPORT]Overflowing memory using a custom selection in df.count() in Vaex 4.0.0a5\nUsing `vaex: 4.0.0a5` I am having the issue that my memory is overflowing when I do a count of a selection that combines a `df.select()` selection with an expression. The MWE below crashes for me. 
The machine that I am running this on has 251G memory, reaches 100% of this and then crashes.\r\n\r\nI am pretty sure that the bug arises from combining the named selection and the `(x>y)` expression.\r\n\r\n```\r\nimport vaex\r\nimport numpy as np\r\n\r\nx,y,z = np.random.normal(size=(3,10_000_000))\r\n\r\ndf = vaex.from_arrays(x=x,y=y,z=z)\r\n\r\ndf.select('x*y<z*z',name='tmp')\r\n\r\ndf.count(selection='(tmp)&(x>y)')\r\n```\n", "before_files": [{"content": "from __future__ import division, print_function\nimport logging\nimport numpy as np\nimport pyarrow as pa\nimport vaex.array_types\nimport vaex.arrow.numpy_dispatch\n\n\nfrom .utils import (_ensure_strings_from_expressions,\n _ensure_string_from_expression,\n _ensure_list,\n _is_limit,\n _isnumber,\n _issequence,\n _is_string,\n _parse_reduction,\n _parse_n,\n _normalize_selection_name,\n _normalize,\n _parse_f,\n _expand,\n _expand_shape,\n _expand_limits,\n as_flat_float,\n as_flat_array,\n _split_and_combine_mask)\nfrom .expression import expression_namespace\nfrom vaex.arrow.numpy_dispatch import wrap, unwrap\nimport vaex.expression\n\nlogger = logging.getLogger('vaex.scopes')\n\n\nclass ScopeBase(object):\n def get(self, attr, default=None): # otherwise pdb crashes during pytest\n if attr == \"__tracebackhide__\":\n return False\n return default\n\n\nclass UnitScope(ScopeBase):\n def __init__(self, df, value=None):\n self.df = df\n self.value = value\n\n def __getitem__(self, variable):\n import astropy.units\n if variable in self.df.units:\n unit = self.df.units[variable]\n return (self.value * unit) if self.value is not None else unit\n elif variable in self.df.virtual_columns:\n return eval(self.df.virtual_columns[variable], expression_namespace, self)\n elif variable in self.df.variables:\n return astropy.units.dimensionless_unscaled # TODO units for variables?\n else:\n raise KeyError(\"unkown variable %s\" % variable)\n\n\nclass _BlockScope(ScopeBase):\n def __init__(self, df, i1, i2, mask=None, **variables):\n \"\"\"\n\n :param DataFrameLocal DataFrame: the *local* DataFrame\n :param i1: start index\n :param i2: end index\n :param values:\n :return:\n \"\"\"\n self.df = df\n self.i1 = int(i1)\n self.i2 = int(i2)\n self.variables = variables\n self.values = dict(self.variables)\n self.buffers = {}\n self.mask = mask if mask is not None else None\n\n def move(self, i1, i2):\n length_new = i2 - i1\n length_old = self.i2 - self.i1\n if length_new > length_old: # old buffers are too small, discard them\n self.buffers = {}\n else:\n for name in list(self.buffers.keys()):\n self.buffers[name] = self.buffers[name][:length_new]\n self.i1 = int(i1)\n self.i2 = int(i2)\n self.values = dict(self.variables)\n\n def __contains__(self, name): # otherwise pdb crashes during pytest\n return name in self.buffers # not sure this should also include varibles, columns and virtual columns\n\n def _ensure_buffer(self, column):\n if column not in self.buffers:\n logger.debug(\"creating column for: %s\", column)\n self.buffers[column] = np.zeros(self.i2 - self.i1)\n\n def evaluate(self, expression, out=None):\n if isinstance(expression, vaex.expression.Expression):\n expression = expression.expression\n try:\n # logger.debug(\"try avoid evaluating: %s\", expression)\n result = self[expression]\n except KeyError:\n # logger.debug(\"no luck, eval: %s\", expression)\n # result = ne.evaluate(expression, local_dict=self, out=out)\n # logger.debug(\"in eval\")\n # eval(\"def f(\")\n result = eval(expression, expression_namespace, self)\n 
self.values[expression] = result\n # if out is not None:\n # out[:] = result\n # result = out\n # logger.debug(\"out eval\")\n # logger.debug(\"done with eval of %s\", expression)\n result = unwrap(result)\n return result\n\n def __getitem__(self, variable):\n # logger.debug(\"get \" + variable)\n # return self.df.columns[variable][self.i1:self.i2]\n if variable == 'df':\n return self # to support df['no!identifier']\n if variable in expression_namespace:\n return expression_namespace[variable]\n try:\n if variable in self.values:\n return self.values[variable]\n elif variable in self.df.columns:\n offset = self.df._index_start\n # if self.df._needs_copy(variable):\n # self._ensure_buffer(variable)\n # self.values[variable] = self.buffers[variable] = self.df.columns[variable][self.i1:self.i2].astype(np.float64)\n # Previously we casted anything to .astype(np.float64), this led to rounding off of int64, when exporting\n # self.values[variable] = self.df.columns[variable][offset+self.i1:offset+self.i2][:]\n # else:\n values = self.df.columns[variable][offset+self.i1:offset+self.i2]\n if self.mask is not None:\n # TODO: we may want to put this in array_types\n if isinstance(values, (pa.Array, pa.ChunkedArray)):\n values = values.filter(vaex.array_types.to_arrow(self.mask))\n else:\n values = values[self.mask]\n values = wrap(values)\n self.values[variable] = values\n elif variable in list(self.df.virtual_columns.keys()):\n expression = self.df.virtual_columns[variable]\n if isinstance(expression, dict):\n function = expression['function']\n arguments = [self.evaluate(k) for k in expression['arguments']]\n self.values[variable] = function(*arguments)\n else:\n # self._ensure_buffer(variable)\n values = self.evaluate(expression)\n values = wrap(values)\n self.values[variable] = values\n # self.values[variable] = self.buffers[variable]\n elif variable in self.df.functions:\n f = self.df.functions[variable].f\n return vaex.arrow.numpy_dispatch.autowrapper(f)\n if variable not in self.values:\n raise KeyError(\"Unknown variables or column: %r\" % (variable,))\n\n return self.values[variable]\n except:\n # logger.exception(\"error in evaluating: %r\" % variable)\n raise\n\n\nclass _BlockScopeSelection(ScopeBase):\n def __init__(self, df, i1, i2, selection=None, cache=False, filter_mask=None):\n self.df = df\n self.i1 = i1\n self.i2 = i2\n self.selection = selection\n self.store_in_cache = cache\n self.filter_mask = filter_mask\n\n def evaluate(self, expression):\n if expression is True:\n expression = \"default\"\n try:\n expression = _ensure_string_from_expression(expression)\n result = eval(expression, expression_namespace, self)\n except:\n import traceback as tb\n tb.print_stack()\n raise\n result = unwrap(result)\n return result\n\n def __contains__(self, name): # otherwise pdb crashes during pytest\n return False\n\n def __getitem__(self, variable):\n if variable == \"__tracebackhide__\": # required for tracebacks\n return False\n # logger.debug(\"getitem for selection: %s\", variable)\n try:\n selection = self.selection\n if selection is None and self.df.has_selection(variable):\n selection = self.df.get_selection(variable)\n # logger.debug(\"selection for %r: %s %r\", variable, selection, self.df.selection_histories)\n key = (self.i1, self.i2)\n if selection:\n assert variable in self.df._selection_masks, \"%s mask not found\" % (variable, )\n cache = self.df._selection_mask_caches[variable]\n # logger.debug(\"selection cache: %r\" % cache)\n full_mask = self.df._selection_masks[variable]\n 
selection_in_cache, mask = cache.get(key, (None, None))\n\n # logger.debug(\"mask for %r is %r\", variable, mask)\n if selection_in_cache == selection:\n if self.filter_mask is not None:\n return mask[self.filter_mask]\n return mask\n # logger.debug(\"was not cached\")\n if variable in self.df.variables:\n return self.df.variables[variable]\n mask_values = selection.evaluate(self.df, variable, self.i1, self.i2, self.filter_mask)\n \n # get a view on a subset of the mask\n sub_mask = full_mask.view(self.i1, self.i2)\n sub_mask_array = np.asarray(sub_mask)\n # and update it\n if self.filter_mask is not None: # if we have a mask, the selection we evaluated is also filtered\n sub_mask_array[:] = 0\n sub_mask_array[:][self.filter_mask] = mask_values\n else:\n sub_mask_array[:] = mask_values\n # logger.debug(\"put selection in mask with key %r\" % (key,))\n if self.store_in_cache:\n cache[key] = selection, sub_mask_array\n # cache[key] = selection, mask_values\n if self.filter_mask is not None:\n return sub_mask_array[self.filter_mask]\n else:\n return sub_mask_array\n # return mask_values\n else:\n offset = self.df._index_start\n if variable in expression_namespace:\n return expression_namespace[variable]\n elif variable in self.df.columns:\n values = self.df.columns[variable][offset+self.i1:offset+self.i2]\n # TODO: we may want to put this in array_types\n if self.filter_mask is not None:\n if isinstance(values, (pa.Array, pa.ChunkedArray)):\n values = values.filter(vaex.array_types.to_arrow(self.filter_mask))\n else:\n values = values[self.filter_mask]\n values = wrap(values)\n return values\n elif variable in self.df.variables:\n return self.df.variables[variable]\n elif variable in self.df.virtual_columns:\n expression = self.df.virtual_columns[variable]\n # self._ensure_buffer(variable)\n if expression == variable:\n raise ValueError(f'Recursion protection: virtual column {variable} refers to itself')\n values = self.evaluate(expression) # , out=self.buffers[variable])\n values = wrap(values)\n return values\n elif variable in self.df.functions:\n f = self.df.functions[variable].f\n return vaex.arrow.numpy_dispatch.autowrapper(f)\n raise KeyError(\"Unknown variables or column: %r\" % (variable,))\n except:\n import traceback as tb\n tb.print_exc()\n logger.exception(\"error in evaluating: %r\" % variable)\n raise\n\n", "path": "packages/vaex-core/vaex/scopes.py"}], "after_files": [{"content": "from __future__ import division, print_function\nimport logging\nimport numpy as np\nimport pyarrow as pa\nimport vaex.array_types\nimport vaex.arrow.numpy_dispatch\n\n\nfrom .utils import (_ensure_strings_from_expressions,\n _ensure_string_from_expression,\n _ensure_list,\n _is_limit,\n _isnumber,\n _issequence,\n _is_string,\n _parse_reduction,\n _parse_n,\n _normalize_selection_name,\n _normalize,\n _parse_f,\n _expand,\n _expand_shape,\n _expand_limits,\n as_flat_float,\n as_flat_array,\n _split_and_combine_mask)\nfrom .expression import expression_namespace\nfrom vaex.arrow.numpy_dispatch import wrap, unwrap\nimport vaex.expression\n\nlogger = logging.getLogger('vaex.scopes')\n\n\nclass ScopeBase(object):\n def get(self, attr, default=None): # otherwise pdb crashes during pytest\n if attr == \"__tracebackhide__\":\n return False\n return default\n\n\nclass UnitScope(ScopeBase):\n def __init__(self, df, value=None):\n self.df = df\n self.value = value\n\n def __getitem__(self, variable):\n import astropy.units\n if variable in self.df.units:\n unit = self.df.units[variable]\n return (self.value * 
unit) if self.value is not None else unit\n elif variable in self.df.virtual_columns:\n return eval(self.df.virtual_columns[variable], expression_namespace, self)\n elif variable in self.df.variables:\n return astropy.units.dimensionless_unscaled # TODO units for variables?\n else:\n raise KeyError(\"unkown variable %s\" % variable)\n\n\nclass _BlockScope(ScopeBase):\n def __init__(self, df, i1, i2, mask=None, **variables):\n \"\"\"\n\n :param DataFrameLocal DataFrame: the *local* DataFrame\n :param i1: start index\n :param i2: end index\n :param values:\n :return:\n \"\"\"\n self.df = df\n self.i1 = int(i1)\n self.i2 = int(i2)\n self.variables = variables\n self.values = dict(self.variables)\n self.buffers = {}\n self.mask = mask if mask is not None else None\n\n def move(self, i1, i2):\n length_new = i2 - i1\n length_old = self.i2 - self.i1\n if length_new > length_old: # old buffers are too small, discard them\n self.buffers = {}\n else:\n for name in list(self.buffers.keys()):\n self.buffers[name] = self.buffers[name][:length_new]\n self.i1 = int(i1)\n self.i2 = int(i2)\n self.values = dict(self.variables)\n\n def __contains__(self, name): # otherwise pdb crashes during pytest\n return name in self.buffers # not sure this should also include varibles, columns and virtual columns\n\n def _ensure_buffer(self, column):\n if column not in self.buffers:\n logger.debug(\"creating column for: %s\", column)\n self.buffers[column] = np.zeros(self.i2 - self.i1)\n\n def evaluate(self, expression, out=None):\n if isinstance(expression, vaex.expression.Expression):\n expression = expression.expression\n try:\n # logger.debug(\"try avoid evaluating: %s\", expression)\n result = self[expression]\n except KeyError:\n # logger.debug(\"no luck, eval: %s\", expression)\n # result = ne.evaluate(expression, local_dict=self, out=out)\n # logger.debug(\"in eval\")\n # eval(\"def f(\")\n result = eval(expression, expression_namespace, self)\n self.values[expression] = wrap(result)\n # if out is not None:\n # out[:] = result\n # result = out\n # logger.debug(\"out eval\")\n # logger.debug(\"done with eval of %s\", expression)\n result = unwrap(result)\n return result\n\n def __getitem__(self, variable):\n # logger.debug(\"get \" + variable)\n # return self.df.columns[variable][self.i1:self.i2]\n if variable == 'df':\n return self # to support df['no!identifier']\n if variable in expression_namespace:\n return expression_namespace[variable]\n try:\n if variable in self.values:\n return self.values[variable]\n elif variable in self.df.columns:\n offset = self.df._index_start\n # if self.df._needs_copy(variable):\n # self._ensure_buffer(variable)\n # self.values[variable] = self.buffers[variable] = self.df.columns[variable][self.i1:self.i2].astype(np.float64)\n # Previously we casted anything to .astype(np.float64), this led to rounding off of int64, when exporting\n # self.values[variable] = self.df.columns[variable][offset+self.i1:offset+self.i2][:]\n # else:\n values = self.df.columns[variable][offset+self.i1:offset+self.i2]\n if self.mask is not None:\n # TODO: we may want to put this in array_types\n if isinstance(values, (pa.Array, pa.ChunkedArray)):\n values = values.filter(vaex.array_types.to_arrow(self.mask))\n else:\n values = values[self.mask]\n self.values[variable] = wrap(values)\n elif variable in list(self.df.virtual_columns.keys()):\n expression = self.df.virtual_columns[variable]\n if isinstance(expression, dict):\n function = expression['function']\n arguments = [self.evaluate(k) for k in 
expression['arguments']]\n self.values[variable] = wrap(function(*arguments))\n else:\n # self._ensure_buffer(variable)\n values = self.evaluate(expression)\n self.values[variable] = wrap(values)\n # self.values[variable] = self.buffers[variable]\n elif variable in self.df.functions:\n f = self.df.functions[variable].f\n return vaex.arrow.numpy_dispatch.autowrapper(f)\n if variable not in self.values:\n raise KeyError(\"Unknown variables or column: %r\" % (variable,))\n\n return self.values[variable]\n except:\n # logger.exception(\"error in evaluating: %r\" % variable)\n raise\n\n\nclass _BlockScopeSelection(ScopeBase):\n def __init__(self, df, i1, i2, selection=None, cache=False, filter_mask=None):\n self.df = df\n self.i1 = i1\n self.i2 = i2\n self.selection = selection\n self.store_in_cache = cache\n self.filter_mask = filter_mask\n\n def evaluate(self, expression):\n if expression is True:\n expression = \"default\"\n try:\n expression = _ensure_string_from_expression(expression)\n result = eval(expression, expression_namespace, self)\n except:\n import traceback as tb\n tb.print_stack()\n raise\n result = unwrap(result)\n return result\n\n def __contains__(self, name): # otherwise pdb crashes during pytest\n return False\n\n def __getitem__(self, variable):\n if variable == \"__tracebackhide__\": # required for tracebacks\n return False\n # logger.debug(\"getitem for selection: %s\", variable)\n try:\n selection = self.selection\n if selection is None and self.df.has_selection(variable):\n selection = self.df.get_selection(variable)\n # logger.debug(\"selection for %r: %s %r\", variable, selection, self.df.selection_histories)\n key = (self.i1, self.i2)\n if selection:\n assert variable in self.df._selection_masks, \"%s mask not found\" % (variable, )\n cache = self.df._selection_mask_caches[variable]\n # logger.debug(\"selection cache: %r\" % cache)\n full_mask = self.df._selection_masks[variable]\n selection_in_cache, mask = cache.get(key, (None, None))\n\n # logger.debug(\"mask for %r is %r\", variable, mask)\n if selection_in_cache == selection:\n if self.filter_mask is not None:\n return wrap(mask[self.filter_mask])\n return wrap(mask)\n # logger.debug(\"was not cached\")\n if variable in self.df.variables:\n return wrap(self.df.variables[variable])\n mask_values = selection.evaluate(self.df, variable, self.i1, self.i2, self.filter_mask)\n \n # get a view on a subset of the mask\n sub_mask = full_mask.view(self.i1, self.i2)\n sub_mask_array = np.asarray(sub_mask)\n # and update it\n if self.filter_mask is not None: # if we have a mask, the selection we evaluated is also filtered\n sub_mask_array[:] = 0\n sub_mask_array[:][self.filter_mask] = mask_values\n else:\n sub_mask_array[:] = mask_values\n # logger.debug(\"put selection in mask with key %r\" % (key,))\n if self.store_in_cache:\n cache[key] = selection, sub_mask_array\n # cache[key] = selection, mask_values\n if self.filter_mask is not None:\n return wrap(sub_mask_array[self.filter_mask])\n else:\n return wrap(sub_mask_array)\n # return mask_values\n else:\n offset = self.df._index_start\n if variable in expression_namespace:\n return wrap(expression_namespace[variable])\n elif variable in self.df.columns:\n values = self.df.columns[variable][offset+self.i1:offset+self.i2]\n # TODO: we may want to put this in array_types\n if self.filter_mask is not None:\n if isinstance(values, (pa.Array, pa.ChunkedArray)):\n values = values.filter(vaex.array_types.to_arrow(self.filter_mask))\n else:\n values = values[self.filter_mask]\n 
return wrap(values)\n elif variable in self.df.variables:\n return self.df.variables[variable]\n elif variable in self.df.virtual_columns:\n expression = self.df.virtual_columns[variable]\n # self._ensure_buffer(variable)\n if expression == variable:\n raise ValueError(f'Recursion protection: virtual column {variable} refers to itself')\n values = self.evaluate(expression) # , out=self.buffers[variable])\n return wrap(values)\n elif variable in self.df.functions:\n f = self.df.functions[variable].f\n return vaex.arrow.numpy_dispatch.autowrapper(f)\n raise KeyError(\"Unknown variables or column: %r\" % (variable,))\n except:\n import traceback as tb\n tb.print_exc()\n logger.exception(\"error in evaluating: %r\" % variable)\n raise\n\n", "path": "packages/vaex-core/vaex/scopes.py"}]}
| 3,495 | 869 |
gh_patches_debug_38585
|
rasdani/github-patches
|
git_diff
|
PaddlePaddle__models-2067
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ocr_recognition evaluation and prediction errors
1. In the ocr_recognition CTC model, eval.py has a function call that is missing an argument:
<img width="909" alt="e8ec154b99d031a872f8f222e1088e92" src="https://user-images.githubusercontent.com/46314656/53880163-453f1e00-404b-11e9-85c7-8dd84fe81020.png">
2. Errors raised by both the CTC and attention models at inference time:
<img width="987" alt="93b4cc1a5a0309688355e42a33bddad7" src="https://user-images.githubusercontent.com/46314656/53880294-9e0eb680-404b-11e9-9619-47c6bdaff889.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `PaddleCV/ocr_recognition/infer.py`
Content:
```
1 from __future__ import print_function
2 import paddle.fluid as fluid
3 from utility import add_arguments, print_arguments, to_lodtensor, get_ctc_feeder_data, get_attention_feeder_for_infer
4 import paddle.fluid.profiler as profiler
5 from crnn_ctc_model import ctc_infer
6 from attention_model import attention_infer
7 import numpy as np
8 import data_reader
9 import argparse
10 import functools
11 import os
12 import time
13
14 parser = argparse.ArgumentParser(description=__doc__)
15 add_arg = functools.partial(add_arguments, argparser=parser)
16 # yapf: disable
17 add_arg('model', str, "crnn_ctc", "Which type of network to be used. 'crnn_ctc' or 'attention'")
18 add_arg('model_path', str, None, "The model path to be used for inference.")
19 add_arg('input_images_dir', str, None, "The directory of images.")
20 add_arg('input_images_list', str, None, "The list file of images.")
21 add_arg('dict', str, None, "The dictionary. The result of inference will be index sequence if the dictionary was None.")
22 add_arg('use_gpu', bool, True, "Whether use GPU to infer.")
23 add_arg('iterations', int, 0, "The number of iterations. Zero or less means whole test set. More than 0 means the test set might be looped until # of iterations is reached.")
24 add_arg('profile', bool, False, "Whether to use profiling.")
25 add_arg('skip_batch_num', int, 0, "The number of first minibatches to skip as warm-up for better performance test.")
26 add_arg('batch_size', int, 1, "The minibatch size.")
27 # yapf: enable
28
29
30 def inference(args):
31 """OCR inference"""
32 if args.model == "crnn_ctc":
33 infer = ctc_infer
34 get_feeder_data = get_ctc_feeder_data
35 else:
36 infer = attention_infer
37 get_feeder_data = get_attention_feeder_for_infer
38 eos = 1
39 sos = 0
40 num_classes = data_reader.num_classes()
41 data_shape = data_reader.data_shape()
42 # define network
43 images = fluid.layers.data(name='pixel', shape=data_shape, dtype='float32')
44 ids = infer(images, num_classes, use_cudnn=True if args.use_gpu else False)
45 # data reader
46 infer_reader = data_reader.inference(
47 batch_size=args.batch_size,
48 infer_images_dir=args.input_images_dir,
49 infer_list_file=args.input_images_list,
50 cycle=True if args.iterations > 0 else False,
51 model=args.model)
52 # prepare environment
53 place = fluid.CPUPlace()
54 if args.use_gpu:
55 place = fluid.CUDAPlace(0)
56
57 exe = fluid.Executor(place)
58 exe.run(fluid.default_startup_program())
59
60 # load dictionary
61 dict_map = None
62 if args.dict is not None and os.path.isfile(args.dict):
63 dict_map = {}
64 with open(args.dict) as dict_file:
65 for i, word in enumerate(dict_file):
66 dict_map[i] = word.strip()
67 print("Loaded dict from %s" % args.dict)
68
69 # load init model
70 model_dir = args.model_path
71 model_file_name = None
72 if not os.path.isdir(args.model_path):
73 model_dir = os.path.dirname(args.model_path)
74 model_file_name = os.path.basename(args.model_path)
75 fluid.io.load_params(exe, dirname=model_dir, filename=model_file_name)
76 print("Init model from: %s." % args.model_path)
77
78 batch_times = []
79 iters = 0
80 for data in infer_reader():
81 feed_dict = get_feeder_data(data, place, need_label=False)
82 if args.iterations > 0 and iters == args.iterations + args.skip_batch_num:
83 break
84 if iters < args.skip_batch_num:
85 print("Warm-up itaration")
86 if iters == args.skip_batch_num:
87 profiler.reset_profiler()
88
89 start = time.time()
90 result = exe.run(fluid.default_main_program(),
91 feed=feed_dict,
92 fetch_list=[ids],
93 return_numpy=False)
94 indexes = prune(np.array(result[0]).flatten(), 0, 1)
95 batch_time = time.time() - start
96 fps = args.batch_size / batch_time
97 batch_times.append(batch_time)
98 if dict_map is not None:
99 print("Iteration %d, latency: %.5f s, fps: %f, result: %s" % (
100 iters,
101 batch_time,
102 fps,
103 [dict_map[index] for index in indexes], ))
104 else:
105 print("Iteration %d, latency: %.5f s, fps: %f, result: %s" % (
106 iters,
107 batch_time,
108 fps,
109 indexes, ))
110
111 iters += 1
112
113 latencies = batch_times[args.skip_batch_num:]
114 latency_avg = np.average(latencies)
115 latency_pc99 = np.percentile(latencies, 99)
116 fpses = np.divide(args.batch_size, latencies)
117 fps_avg = np.average(fpses)
118 fps_pc99 = np.percentile(fpses, 1)
119
120 # Benchmark output
121 print('\nTotal examples (incl. warm-up): %d' % (iters * args.batch_size))
122 print('average latency: %.5f s, 99pc latency: %.5f s' % (latency_avg,
123 latency_pc99))
124 print('average fps: %.5f, fps for 99pc latency: %.5f' % (fps_avg, fps_pc99))
125
126
127 def prune(words, sos, eos):
128 """Remove unused tokens in prediction result."""
129 start_index = 0
130 end_index = len(words)
131 if sos in words:
132 start_index = np.where(words == sos)[0][0] + 1
133 if eos in words:
134 end_index = np.where(words == eos)[0][0]
135 return words[start_index:end_index]
136
137
138 def main():
139 args = parser.parse_args()
140 print_arguments(args)
141 if args.profile:
142 if args.use_gpu:
143 with profiler.cuda_profiler("cuda_profiler.txt", 'csv') as nvprof:
144 inference(args)
145 else:
146 with profiler.profiler("CPU", sorted_key='total') as cpuprof:
147 inference(args)
148 else:
149 inference(args)
150
151
152 if __name__ == "__main__":
153 main()
154
```
Path: `PaddleCV/ocr_recognition/utility.py`
Content:
```
1 """Contains common utility functions."""
2 # Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserve.
3 #
4 #Licensed under the Apache License, Version 2.0 (the "License");
5 #you may not use this file except in compliance with the License.
6 #You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 #Unless required by applicable law or agreed to in writing, software
11 #distributed under the License is distributed on an "AS IS" BASIS,
12 #WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 #See the License for the specific language governing permissions and
14 #limitations under the License.
15
16 from __future__ import absolute_import
17 from __future__ import division
18 from __future__ import print_function
19 import distutils.util
20 import numpy as np
21 from paddle.fluid import core
22 import paddle.fluid as fluid
23 import six
24
25
26 def print_arguments(args):
27 """Print argparse's arguments.
28
29 Usage:
30
31 .. code-block:: python
32
33 parser = argparse.ArgumentParser()
34 parser.add_argument("name", default="Jonh", type=str, help="User name.")
35 args = parser.parse_args()
36 print_arguments(args)
37
38 :param args: Input argparse.Namespace for printing.
39 :type args: argparse.Namespace
40 """
41 print("----------- Configuration Arguments -----------")
42 for arg, value in sorted(six.iteritems(vars(args))):
43 print("%s: %s" % (arg, value))
44 print("------------------------------------------------")
45
46
47 def add_arguments(argname, type, default, help, argparser, **kwargs):
48 """Add argparse's argument.
49
50 Usage:
51
52 .. code-block:: python
53
54 parser = argparse.ArgumentParser()
55 add_argument("name", str, "Jonh", "User name.", parser)
56 args = parser.parse_args()
57 """
58 type = distutils.util.strtobool if type == bool else type
59 argparser.add_argument(
60 "--" + argname,
61 default=default,
62 type=type,
63 help=help + ' Default: %(default)s.',
64 **kwargs)
65
66
67 def to_lodtensor(data, place):
68 seq_lens = [len(seq) for seq in data]
69 cur_len = 0
70 lod = [cur_len]
71 for l in seq_lens:
72 cur_len += l
73 lod.append(cur_len)
74 flattened_data = np.concatenate(data, axis=0).astype("int32")
75 flattened_data = flattened_data.reshape([len(flattened_data), 1])
76 res = core.LoDTensor()
77 res.set(flattened_data, place)
78 res.set_lod([lod])
79 return res
80
81
82 def get_ctc_feeder_data(data, place, need_label=True):
83 pixel_tensor = core.LoDTensor()
84 pixel_data = None
85 pixel_data = np.concatenate(
86 list(map(lambda x: x[0][np.newaxis, :], data)), axis=0).astype("float32")
87 pixel_tensor.set(pixel_data, place)
88 label_tensor = to_lodtensor(list(map(lambda x: x[1], data)), place)
89 if need_label:
90 return {"pixel": pixel_tensor, "label": label_tensor}
91 else:
92 return {"pixel": pixel_tensor}
93
94
95 def get_attention_feeder_data(data, place, need_label=True):
96 pixel_tensor = core.LoDTensor()
97 pixel_data = None
98 pixel_data = np.concatenate(
99 list(map(lambda x: x[0][np.newaxis, :], data)), axis=0).astype("float32")
100 pixel_tensor.set(pixel_data, place)
101 label_in_tensor = to_lodtensor(list(map(lambda x: x[1], data)), place)
102 label_out_tensor = to_lodtensor(list(map(lambda x: x[2], data)), place)
103 if need_label:
104 return {
105 "pixel": pixel_tensor,
106 "label_in": label_in_tensor,
107 "label_out": label_out_tensor
108 }
109 else:
110 return {"pixel": pixel_tensor}
111
112
113 def get_attention_feeder_for_infer(data, place):
114 batch_size = len(data)
115 init_ids_data = np.array([0 for _ in range(batch_size)], dtype='int64')
116 init_scores_data = np.array(
117 [1. for _ in range(batch_size)], dtype='float32')
118 init_ids_data = init_ids_data.reshape((batch_size, 1))
119 init_scores_data = init_scores_data.reshape((batch_size, 1))
120 init_recursive_seq_lens = [1] * batch_size
121 init_recursive_seq_lens = [init_recursive_seq_lens, init_recursive_seq_lens]
122 init_ids = fluid.create_lod_tensor(init_ids_data, init_recursive_seq_lens,
123 place)
124 init_scores = fluid.create_lod_tensor(init_scores_data,
125 init_recursive_seq_lens, place)
126
127 pixel_tensor = core.LoDTensor()
128 pixel_data = None
129 pixel_data = np.concatenate(
130 list(map(lambda x: x[0][np.newaxis, :], data)), axis=0).astype("float32")
131 pixel_tensor.set(pixel_data, place)
132 return {
133 "pixel": pixel_tensor,
134 "init_ids": init_ids,
135 "init_scores": init_scores
136 }
137
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/PaddleCV/ocr_recognition/infer.py b/PaddleCV/ocr_recognition/infer.py
--- a/PaddleCV/ocr_recognition/infer.py
+++ b/PaddleCV/ocr_recognition/infer.py
@@ -31,7 +31,7 @@
"""OCR inference"""
if args.model == "crnn_ctc":
infer = ctc_infer
- get_feeder_data = get_ctc_feeder_data
+ get_feeder_data = get_ctc_feeder_for_infer
else:
infer = attention_infer
get_feeder_data = get_attention_feeder_for_infer
@@ -78,7 +78,7 @@
batch_times = []
iters = 0
for data in infer_reader():
- feed_dict = get_feeder_data(data, place, need_label=False)
+ feed_dict = get_feeder_data(data, place)
if args.iterations > 0 and iters == args.iterations + args.skip_batch_num:
break
if iters < args.skip_batch_num:
diff --git a/PaddleCV/ocr_recognition/utility.py b/PaddleCV/ocr_recognition/utility.py
--- a/PaddleCV/ocr_recognition/utility.py
+++ b/PaddleCV/ocr_recognition/utility.py
@@ -83,7 +83,8 @@
pixel_tensor = core.LoDTensor()
pixel_data = None
pixel_data = np.concatenate(
- list(map(lambda x: x[0][np.newaxis, :], data)), axis=0).astype("float32")
+ list(map(lambda x: x[0][np.newaxis, :], data)),
+ axis=0).astype("float32")
pixel_tensor.set(pixel_data, place)
label_tensor = to_lodtensor(list(map(lambda x: x[1], data)), place)
if need_label:
@@ -92,11 +93,16 @@
return {"pixel": pixel_tensor}
+def get_ctc_feeder_for_infer(data, place):
+ return get_ctc_feeder_data(data, place, need_label=False)
+
+
def get_attention_feeder_data(data, place, need_label=True):
pixel_tensor = core.LoDTensor()
pixel_data = None
pixel_data = np.concatenate(
- list(map(lambda x: x[0][np.newaxis, :], data)), axis=0).astype("float32")
+ list(map(lambda x: x[0][np.newaxis, :], data)),
+ axis=0).astype("float32")
pixel_tensor.set(pixel_data, place)
label_in_tensor = to_lodtensor(list(map(lambda x: x[1], data)), place)
label_out_tensor = to_lodtensor(list(map(lambda x: x[2], data)), place)
@@ -127,7 +133,8 @@
pixel_tensor = core.LoDTensor()
pixel_data = None
pixel_data = np.concatenate(
- list(map(lambda x: x[0][np.newaxis, :], data)), axis=0).astype("float32")
+ list(map(lambda x: x[0][np.newaxis, :], data)),
+ axis=0).astype("float32")
pixel_tensor.set(pixel_data, place)
return {
"pixel": pixel_tensor,
|
{"golden_diff": "diff --git a/PaddleCV/ocr_recognition/infer.py b/PaddleCV/ocr_recognition/infer.py\n--- a/PaddleCV/ocr_recognition/infer.py\n+++ b/PaddleCV/ocr_recognition/infer.py\n@@ -31,7 +31,7 @@\n \"\"\"OCR inference\"\"\"\n if args.model == \"crnn_ctc\":\n infer = ctc_infer\n- get_feeder_data = get_ctc_feeder_data\n+ get_feeder_data = get_ctc_feeder_for_infer\n else:\n infer = attention_infer\n get_feeder_data = get_attention_feeder_for_infer\n@@ -78,7 +78,7 @@\n batch_times = []\n iters = 0\n for data in infer_reader():\n- feed_dict = get_feeder_data(data, place, need_label=False)\n+ feed_dict = get_feeder_data(data, place)\n if args.iterations > 0 and iters == args.iterations + args.skip_batch_num:\n break\n if iters < args.skip_batch_num:\ndiff --git a/PaddleCV/ocr_recognition/utility.py b/PaddleCV/ocr_recognition/utility.py\n--- a/PaddleCV/ocr_recognition/utility.py\n+++ b/PaddleCV/ocr_recognition/utility.py\n@@ -83,7 +83,8 @@\n pixel_tensor = core.LoDTensor()\n pixel_data = None\n pixel_data = np.concatenate(\n- list(map(lambda x: x[0][np.newaxis, :], data)), axis=0).astype(\"float32\")\n+ list(map(lambda x: x[0][np.newaxis, :], data)),\n+ axis=0).astype(\"float32\")\n pixel_tensor.set(pixel_data, place)\n label_tensor = to_lodtensor(list(map(lambda x: x[1], data)), place)\n if need_label:\n@@ -92,11 +93,16 @@\n return {\"pixel\": pixel_tensor}\n \n \n+def get_ctc_feeder_for_infer(data, place):\n+ return get_ctc_feeder_data(data, place, need_label=False)\n+\n+\n def get_attention_feeder_data(data, place, need_label=True):\n pixel_tensor = core.LoDTensor()\n pixel_data = None\n pixel_data = np.concatenate(\n- list(map(lambda x: x[0][np.newaxis, :], data)), axis=0).astype(\"float32\")\n+ list(map(lambda x: x[0][np.newaxis, :], data)),\n+ axis=0).astype(\"float32\")\n pixel_tensor.set(pixel_data, place)\n label_in_tensor = to_lodtensor(list(map(lambda x: x[1], data)), place)\n label_out_tensor = to_lodtensor(list(map(lambda x: x[2], data)), place)\n@@ -127,7 +133,8 @@\n pixel_tensor = core.LoDTensor()\n pixel_data = None\n pixel_data = np.concatenate(\n- list(map(lambda x: x[0][np.newaxis, :], data)), axis=0).astype(\"float32\")\n+ list(map(lambda x: x[0][np.newaxis, :], data)),\n+ axis=0).astype(\"float32\")\n pixel_tensor.set(pixel_data, place)\n return {\n \"pixel\": pixel_tensor,\n", "issue": "ocr_recognition \u8bc4\u4f30\u3001\u9884\u6d4b\u9519\u8bef\n1\u3001ocr_recognition CTC model eval.py\u6709\u51fd\u6570\u7f3a\u5c11\u53c2\u6570\uff1a\r\n<img width=\"909\" alt=\"e8ec154b99d031a872f8f222e1088e92\" src=\"https://user-images.githubusercontent.com/46314656/53880163-453f1e00-404b-11e9-85c7-8dd84fe81020.png\">\r\n2\u3001\u9884\u6d4b\u65f6CTC\u548cattention\u6a21\u578b\u7684\u62a5\u9519\uff1a\r\n<img width=\"987\" alt=\"93b4cc1a5a0309688355e42a33bddad7\" src=\"https://user-images.githubusercontent.com/46314656/53880294-9e0eb680-404b-11e9-9619-47c6bdaff889.png\">\r\n\n", "before_files": [{"content": "from __future__ import print_function\nimport paddle.fluid as fluid\nfrom utility import add_arguments, print_arguments, to_lodtensor, get_ctc_feeder_data, get_attention_feeder_for_infer\nimport paddle.fluid.profiler as profiler\nfrom crnn_ctc_model import ctc_infer\nfrom attention_model import attention_infer\nimport numpy as np\nimport data_reader\nimport argparse\nimport functools\nimport os\nimport time\n\nparser = argparse.ArgumentParser(description=__doc__)\nadd_arg = functools.partial(add_arguments, argparser=parser)\n# yapf: disable\nadd_arg('model', str, \"crnn_ctc\", 
\"Which type of network to be used. 'crnn_ctc' or 'attention'\")\nadd_arg('model_path', str, None, \"The model path to be used for inference.\")\nadd_arg('input_images_dir', str, None, \"The directory of images.\")\nadd_arg('input_images_list', str, None, \"The list file of images.\")\nadd_arg('dict', str, None, \"The dictionary. The result of inference will be index sequence if the dictionary was None.\")\nadd_arg('use_gpu', bool, True, \"Whether use GPU to infer.\")\nadd_arg('iterations', int, 0, \"The number of iterations. Zero or less means whole test set. More than 0 means the test set might be looped until # of iterations is reached.\")\nadd_arg('profile', bool, False, \"Whether to use profiling.\")\nadd_arg('skip_batch_num', int, 0, \"The number of first minibatches to skip as warm-up for better performance test.\")\nadd_arg('batch_size', int, 1, \"The minibatch size.\")\n# yapf: enable\n\n\ndef inference(args):\n \"\"\"OCR inference\"\"\"\n if args.model == \"crnn_ctc\":\n infer = ctc_infer\n get_feeder_data = get_ctc_feeder_data\n else:\n infer = attention_infer\n get_feeder_data = get_attention_feeder_for_infer\n eos = 1\n sos = 0\n num_classes = data_reader.num_classes()\n data_shape = data_reader.data_shape()\n # define network\n images = fluid.layers.data(name='pixel', shape=data_shape, dtype='float32')\n ids = infer(images, num_classes, use_cudnn=True if args.use_gpu else False)\n # data reader\n infer_reader = data_reader.inference(\n batch_size=args.batch_size,\n infer_images_dir=args.input_images_dir,\n infer_list_file=args.input_images_list,\n cycle=True if args.iterations > 0 else False,\n model=args.model)\n # prepare environment\n place = fluid.CPUPlace()\n if args.use_gpu:\n place = fluid.CUDAPlace(0)\n\n exe = fluid.Executor(place)\n exe.run(fluid.default_startup_program())\n\n # load dictionary\n dict_map = None\n if args.dict is not None and os.path.isfile(args.dict):\n dict_map = {}\n with open(args.dict) as dict_file:\n for i, word in enumerate(dict_file):\n dict_map[i] = word.strip()\n print(\"Loaded dict from %s\" % args.dict)\n\n # load init model\n model_dir = args.model_path\n model_file_name = None\n if not os.path.isdir(args.model_path):\n model_dir = os.path.dirname(args.model_path)\n model_file_name = os.path.basename(args.model_path)\n fluid.io.load_params(exe, dirname=model_dir, filename=model_file_name)\n print(\"Init model from: %s.\" % args.model_path)\n\n batch_times = []\n iters = 0\n for data in infer_reader():\n feed_dict = get_feeder_data(data, place, need_label=False)\n if args.iterations > 0 and iters == args.iterations + args.skip_batch_num:\n break\n if iters < args.skip_batch_num:\n print(\"Warm-up itaration\")\n if iters == args.skip_batch_num:\n profiler.reset_profiler()\n\n start = time.time()\n result = exe.run(fluid.default_main_program(),\n feed=feed_dict,\n fetch_list=[ids],\n return_numpy=False)\n indexes = prune(np.array(result[0]).flatten(), 0, 1)\n batch_time = time.time() - start\n fps = args.batch_size / batch_time\n batch_times.append(batch_time)\n if dict_map is not None:\n print(\"Iteration %d, latency: %.5f s, fps: %f, result: %s\" % (\n iters,\n batch_time,\n fps,\n [dict_map[index] for index in indexes], ))\n else:\n print(\"Iteration %d, latency: %.5f s, fps: %f, result: %s\" % (\n iters,\n batch_time,\n fps,\n indexes, ))\n\n iters += 1\n\n latencies = batch_times[args.skip_batch_num:]\n latency_avg = np.average(latencies)\n latency_pc99 = np.percentile(latencies, 99)\n fpses = np.divide(args.batch_size, latencies)\n 
fps_avg = np.average(fpses)\n fps_pc99 = np.percentile(fpses, 1)\n\n # Benchmark output\n print('\\nTotal examples (incl. warm-up): %d' % (iters * args.batch_size))\n print('average latency: %.5f s, 99pc latency: %.5f s' % (latency_avg,\n latency_pc99))\n print('average fps: %.5f, fps for 99pc latency: %.5f' % (fps_avg, fps_pc99))\n\n\ndef prune(words, sos, eos):\n \"\"\"Remove unused tokens in prediction result.\"\"\"\n start_index = 0\n end_index = len(words)\n if sos in words:\n start_index = np.where(words == sos)[0][0] + 1\n if eos in words:\n end_index = np.where(words == eos)[0][0]\n return words[start_index:end_index]\n\n\ndef main():\n args = parser.parse_args()\n print_arguments(args)\n if args.profile:\n if args.use_gpu:\n with profiler.cuda_profiler(\"cuda_profiler.txt\", 'csv') as nvprof:\n inference(args)\n else:\n with profiler.profiler(\"CPU\", sorted_key='total') as cpuprof:\n inference(args)\n else:\n inference(args)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "PaddleCV/ocr_recognition/infer.py"}, {"content": "\"\"\"Contains common utility functions.\"\"\"\n# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserve.\n#\n#Licensed under the Apache License, Version 2.0 (the \"License\");\n#you may not use this file except in compliance with the License.\n#You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n#Unless required by applicable law or agreed to in writing, software\n#distributed under the License is distributed on an \"AS IS\" BASIS,\n#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#See the License for the specific language governing permissions and\n#limitations under the License.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nimport distutils.util\nimport numpy as np\nfrom paddle.fluid import core\nimport paddle.fluid as fluid\nimport six\n\n\ndef print_arguments(args):\n \"\"\"Print argparse's arguments.\n\n Usage:\n\n .. code-block:: python\n\n parser = argparse.ArgumentParser()\n parser.add_argument(\"name\", default=\"Jonh\", type=str, help=\"User name.\")\n args = parser.parse_args()\n print_arguments(args)\n\n :param args: Input argparse.Namespace for printing.\n :type args: argparse.Namespace\n \"\"\"\n print(\"----------- Configuration Arguments -----------\")\n for arg, value in sorted(six.iteritems(vars(args))):\n print(\"%s: %s\" % (arg, value))\n print(\"------------------------------------------------\")\n\n\ndef add_arguments(argname, type, default, help, argparser, **kwargs):\n \"\"\"Add argparse's argument.\n\n Usage:\n\n .. 
code-block:: python\n\n parser = argparse.ArgumentParser()\n add_argument(\"name\", str, \"Jonh\", \"User name.\", parser)\n args = parser.parse_args()\n \"\"\"\n type = distutils.util.strtobool if type == bool else type\n argparser.add_argument(\n \"--\" + argname,\n default=default,\n type=type,\n help=help + ' Default: %(default)s.',\n **kwargs)\n\n\ndef to_lodtensor(data, place):\n seq_lens = [len(seq) for seq in data]\n cur_len = 0\n lod = [cur_len]\n for l in seq_lens:\n cur_len += l\n lod.append(cur_len)\n flattened_data = np.concatenate(data, axis=0).astype(\"int32\")\n flattened_data = flattened_data.reshape([len(flattened_data), 1])\n res = core.LoDTensor()\n res.set(flattened_data, place)\n res.set_lod([lod])\n return res\n\n\ndef get_ctc_feeder_data(data, place, need_label=True):\n pixel_tensor = core.LoDTensor()\n pixel_data = None\n pixel_data = np.concatenate(\n list(map(lambda x: x[0][np.newaxis, :], data)), axis=0).astype(\"float32\")\n pixel_tensor.set(pixel_data, place)\n label_tensor = to_lodtensor(list(map(lambda x: x[1], data)), place)\n if need_label:\n return {\"pixel\": pixel_tensor, \"label\": label_tensor}\n else:\n return {\"pixel\": pixel_tensor}\n\n\ndef get_attention_feeder_data(data, place, need_label=True):\n pixel_tensor = core.LoDTensor()\n pixel_data = None\n pixel_data = np.concatenate(\n list(map(lambda x: x[0][np.newaxis, :], data)), axis=0).astype(\"float32\")\n pixel_tensor.set(pixel_data, place)\n label_in_tensor = to_lodtensor(list(map(lambda x: x[1], data)), place)\n label_out_tensor = to_lodtensor(list(map(lambda x: x[2], data)), place)\n if need_label:\n return {\n \"pixel\": pixel_tensor,\n \"label_in\": label_in_tensor,\n \"label_out\": label_out_tensor\n }\n else:\n return {\"pixel\": pixel_tensor}\n\n\ndef get_attention_feeder_for_infer(data, place):\n batch_size = len(data)\n init_ids_data = np.array([0 for _ in range(batch_size)], dtype='int64')\n init_scores_data = np.array(\n [1. for _ in range(batch_size)], dtype='float32')\n init_ids_data = init_ids_data.reshape((batch_size, 1))\n init_scores_data = init_scores_data.reshape((batch_size, 1))\n init_recursive_seq_lens = [1] * batch_size\n init_recursive_seq_lens = [init_recursive_seq_lens, init_recursive_seq_lens]\n init_ids = fluid.create_lod_tensor(init_ids_data, init_recursive_seq_lens,\n place)\n init_scores = fluid.create_lod_tensor(init_scores_data,\n init_recursive_seq_lens, place)\n\n pixel_tensor = core.LoDTensor()\n pixel_data = None\n pixel_data = np.concatenate(\n list(map(lambda x: x[0][np.newaxis, :], data)), axis=0).astype(\"float32\")\n pixel_tensor.set(pixel_data, place)\n return {\n \"pixel\": pixel_tensor,\n \"init_ids\": init_ids,\n \"init_scores\": init_scores\n }\n", "path": "PaddleCV/ocr_recognition/utility.py"}], "after_files": [{"content": "from __future__ import print_function\nimport paddle.fluid as fluid\nfrom utility import add_arguments, print_arguments, to_lodtensor, get_ctc_feeder_data, get_attention_feeder_for_infer\nimport paddle.fluid.profiler as profiler\nfrom crnn_ctc_model import ctc_infer\nfrom attention_model import attention_infer\nimport numpy as np\nimport data_reader\nimport argparse\nimport functools\nimport os\nimport time\n\nparser = argparse.ArgumentParser(description=__doc__)\nadd_arg = functools.partial(add_arguments, argparser=parser)\n# yapf: disable\nadd_arg('model', str, \"crnn_ctc\", \"Which type of network to be used. 
'crnn_ctc' or 'attention'\")\nadd_arg('model_path', str, None, \"The model path to be used for inference.\")\nadd_arg('input_images_dir', str, None, \"The directory of images.\")\nadd_arg('input_images_list', str, None, \"The list file of images.\")\nadd_arg('dict', str, None, \"The dictionary. The result of inference will be index sequence if the dictionary was None.\")\nadd_arg('use_gpu', bool, True, \"Whether use GPU to infer.\")\nadd_arg('iterations', int, 0, \"The number of iterations. Zero or less means whole test set. More than 0 means the test set might be looped until # of iterations is reached.\")\nadd_arg('profile', bool, False, \"Whether to use profiling.\")\nadd_arg('skip_batch_num', int, 0, \"The number of first minibatches to skip as warm-up for better performance test.\")\nadd_arg('batch_size', int, 1, \"The minibatch size.\")\n# yapf: enable\n\n\ndef inference(args):\n \"\"\"OCR inference\"\"\"\n if args.model == \"crnn_ctc\":\n infer = ctc_infer\n get_feeder_data = get_ctc_feeder_for_infer\n else:\n infer = attention_infer\n get_feeder_data = get_attention_feeder_for_infer\n eos = 1\n sos = 0\n num_classes = data_reader.num_classes()\n data_shape = data_reader.data_shape()\n # define network\n images = fluid.layers.data(name='pixel', shape=data_shape, dtype='float32')\n ids = infer(images, num_classes, use_cudnn=True if args.use_gpu else False)\n # data reader\n infer_reader = data_reader.inference(\n batch_size=args.batch_size,\n infer_images_dir=args.input_images_dir,\n infer_list_file=args.input_images_list,\n cycle=True if args.iterations > 0 else False,\n model=args.model)\n # prepare environment\n place = fluid.CPUPlace()\n if args.use_gpu:\n place = fluid.CUDAPlace(0)\n\n exe = fluid.Executor(place)\n exe.run(fluid.default_startup_program())\n\n # load dictionary\n dict_map = None\n if args.dict is not None and os.path.isfile(args.dict):\n dict_map = {}\n with open(args.dict) as dict_file:\n for i, word in enumerate(dict_file):\n dict_map[i] = word.strip()\n print(\"Loaded dict from %s\" % args.dict)\n\n # load init model\n model_dir = args.model_path\n model_file_name = None\n if not os.path.isdir(args.model_path):\n model_dir = os.path.dirname(args.model_path)\n model_file_name = os.path.basename(args.model_path)\n fluid.io.load_params(exe, dirname=model_dir, filename=model_file_name)\n print(\"Init model from: %s.\" % args.model_path)\n\n batch_times = []\n iters = 0\n for data in infer_reader():\n feed_dict = get_feeder_data(data, place)\n if args.iterations > 0 and iters == args.iterations + args.skip_batch_num:\n break\n if iters < args.skip_batch_num:\n print(\"Warm-up itaration\")\n if iters == args.skip_batch_num:\n profiler.reset_profiler()\n\n start = time.time()\n result = exe.run(fluid.default_main_program(),\n feed=feed_dict,\n fetch_list=[ids],\n return_numpy=False)\n indexes = prune(np.array(result[0]).flatten(), 0, 1)\n batch_time = time.time() - start\n fps = args.batch_size / batch_time\n batch_times.append(batch_time)\n if dict_map is not None:\n print(\"Iteration %d, latency: %.5f s, fps: %f, result: %s\" % (\n iters,\n batch_time,\n fps,\n [dict_map[index] for index in indexes], ))\n else:\n print(\"Iteration %d, latency: %.5f s, fps: %f, result: %s\" % (\n iters,\n batch_time,\n fps,\n indexes, ))\n\n iters += 1\n\n latencies = batch_times[args.skip_batch_num:]\n latency_avg = np.average(latencies)\n latency_pc99 = np.percentile(latencies, 99)\n fpses = np.divide(args.batch_size, latencies)\n fps_avg = np.average(fpses)\n fps_pc99 = 
np.percentile(fpses, 1)\n\n # Benchmark output\n print('\\nTotal examples (incl. warm-up): %d' % (iters * args.batch_size))\n print('average latency: %.5f s, 99pc latency: %.5f s' % (latency_avg,\n latency_pc99))\n print('average fps: %.5f, fps for 99pc latency: %.5f' % (fps_avg, fps_pc99))\n\n\ndef prune(words, sos, eos):\n \"\"\"Remove unused tokens in prediction result.\"\"\"\n start_index = 0\n end_index = len(words)\n if sos in words:\n start_index = np.where(words == sos)[0][0] + 1\n if eos in words:\n end_index = np.where(words == eos)[0][0]\n return words[start_index:end_index]\n\n\ndef main():\n args = parser.parse_args()\n print_arguments(args)\n if args.profile:\n if args.use_gpu:\n with profiler.cuda_profiler(\"cuda_profiler.txt\", 'csv') as nvprof:\n inference(args)\n else:\n with profiler.profiler(\"CPU\", sorted_key='total') as cpuprof:\n inference(args)\n else:\n inference(args)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "PaddleCV/ocr_recognition/infer.py"}, {"content": "\"\"\"Contains common utility functions.\"\"\"\n# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserve.\n#\n#Licensed under the Apache License, Version 2.0 (the \"License\");\n#you may not use this file except in compliance with the License.\n#You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n#Unless required by applicable law or agreed to in writing, software\n#distributed under the License is distributed on an \"AS IS\" BASIS,\n#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#See the License for the specific language governing permissions and\n#limitations under the License.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nimport distutils.util\nimport numpy as np\nfrom paddle.fluid import core\nimport paddle.fluid as fluid\nimport six\n\n\ndef print_arguments(args):\n \"\"\"Print argparse's arguments.\n\n Usage:\n\n .. code-block:: python\n\n parser = argparse.ArgumentParser()\n parser.add_argument(\"name\", default=\"Jonh\", type=str, help=\"User name.\")\n args = parser.parse_args()\n print_arguments(args)\n\n :param args: Input argparse.Namespace for printing.\n :type args: argparse.Namespace\n \"\"\"\n print(\"----------- Configuration Arguments -----------\")\n for arg, value in sorted(six.iteritems(vars(args))):\n print(\"%s: %s\" % (arg, value))\n print(\"------------------------------------------------\")\n\n\ndef add_arguments(argname, type, default, help, argparser, **kwargs):\n \"\"\"Add argparse's argument.\n\n Usage:\n\n .. 
code-block:: python\n\n parser = argparse.ArgumentParser()\n add_argument(\"name\", str, \"Jonh\", \"User name.\", parser)\n args = parser.parse_args()\n \"\"\"\n type = distutils.util.strtobool if type == bool else type\n argparser.add_argument(\n \"--\" + argname,\n default=default,\n type=type,\n help=help + ' Default: %(default)s.',\n **kwargs)\n\n\ndef to_lodtensor(data, place):\n seq_lens = [len(seq) for seq in data]\n cur_len = 0\n lod = [cur_len]\n for l in seq_lens:\n cur_len += l\n lod.append(cur_len)\n flattened_data = np.concatenate(data, axis=0).astype(\"int32\")\n flattened_data = flattened_data.reshape([len(flattened_data), 1])\n res = core.LoDTensor()\n res.set(flattened_data, place)\n res.set_lod([lod])\n return res\n\n\ndef get_ctc_feeder_data(data, place, need_label=True):\n pixel_tensor = core.LoDTensor()\n pixel_data = None\n pixel_data = np.concatenate(\n list(map(lambda x: x[0][np.newaxis, :], data)),\n axis=0).astype(\"float32\")\n pixel_tensor.set(pixel_data, place)\n label_tensor = to_lodtensor(list(map(lambda x: x[1], data)), place)\n if need_label:\n return {\"pixel\": pixel_tensor, \"label\": label_tensor}\n else:\n return {\"pixel\": pixel_tensor}\n\n\ndef get_ctc_feeder_for_infer(data, place):\n return get_ctc_feeder_data(data, place, need_label=False)\n\n\ndef get_attention_feeder_data(data, place, need_label=True):\n pixel_tensor = core.LoDTensor()\n pixel_data = None\n pixel_data = np.concatenate(\n list(map(lambda x: x[0][np.newaxis, :], data)),\n axis=0).astype(\"float32\")\n pixel_tensor.set(pixel_data, place)\n label_in_tensor = to_lodtensor(list(map(lambda x: x[1], data)), place)\n label_out_tensor = to_lodtensor(list(map(lambda x: x[2], data)), place)\n if need_label:\n return {\n \"pixel\": pixel_tensor,\n \"label_in\": label_in_tensor,\n \"label_out\": label_out_tensor\n }\n else:\n return {\"pixel\": pixel_tensor}\n\n\ndef get_attention_feeder_for_infer(data, place):\n batch_size = len(data)\n init_ids_data = np.array([0 for _ in range(batch_size)], dtype='int64')\n init_scores_data = np.array(\n [1. for _ in range(batch_size)], dtype='float32')\n init_ids_data = init_ids_data.reshape((batch_size, 1))\n init_scores_data = init_scores_data.reshape((batch_size, 1))\n init_recursive_seq_lens = [1] * batch_size\n init_recursive_seq_lens = [init_recursive_seq_lens, init_recursive_seq_lens]\n init_ids = fluid.create_lod_tensor(init_ids_data, init_recursive_seq_lens,\n place)\n init_scores = fluid.create_lod_tensor(init_scores_data,\n init_recursive_seq_lens, place)\n\n pixel_tensor = core.LoDTensor()\n pixel_data = None\n pixel_data = np.concatenate(\n list(map(lambda x: x[0][np.newaxis, :], data)),\n axis=0).astype(\"float32\")\n pixel_tensor.set(pixel_data, place)\n return {\n \"pixel\": pixel_tensor,\n \"init_ids\": init_ids,\n \"init_scores\": init_scores\n }\n", "path": "PaddleCV/ocr_recognition/utility.py"}]}
| 3,742 | 722 |
gh_patches_debug_13410
|
rasdani/github-patches
|
git_diff
|
plotly__plotly.py-958
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Not all dependencies are listed
Would it be possible to add pandas (and numpy) to the list of dependencies? They're both imported in various places but not listed. This occasionally causes problems for packages importing plotly, since `pip` may not install all dependencies.
Xref: https://github.com/deeptools/deepTools/issues/669
--- END ISSUE ---
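The accepted patch further below takes the optional-import route, guarding the pandas-dependent module, rather than declaring pandas as a hard requirement. For comparison, here is a minimal, hypothetical sketch of the packaging-side alternative the issue asks about; plotly's actual `setup.py` is not part of this record, so every name in the sketch is illustrative.

```python
# Hypothetical packaging sketch, not plotly's real setup.py.
from setuptools import setup, find_packages

setup(
    name="example-package",      # illustrative project name
    version="0.0.1",
    packages=find_packages(),
    install_requires=[
        "numpy",                 # imported unconditionally, so a hard requirement
        "pandas",                # the patch below instead keeps this optional
    ],
)
```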
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plotly/figure_factory/__init__.py`
Content:
```
1 from __future__ import absolute_import
2
3 # Require that numpy exists for figure_factory
4 import numpy
5
6 from plotly.figure_factory._2d_density import create_2d_density
7 from plotly.figure_factory._annotated_heatmap import create_annotated_heatmap
8 from plotly.figure_factory._bullet import create_bullet
9 from plotly.figure_factory._candlestick import create_candlestick
10 from plotly.figure_factory._dendrogram import create_dendrogram
11 from plotly.figure_factory._distplot import create_distplot
12 from plotly.figure_factory._facet_grid import create_facet_grid
13 from plotly.figure_factory._gantt import create_gantt
14 from plotly.figure_factory._ohlc import create_ohlc
15 from plotly.figure_factory._quiver import create_quiver
16 from plotly.figure_factory._scatterplot import create_scatterplotmatrix
17 from plotly.figure_factory._streamline import create_streamline
18 from plotly.figure_factory._table import create_table
19 from plotly.figure_factory._trisurf import create_trisurf
20 from plotly.figure_factory._violin import create_violin
21 from plotly.figure_factory._county_choropleth import create_choropleth
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plotly/figure_factory/__init__.py b/plotly/figure_factory/__init__.py
--- a/plotly/figure_factory/__init__.py
+++ b/plotly/figure_factory/__init__.py
@@ -1,5 +1,7 @@
from __future__ import absolute_import
+from plotly import optional_imports
+
# Require that numpy exists for figure_factory
import numpy
@@ -18,4 +20,5 @@
from plotly.figure_factory._table import create_table
from plotly.figure_factory._trisurf import create_trisurf
from plotly.figure_factory._violin import create_violin
-from plotly.figure_factory._county_choropleth import create_choropleth
\ No newline at end of file
+if optional_imports.get_module('pandas') is not None:
+ from plotly.figure_factory._county_choropleth import create_choropleth
|
{"golden_diff": "diff --git a/plotly/figure_factory/__init__.py b/plotly/figure_factory/__init__.py\n--- a/plotly/figure_factory/__init__.py\n+++ b/plotly/figure_factory/__init__.py\n@@ -1,5 +1,7 @@\n from __future__ import absolute_import\n \n+from plotly import optional_imports\n+\n # Require that numpy exists for figure_factory\n import numpy\n \n@@ -18,4 +20,5 @@\n from plotly.figure_factory._table import create_table\n from plotly.figure_factory._trisurf import create_trisurf\n from plotly.figure_factory._violin import create_violin\n-from plotly.figure_factory._county_choropleth import create_choropleth\n\\ No newline at end of file\n+if optional_imports.get_module('pandas') is not None:\n+ from plotly.figure_factory._county_choropleth import create_choropleth\n", "issue": "Not all dependecies are listed\nWould it be possible to add pandas (and numpy) to the list of dependencies? They're both imported in various places but not listed. This occasionally causes problems for packages importing plotly, since `pip` may not install all dependencies.\r\n\r\nXref: https://github.com/deeptools/deepTools/issues/669\nNot all dependecies are listed\nWould it be possible to add pandas (and numpy) to the list of dependencies? They're both imported in various places but not listed. This occasionally causes problems for packages importing plotly, since `pip` may not install all dependencies.\r\n\r\nXref: https://github.com/deeptools/deepTools/issues/669\n", "before_files": [{"content": "from __future__ import absolute_import\n\n# Require that numpy exists for figure_factory\nimport numpy\n\nfrom plotly.figure_factory._2d_density import create_2d_density\nfrom plotly.figure_factory._annotated_heatmap import create_annotated_heatmap\nfrom plotly.figure_factory._bullet import create_bullet\nfrom plotly.figure_factory._candlestick import create_candlestick\nfrom plotly.figure_factory._dendrogram import create_dendrogram\nfrom plotly.figure_factory._distplot import create_distplot\nfrom plotly.figure_factory._facet_grid import create_facet_grid\nfrom plotly.figure_factory._gantt import create_gantt\nfrom plotly.figure_factory._ohlc import create_ohlc\nfrom plotly.figure_factory._quiver import create_quiver\nfrom plotly.figure_factory._scatterplot import create_scatterplotmatrix\nfrom plotly.figure_factory._streamline import create_streamline\nfrom plotly.figure_factory._table import create_table\nfrom plotly.figure_factory._trisurf import create_trisurf\nfrom plotly.figure_factory._violin import create_violin\nfrom plotly.figure_factory._county_choropleth import create_choropleth", "path": "plotly/figure_factory/__init__.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nfrom plotly import optional_imports\n\n# Require that numpy exists for figure_factory\nimport numpy\n\nfrom plotly.figure_factory._2d_density import create_2d_density\nfrom plotly.figure_factory._annotated_heatmap import create_annotated_heatmap\nfrom plotly.figure_factory._bullet import create_bullet\nfrom plotly.figure_factory._candlestick import create_candlestick\nfrom plotly.figure_factory._dendrogram import create_dendrogram\nfrom plotly.figure_factory._distplot import create_distplot\nfrom plotly.figure_factory._facet_grid import create_facet_grid\nfrom plotly.figure_factory._gantt import create_gantt\nfrom plotly.figure_factory._ohlc import create_ohlc\nfrom plotly.figure_factory._quiver import create_quiver\nfrom plotly.figure_factory._scatterplot import create_scatterplotmatrix\nfrom 
plotly.figure_factory._streamline import create_streamline\nfrom plotly.figure_factory._table import create_table\nfrom plotly.figure_factory._trisurf import create_trisurf\nfrom plotly.figure_factory._violin import create_violin\nif optional_imports.get_module('pandas') is not None:\n from plotly.figure_factory._county_choropleth import create_choropleth\n", "path": "plotly/figure_factory/__init__.py"}]}
| 697 | 205 |
gh_patches_debug_21813
|
rasdani/github-patches
|
git_diff
|
openstates__openstates-scrapers-2507
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PR failing since at least 2018-09-06
PR has been failing since 2018-09-06
Based on automated runs it appears that PR has not run successfully in 3 days (2018-09-06).
```
00:38:11 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Carlos J. Rodr\u00edguez Mateo"}
00:38:11 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Roberto Rivera Ruiz De Porras"}
00:38:11 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Guillermo Miranda Rivera"}
00:38:11 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Antonio L. Soto Torres"}
00:38:11 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Jos\u00e9 J. P\u00e9rez Cordero"}
00:38:11 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Jos\u00e9 F. Aponte Hern\u00e1ndez"}
00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Nelson Del Valle Col\u00f3n"}
00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Evelyn V\u00e1zquez Nieves"}
00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Mar\u00eda de Lourdes Ramos Rivera"}
00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Migdalia Padilla Alvelo"}
00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Jes\u00fas Manuel Ortiz Gonz\u00e1lez"}
00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Jos\u00e9 L. Dalmau Santiago"}
00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Mar\u00eda M. Charbonier Laureano"}
00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Juan O. Morales Rodr\u00edguez"}
00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Angel J. Mart\u00ednez Santiago"}
00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Abel Nazario Qui\u00f1ones"}
00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Axel (Chino) Roque Gracia"}
00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{"name": "Jorge Navarro Su\u00e1rez"}
loaded Open States pupa settings...
pr (scrape, import)
people: {}
committees: {}
bills: {}
import jurisdictions...
import organizations...
import people...
import posts...
import memberships...
Traceback (most recent call last):
File "/opt/openstates/venv-pupa//bin/pupa", line 11, in <module>
load_entry_point('pupa', 'console_scripts', 'pupa')()
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/__main__.py", line 68, in main
subcommands[args.subcommand].handle(args, other)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 260, in handle
return self.do_handle(args, other, juris)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 307, in do_handle
report['import'] = self.do_import(juris, args)
File "/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py", line 211, in do_import
report.update(membership_importer.import_directory(datadir))
File "/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py", line 196, in import_directory
return self.import_data(json_stream())
File "/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py", line 233, in import_data
obj_id, what = self.import_item(data)
File "/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py", line 254, in import_item
data = self.prepare_for_db(data)
File "/opt/openstates/venv-pupa/src/pupa/pupa/importers/memberships.py", line 50, in prepare_for_db
data['post_id'] = self.post_importer.resolve_json_id(data['post_id'])
File "/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py", line 171, in resolve_json_id
raise UnresolvedIdError(errmsg)
pupa.exceptions.UnresolvedIdError: cannot resolve pseudo id to Post: ~{"label": "At-Large", "organization__classification": "lower"}
```
Visit http://bobsled.openstates.org for more info.
--- END ISSUE ---
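The key line in the traceback is the unresolved pseudo id `~{"label": "At-Large", "organization__classification": "lower"}`: pupa's membership importer looks for a Post labeled "At-Large" on the lower chamber, and the jurisdiction defines no such post (note the `# TODO: add at large` in the file below). As a rough, self-contained sketch of the kind of definition the resolver expects, using the same `add_post` API that already appears in `openstates/pr/__init__.py` (the role name and division id here are illustrative assumptions):

```python
# Sketch only: mirrors the add_post pattern used in openstates/pr/__init__.py;
# the role name and division id are illustrative assumptions.
from pupa.scrape import Organization

lower = Organization("House", classification="lower")
lower.add_post(
    label="At-Large",            # the label the pseudo id is asking for
    role="Representative",       # assumed role name
    division_id="ocd-division/country:us/territory:pr/sldl",
)
```

The accepted patch below adds an at-large post of exactly this shape to both chambers.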
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openstates/pr/__init__.py`
Content:
```
1 from pupa.scrape import Jurisdiction, Organization
2 from .people import PRPersonScraper
3 from .committees import PRCommitteeScraper
4 from .bills import PRBillScraper
5
6 settings = dict(SCRAPELIB_TIMEOUT=300)
7
8
9 class PuertoRico(Jurisdiction):
10 division_id = "ocd-division/country:us/territory:pr"
11 classification = "government"
12 name = "Puerto Rico"
13 url = "http://www.oslpr.org/"
14 scrapers = {
15 'people': PRPersonScraper,
16 'committees': PRCommitteeScraper,
17 'bills': PRBillScraper,
18 }
19 legislative_sessions = [
20 {
21 "_scraped_name": "2009-2012",
22 "identifier": "2009-2012",
23 "name": "2009-2012 Session"
24 },
25 {
26 "_scraped_name": "2013-2016",
27 "identifier": "2013-2016",
28 "name": "2013-2016 Session"
29 },
30 {
31 "_scraped_name": "2017-2020",
32 "identifier": "2017-2020",
33 "name": "2017-2020 Session",
34 "start_date": "2017-01-02",
35 "end_date": "2021-01-01",
36 }
37 ]
38 ignored_scraped_sessions = [
39 "2005-2008",
40 "2001-2004",
41 "1997-2000",
42 "1993-1996"
43 ]
44
45 def get_organizations(self):
46 legislature_name = "Legislative Assembly of Puerto Rico"
47 lower_chamber_name = "House"
48 lower_title = "Senator"
49 upper_chamber_name = "Senate"
50 upper_title = "Senator"
51
52 legislature = Organization(name=legislature_name,
53 classification="legislature")
54 upper = Organization(upper_chamber_name, classification='upper',
55 parent_id=legislature._id)
56 lower = Organization(lower_chamber_name, classification='lower',
57 parent_id=legislature._id)
58
59 # 8 districts w/ 2 members, + 11 at larg
60 for i, d in enumerate(('I', 'II', 'III', 'IV', 'V', 'VI', 'VII', 'VIII')):
61 upper.add_post(label=d, role=upper_title,
62 division_id='{}/sldu:{}'.format(self.division_id, i + 1))
63
64 # lower house is 40 seats, + 11 at large
65 for n in range(1, 41):
66 lower.add_post(
67 label=str(n), role=lower_title,
68 division_id='{}/sldl:{}'.format(self.division_id, n))
69
70 # TODO: add at large
71
72 yield Organization(name='Office of the Governor', classification='executive')
73 yield legislature
74 yield upper
75 yield lower
76
77 def get_session_list(self):
78 from openstates.utils import url_xpath
79 # this URL should work even for future sessions
80 return url_xpath('http://www.oslpr.org/legislatura/tl2013/buscar_2013.asp',
81 '//select[@name="URL"]/option/text()')
82
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openstates/pr/__init__.py b/openstates/pr/__init__.py
--- a/openstates/pr/__init__.py
+++ b/openstates/pr/__init__.py
@@ -60,14 +60,16 @@
for i, d in enumerate(('I', 'II', 'III', 'IV', 'V', 'VI', 'VII', 'VIII')):
upper.add_post(label=d, role=upper_title,
division_id='{}/sldu:{}'.format(self.division_id, i + 1))
+ upper.add_post(label='At-Large', role=upper_title,
+ division_id='{}/sldu'.format(self.division_id))
# lower house is 40 seats, + 11 at large
for n in range(1, 41):
lower.add_post(
label=str(n), role=lower_title,
division_id='{}/sldl:{}'.format(self.division_id, n))
-
- # TODO: add at large
+ lower.add_post(label='At-Large', role=lower_title,
+ division_id='{}/sldl'.format(self.division_id))
yield Organization(name='Office of the Governor', classification='executive')
yield legislature
|
{"golden_diff": "diff --git a/openstates/pr/__init__.py b/openstates/pr/__init__.py\n--- a/openstates/pr/__init__.py\n+++ b/openstates/pr/__init__.py\n@@ -60,14 +60,16 @@\n for i, d in enumerate(('I', 'II', 'III', 'IV', 'V', 'VI', 'VII', 'VIII')):\n upper.add_post(label=d, role=upper_title,\n division_id='{}/sldu:{}'.format(self.division_id, i + 1))\n+ upper.add_post(label='At-Large', role=upper_title,\n+ division_id='{}/sldu'.format(self.division_id))\n \n # lower house is 40 seats, + 11 at large\n for n in range(1, 41):\n lower.add_post(\n label=str(n), role=lower_title,\n division_id='{}/sldl:{}'.format(self.division_id, n))\n-\n- # TODO: add at large\n+ lower.add_post(label='At-Large', role=lower_title,\n+ division_id='{}/sldl'.format(self.division_id))\n \n yield Organization(name='Office of the Governor', classification='executive')\n yield legislature\n", "issue": "PR failing since at least 2018-09-06\nPR has been failing since 2018-09-06\n\nBased on automated runs it appears that PR has not run successfully in 3 days (2018-09-06).\n\n\n```\n 00:38:11 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Carlos J. Rodr\\u00edguez Mateo\"}\n00:38:11 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Roberto Rivera Ruiz De Porras\"}\n00:38:11 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Guillermo Miranda Rivera\"}\n00:38:11 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Antonio L. Soto Torres\"}\n00:38:11 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Jos\\u00e9 J. P\\u00e9rez Cordero\"}\n00:38:11 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Jos\\u00e9 F. Aponte Hern\\u00e1ndez\"}\n00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Nelson Del Valle Col\\u00f3n\"}\n00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Evelyn V\\u00e1zquez Nieves\"}\n00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Mar\\u00eda de Lourdes Ramos Rivera\"}\n00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Migdalia Padilla Alvelo\"}\n00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Jes\\u00fas Manuel Ortiz Gonz\\u00e1lez\"}\n00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Jos\\u00e9 L. Dalmau Santiago\"}\n00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Mar\\u00eda M. Charbonier Laureano\"}\n00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Juan O. Morales Rodr\\u00edguez\"}\n00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Angel J. 
Mart\\u00ednez Santiago\"}\n00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Abel Nazario Qui\\u00f1ones\"}\n00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Axel (Chino) Roque Gracia\"}\n00:38:12 ERROR pupa: cannot resolve pseudo id to Person: ~{\"name\": \"Jorge Navarro Su\\u00e1rez\"}\nloaded Open States pupa settings...\npr (scrape, import)\n people: {}\n committees: {}\n bills: {}\nimport jurisdictions...\nimport organizations...\nimport people...\nimport posts...\nimport memberships...\nTraceback (most recent call last):\n File \"/opt/openstates/venv-pupa//bin/pupa\", line 11, in <module>\n load_entry_point('pupa', 'console_scripts', 'pupa')()\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/__main__.py\", line 68, in main\n subcommands[args.subcommand].handle(args, other)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 260, in handle\n return self.do_handle(args, other, juris)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 307, in do_handle\n report['import'] = self.do_import(juris, args)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/cli/commands/update.py\", line 211, in do_import\n report.update(membership_importer.import_directory(datadir))\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py\", line 196, in import_directory\n return self.import_data(json_stream())\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py\", line 233, in import_data\n obj_id, what = self.import_item(data)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py\", line 254, in import_item\n data = self.prepare_for_db(data)\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/importers/memberships.py\", line 50, in prepare_for_db\n data['post_id'] = self.post_importer.resolve_json_id(data['post_id'])\n File \"/opt/openstates/venv-pupa/src/pupa/pupa/importers/base.py\", line 171, in resolve_json_id\n raise UnresolvedIdError(errmsg)\npupa.exceptions.UnresolvedIdError: cannot resolve pseudo id to Post: ~{\"label\": \"At-Large\", \"organization__classification\": \"lower\"}\n```\n\nVisit http://bobsled.openstates.org for more info.\n\n", "before_files": [{"content": "from pupa.scrape import Jurisdiction, Organization\nfrom .people import PRPersonScraper\nfrom .committees import PRCommitteeScraper\nfrom .bills import PRBillScraper\n\nsettings = dict(SCRAPELIB_TIMEOUT=300)\n\n\nclass PuertoRico(Jurisdiction):\n division_id = \"ocd-division/country:us/territory:pr\"\n classification = \"government\"\n name = \"Puerto Rico\"\n url = \"http://www.oslpr.org/\"\n scrapers = {\n 'people': PRPersonScraper,\n 'committees': PRCommitteeScraper,\n 'bills': PRBillScraper,\n }\n legislative_sessions = [\n {\n \"_scraped_name\": \"2009-2012\",\n \"identifier\": \"2009-2012\",\n \"name\": \"2009-2012 Session\"\n },\n {\n \"_scraped_name\": \"2013-2016\",\n \"identifier\": \"2013-2016\",\n \"name\": \"2013-2016 Session\"\n },\n {\n \"_scraped_name\": \"2017-2020\",\n \"identifier\": \"2017-2020\",\n \"name\": \"2017-2020 Session\",\n \"start_date\": \"2017-01-02\",\n \"end_date\": \"2021-01-01\",\n }\n ]\n ignored_scraped_sessions = [\n \"2005-2008\",\n \"2001-2004\",\n \"1997-2000\",\n \"1993-1996\"\n ]\n\n def get_organizations(self):\n legislature_name = \"Legislative Assembly of Puerto Rico\"\n lower_chamber_name = \"House\"\n lower_title = \"Senator\"\n upper_chamber_name = \"Senate\"\n upper_title = \"Senator\"\n\n legislature = Organization(name=legislature_name,\n 
classification=\"legislature\")\n upper = Organization(upper_chamber_name, classification='upper',\n parent_id=legislature._id)\n lower = Organization(lower_chamber_name, classification='lower',\n parent_id=legislature._id)\n\n # 8 districts w/ 2 members, + 11 at larg\n for i, d in enumerate(('I', 'II', 'III', 'IV', 'V', 'VI', 'VII', 'VIII')):\n upper.add_post(label=d, role=upper_title,\n division_id='{}/sldu:{}'.format(self.division_id, i + 1))\n\n # lower house is 40 seats, + 11 at large\n for n in range(1, 41):\n lower.add_post(\n label=str(n), role=lower_title,\n division_id='{}/sldl:{}'.format(self.division_id, n))\n\n # TODO: add at large\n\n yield Organization(name='Office of the Governor', classification='executive')\n yield legislature\n yield upper\n yield lower\n\n def get_session_list(self):\n from openstates.utils import url_xpath\n # this URL should work even for future sessions\n return url_xpath('http://www.oslpr.org/legislatura/tl2013/buscar_2013.asp',\n '//select[@name=\"URL\"]/option/text()')\n", "path": "openstates/pr/__init__.py"}], "after_files": [{"content": "from pupa.scrape import Jurisdiction, Organization\nfrom .people import PRPersonScraper\nfrom .committees import PRCommitteeScraper\nfrom .bills import PRBillScraper\n\nsettings = dict(SCRAPELIB_TIMEOUT=300)\n\n\nclass PuertoRico(Jurisdiction):\n division_id = \"ocd-division/country:us/territory:pr\"\n classification = \"government\"\n name = \"Puerto Rico\"\n url = \"http://www.oslpr.org/\"\n scrapers = {\n 'people': PRPersonScraper,\n 'committees': PRCommitteeScraper,\n 'bills': PRBillScraper,\n }\n legislative_sessions = [\n {\n \"_scraped_name\": \"2009-2012\",\n \"identifier\": \"2009-2012\",\n \"name\": \"2009-2012 Session\"\n },\n {\n \"_scraped_name\": \"2013-2016\",\n \"identifier\": \"2013-2016\",\n \"name\": \"2013-2016 Session\"\n },\n {\n \"_scraped_name\": \"2017-2020\",\n \"identifier\": \"2017-2020\",\n \"name\": \"2017-2020 Session\",\n \"start_date\": \"2017-01-02\",\n \"end_date\": \"2021-01-01\",\n }\n ]\n ignored_scraped_sessions = [\n \"2005-2008\",\n \"2001-2004\",\n \"1997-2000\",\n \"1993-1996\"\n ]\n\n def get_organizations(self):\n legislature_name = \"Legislative Assembly of Puerto Rico\"\n lower_chamber_name = \"House\"\n lower_title = \"Senator\"\n upper_chamber_name = \"Senate\"\n upper_title = \"Senator\"\n\n legislature = Organization(name=legislature_name,\n classification=\"legislature\")\n upper = Organization(upper_chamber_name, classification='upper',\n parent_id=legislature._id)\n lower = Organization(lower_chamber_name, classification='lower',\n parent_id=legislature._id)\n\n # 8 districts w/ 2 members, + 11 at larg\n for i, d in enumerate(('I', 'II', 'III', 'IV', 'V', 'VI', 'VII', 'VIII')):\n upper.add_post(label=d, role=upper_title,\n division_id='{}/sldu:{}'.format(self.division_id, i + 1))\n upper.add_post(label='At-Large', role=upper_title,\n division_id='{}/sldu'.format(self.division_id))\n\n # lower house is 40 seats, + 11 at large\n for n in range(1, 41):\n lower.add_post(\n label=str(n), role=lower_title,\n division_id='{}/sldl:{}'.format(self.division_id, n))\n lower.add_post(label='At-Large', role=lower_title,\n division_id='{}/sldl'.format(self.division_id))\n\n yield Organization(name='Office of the Governor', classification='executive')\n yield legislature\n yield upper\n yield lower\n\n def get_session_list(self):\n from openstates.utils import url_xpath\n # this URL should work even for future sessions\n return 
url_xpath('http://www.oslpr.org/legislatura/tl2013/buscar_2013.asp',\n '//select[@name=\"URL\"]/option/text()')\n", "path": "openstates/pr/__init__.py"}]}
| 2,443 | 277 |
gh_patches_debug_40053
|
rasdani/github-patches
|
git_diff
|
getmoto__moto-242
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Multiple region support
We'd like to support multiple regions. For example, an SNS instance could pass messages to SQS instances located in different regions.
--- END ISSUE ---
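The patch below drops the module-level `ec2_backend` import in favor of `self.ec2_backend` on the response classes, which is what allows a response object to be wired to a backend for a specific region. A minimal, self-contained sketch of that general pattern follows; apart from the `ec2_backend` attribute shown in the diff, the names are made up for illustration and are not moto's actual internals.

```python
# Illustrative sketch of region-keyed backends; only the `ec2_backend`
# attribute name mirrors the patch below, the rest is made up.
class RegionBackend:
    def __init__(self, region_name):
        self.region_name = region_name
        self.vpcs = {}              # per-region state lives on its own backend


backends = {}                       # one backend instance per region


def get_backend(region_name):
    return backends.setdefault(region_name, RegionBackend(region_name))


class ResponseSketch:
    def __init__(self, region_name):
        self.region = region_name   # moto would derive this from the request

    @property
    def ec2_backend(self):
        return get_backend(self.region)


# Two responses in different regions see independent state.
us, eu = ResponseSketch("us-east-1"), ResponseSketch("eu-west-1")
assert us.ec2_backend is not eu.ec2_backend
```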
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `moto/ec2/responses/vpcs.py`
Content:
```
1 from __future__ import unicode_literals
2 from jinja2 import Template
3
4 from moto.core.responses import BaseResponse
5 from moto.ec2.models import ec2_backend
6 from moto.ec2.utils import filters_from_querystring, vpc_ids_from_querystring
7
8
9 class VPCs(BaseResponse):
10 def create_vpc(self):
11 cidr_block = self.querystring.get('CidrBlock')[0]
12 vpc = ec2_backend.create_vpc(cidr_block)
13 template = Template(CREATE_VPC_RESPONSE)
14 return template.render(vpc=vpc)
15
16 def delete_vpc(self):
17 vpc_id = self.querystring.get('VpcId')[0]
18 vpc = ec2_backend.delete_vpc(vpc_id)
19 template = Template(DELETE_VPC_RESPONSE)
20 return template.render(vpc=vpc)
21
22 def describe_vpcs(self):
23 vpc_ids = vpc_ids_from_querystring(self.querystring)
24 filters = filters_from_querystring(self.querystring)
25 vpcs = ec2_backend.get_all_vpcs(vpc_ids=vpc_ids, filters=filters)
26 template = Template(DESCRIBE_VPCS_RESPONSE)
27 return template.render(vpcs=vpcs)
28
29
30 CREATE_VPC_RESPONSE = """
31 <CreateVpcResponse xmlns="http://ec2.amazonaws.com/doc/2012-12-01/">
32 <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
33 <vpc>
34 <vpcId>{{ vpc.id }}</vpcId>
35 <state>pending</state>
36 <cidrBlock>{{ vpc.cidr_block }}</cidrBlock>
37 <dhcpOptionsId>dopt-1a2b3c4d2</dhcpOptionsId>
38 <instanceTenancy>default</instanceTenancy>
39 <tagSet>
40 {% for tag in vpc.get_tags() %}
41 <item>
42 <resourceId>{{ tag.resource_id }}</resourceId>
43 <resourceType>{{ tag.resource_type }}</resourceType>
44 <key>{{ tag.key }}</key>
45 <value>{{ tag.value }}</value>
46 </item>
47 {% endfor %}
48 </tagSet>
49 </vpc>
50 </CreateVpcResponse>"""
51
52 DESCRIBE_VPCS_RESPONSE = """
53 <DescribeVpcsResponse xmlns="http://ec2.amazonaws.com/doc/2012-12-01/">
54 <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
55 <vpcSet>
56 {% for vpc in vpcs %}
57 <item>
58 <vpcId>{{ vpc.id }}</vpcId>
59 <state>{{ vpc.state }}</state>
60 <cidrBlock>{{ vpc.cidr_block }}</cidrBlock>
61 <dhcpOptionsId>dopt-7a8b9c2d</dhcpOptionsId>
62 <instanceTenancy>default</instanceTenancy>
63 <tagSet>
64 {% for tag in vpc.get_tags() %}
65 <item>
66 <resourceId>{{ tag.resource_id }}</resourceId>
67 <resourceType>{{ tag.resource_type }}</resourceType>
68 <key>{{ tag.key }}</key>
69 <value>{{ tag.value }}</value>
70 </item>
71 {% endfor %}
72 </tagSet>
73 </item>
74 {% endfor %}
75 </vpcSet>
76 </DescribeVpcsResponse>"""
77
78 DELETE_VPC_RESPONSE = """
79 <DeleteVpcResponse xmlns="http://ec2.amazonaws.com/doc/2012-12-01/">
80 <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
81 <return>true</return>
82 </DeleteVpcResponse>
83 """
84
```
Path: `moto/ec2/responses/subnets.py`
Content:
```
1 from __future__ import unicode_literals
2 from jinja2 import Template
3
4 from moto.core.responses import BaseResponse
5 from moto.ec2.models import ec2_backend
6 from moto.ec2.utils import filters_from_querystring
7
8
9 class Subnets(BaseResponse):
10 def create_subnet(self):
11 vpc_id = self.querystring.get('VpcId')[0]
12 cidr_block = self.querystring.get('CidrBlock')[0]
13 subnet = ec2_backend.create_subnet(vpc_id, cidr_block)
14 template = Template(CREATE_SUBNET_RESPONSE)
15 return template.render(subnet=subnet)
16
17 def delete_subnet(self):
18 subnet_id = self.querystring.get('SubnetId')[0]
19 subnet = ec2_backend.delete_subnet(subnet_id)
20 template = Template(DELETE_SUBNET_RESPONSE)
21 return template.render(subnet=subnet)
22
23 def describe_subnets(self):
24 filters = filters_from_querystring(self.querystring)
25 subnets = ec2_backend.get_all_subnets(filters)
26 template = Template(DESCRIBE_SUBNETS_RESPONSE)
27 return template.render(subnets=subnets)
28
29
30 CREATE_SUBNET_RESPONSE = """
31 <CreateSubnetResponse xmlns="http://ec2.amazonaws.com/doc/2012-12-01/">
32 <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
33 <subnet>
34 <subnetId>{{ subnet.id }}</subnetId>
35 <state>pending</state>
36 <vpcId>{{ subnet.vpc_id }}</vpcId>
37 <cidrBlock>{{ subnet.cidr_block }}</cidrBlock>
38 <availableIpAddressCount>251</availableIpAddressCount>
39 <availabilityZone>us-east-1a</availabilityZone>
40 <tagSet>
41 {% for tag in subnet.get_tags() %}
42 <item>
43 <resourceId>{{ tag.resource_id }}</resourceId>
44 <resourceType>{{ tag.resource_type }}</resourceType>
45 <key>{{ tag.key }}</key>
46 <value>{{ tag.value }}</value>
47 </item>
48 {% endfor %}
49 </tagSet>
50 </subnet>
51 </CreateSubnetResponse>"""
52
53 DELETE_SUBNET_RESPONSE = """
54 <DeleteSubnetResponse xmlns="http://ec2.amazonaws.com/doc/2012-12-01/">
55 <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
56 <return>true</return>
57 </DeleteSubnetResponse>"""
58
59 DESCRIBE_SUBNETS_RESPONSE = """
60 <DescribeSubnetsResponse xmlns="http://ec2.amazonaws.com/doc/2012-12-01/">
61 <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
62 <subnetSet>
63 {% for subnet in subnets %}
64 <item>
65 <subnetId>{{ subnet.id }}</subnetId>
66 <state>available</state>
67 <vpcId>{{ subnet.vpc_id }}</vpcId>
68 <cidrBlock>{{ subnet.cidr_block }}</cidrBlock>
69 <availableIpAddressCount>251</availableIpAddressCount>
70 <availabilityZone>us-east-1a</availabilityZone>
71 <tagSet>
72 {% for tag in subnet.get_tags() %}
73 <item>
74 <resourceId>{{ tag.resource_id }}</resourceId>
75 <resourceType>{{ tag.resource_type }}</resourceType>
76 <key>{{ tag.key }}</key>
77 <value>{{ tag.value }}</value>
78 </item>
79 {% endfor %}
80 </tagSet>
81 </item>
82 {% endfor %}
83 </subnetSet>
84 </DescribeSubnetsResponse>"""
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/moto/ec2/responses/subnets.py b/moto/ec2/responses/subnets.py
--- a/moto/ec2/responses/subnets.py
+++ b/moto/ec2/responses/subnets.py
@@ -2,7 +2,6 @@
from jinja2 import Template
from moto.core.responses import BaseResponse
-from moto.ec2.models import ec2_backend
from moto.ec2.utils import filters_from_querystring
@@ -10,19 +9,19 @@
def create_subnet(self):
vpc_id = self.querystring.get('VpcId')[0]
cidr_block = self.querystring.get('CidrBlock')[0]
- subnet = ec2_backend.create_subnet(vpc_id, cidr_block)
+ subnet = self.ec2_backend.create_subnet(vpc_id, cidr_block)
template = Template(CREATE_SUBNET_RESPONSE)
return template.render(subnet=subnet)
def delete_subnet(self):
subnet_id = self.querystring.get('SubnetId')[0]
- subnet = ec2_backend.delete_subnet(subnet_id)
+ subnet = self.ec2_backend.delete_subnet(subnet_id)
template = Template(DELETE_SUBNET_RESPONSE)
return template.render(subnet=subnet)
def describe_subnets(self):
filters = filters_from_querystring(self.querystring)
- subnets = ec2_backend.get_all_subnets(filters)
+ subnets = self.ec2_backend.get_all_subnets(filters)
template = Template(DESCRIBE_SUBNETS_RESPONSE)
return template.render(subnets=subnets)
diff --git a/moto/ec2/responses/vpcs.py b/moto/ec2/responses/vpcs.py
--- a/moto/ec2/responses/vpcs.py
+++ b/moto/ec2/responses/vpcs.py
@@ -2,27 +2,26 @@
from jinja2 import Template
from moto.core.responses import BaseResponse
-from moto.ec2.models import ec2_backend
from moto.ec2.utils import filters_from_querystring, vpc_ids_from_querystring
class VPCs(BaseResponse):
def create_vpc(self):
cidr_block = self.querystring.get('CidrBlock')[0]
- vpc = ec2_backend.create_vpc(cidr_block)
+ vpc = self.ec2_backend.create_vpc(cidr_block)
template = Template(CREATE_VPC_RESPONSE)
return template.render(vpc=vpc)
def delete_vpc(self):
vpc_id = self.querystring.get('VpcId')[0]
- vpc = ec2_backend.delete_vpc(vpc_id)
+ vpc = self.ec2_backend.delete_vpc(vpc_id)
template = Template(DELETE_VPC_RESPONSE)
return template.render(vpc=vpc)
def describe_vpcs(self):
vpc_ids = vpc_ids_from_querystring(self.querystring)
filters = filters_from_querystring(self.querystring)
- vpcs = ec2_backend.get_all_vpcs(vpc_ids=vpc_ids, filters=filters)
+ vpcs = self.ec2_backend.get_all_vpcs(vpc_ids=vpc_ids, filters=filters)
template = Template(DESCRIBE_VPCS_RESPONSE)
return template.render(vpcs=vpcs)
|
{"golden_diff": "diff --git a/moto/ec2/responses/subnets.py b/moto/ec2/responses/subnets.py\n--- a/moto/ec2/responses/subnets.py\n+++ b/moto/ec2/responses/subnets.py\n@@ -2,7 +2,6 @@\n from jinja2 import Template\n \n from moto.core.responses import BaseResponse\n-from moto.ec2.models import ec2_backend\n from moto.ec2.utils import filters_from_querystring\n \n \n@@ -10,19 +9,19 @@\n def create_subnet(self):\n vpc_id = self.querystring.get('VpcId')[0]\n cidr_block = self.querystring.get('CidrBlock')[0]\n- subnet = ec2_backend.create_subnet(vpc_id, cidr_block)\n+ subnet = self.ec2_backend.create_subnet(vpc_id, cidr_block)\n template = Template(CREATE_SUBNET_RESPONSE)\n return template.render(subnet=subnet)\n \n def delete_subnet(self):\n subnet_id = self.querystring.get('SubnetId')[0]\n- subnet = ec2_backend.delete_subnet(subnet_id)\n+ subnet = self.ec2_backend.delete_subnet(subnet_id)\n template = Template(DELETE_SUBNET_RESPONSE)\n return template.render(subnet=subnet)\n \n def describe_subnets(self):\n filters = filters_from_querystring(self.querystring)\n- subnets = ec2_backend.get_all_subnets(filters)\n+ subnets = self.ec2_backend.get_all_subnets(filters)\n template = Template(DESCRIBE_SUBNETS_RESPONSE)\n return template.render(subnets=subnets)\n \ndiff --git a/moto/ec2/responses/vpcs.py b/moto/ec2/responses/vpcs.py\n--- a/moto/ec2/responses/vpcs.py\n+++ b/moto/ec2/responses/vpcs.py\n@@ -2,27 +2,26 @@\n from jinja2 import Template\n \n from moto.core.responses import BaseResponse\n-from moto.ec2.models import ec2_backend\n from moto.ec2.utils import filters_from_querystring, vpc_ids_from_querystring\n \n \n class VPCs(BaseResponse):\n def create_vpc(self):\n cidr_block = self.querystring.get('CidrBlock')[0]\n- vpc = ec2_backend.create_vpc(cidr_block)\n+ vpc = self.ec2_backend.create_vpc(cidr_block)\n template = Template(CREATE_VPC_RESPONSE)\n return template.render(vpc=vpc)\n \n def delete_vpc(self):\n vpc_id = self.querystring.get('VpcId')[0]\n- vpc = ec2_backend.delete_vpc(vpc_id)\n+ vpc = self.ec2_backend.delete_vpc(vpc_id)\n template = Template(DELETE_VPC_RESPONSE)\n return template.render(vpc=vpc)\n \n def describe_vpcs(self):\n vpc_ids = vpc_ids_from_querystring(self.querystring)\n filters = filters_from_querystring(self.querystring)\n- vpcs = ec2_backend.get_all_vpcs(vpc_ids=vpc_ids, filters=filters)\n+ vpcs = self.ec2_backend.get_all_vpcs(vpc_ids=vpc_ids, filters=filters)\n template = Template(DESCRIBE_VPCS_RESPONSE)\n return template.render(vpcs=vpcs)\n", "issue": "Multiple region support\nWe'd like to support multiple regions. 
For example, an SNS instance could pass messages to SQS instances located in different regions.\n\n", "before_files": [{"content": "from __future__ import unicode_literals\nfrom jinja2 import Template\n\nfrom moto.core.responses import BaseResponse\nfrom moto.ec2.models import ec2_backend\nfrom moto.ec2.utils import filters_from_querystring, vpc_ids_from_querystring\n\n\nclass VPCs(BaseResponse):\n def create_vpc(self):\n cidr_block = self.querystring.get('CidrBlock')[0]\n vpc = ec2_backend.create_vpc(cidr_block)\n template = Template(CREATE_VPC_RESPONSE)\n return template.render(vpc=vpc)\n\n def delete_vpc(self):\n vpc_id = self.querystring.get('VpcId')[0]\n vpc = ec2_backend.delete_vpc(vpc_id)\n template = Template(DELETE_VPC_RESPONSE)\n return template.render(vpc=vpc)\n\n def describe_vpcs(self):\n vpc_ids = vpc_ids_from_querystring(self.querystring)\n filters = filters_from_querystring(self.querystring)\n vpcs = ec2_backend.get_all_vpcs(vpc_ids=vpc_ids, filters=filters)\n template = Template(DESCRIBE_VPCS_RESPONSE)\n return template.render(vpcs=vpcs)\n\n\nCREATE_VPC_RESPONSE = \"\"\"\n<CreateVpcResponse xmlns=\"http://ec2.amazonaws.com/doc/2012-12-01/\">\n <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>\n <vpc>\n <vpcId>{{ vpc.id }}</vpcId>\n <state>pending</state>\n <cidrBlock>{{ vpc.cidr_block }}</cidrBlock>\n <dhcpOptionsId>dopt-1a2b3c4d2</dhcpOptionsId>\n <instanceTenancy>default</instanceTenancy>\n <tagSet>\n {% for tag in vpc.get_tags() %}\n <item>\n <resourceId>{{ tag.resource_id }}</resourceId>\n <resourceType>{{ tag.resource_type }}</resourceType>\n <key>{{ tag.key }}</key>\n <value>{{ tag.value }}</value>\n </item>\n {% endfor %}\n </tagSet>\n </vpc>\n</CreateVpcResponse>\"\"\"\n\nDESCRIBE_VPCS_RESPONSE = \"\"\"\n<DescribeVpcsResponse xmlns=\"http://ec2.amazonaws.com/doc/2012-12-01/\">\n <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>\n <vpcSet>\n {% for vpc in vpcs %}\n <item>\n <vpcId>{{ vpc.id }}</vpcId>\n <state>{{ vpc.state }}</state>\n <cidrBlock>{{ vpc.cidr_block }}</cidrBlock>\n <dhcpOptionsId>dopt-7a8b9c2d</dhcpOptionsId>\n <instanceTenancy>default</instanceTenancy>\n <tagSet>\n {% for tag in vpc.get_tags() %}\n <item>\n <resourceId>{{ tag.resource_id }}</resourceId>\n <resourceType>{{ tag.resource_type }}</resourceType>\n <key>{{ tag.key }}</key>\n <value>{{ tag.value }}</value>\n </item>\n {% endfor %}\n </tagSet>\n </item>\n {% endfor %}\n </vpcSet>\n</DescribeVpcsResponse>\"\"\"\n\nDELETE_VPC_RESPONSE = \"\"\"\n<DeleteVpcResponse xmlns=\"http://ec2.amazonaws.com/doc/2012-12-01/\">\n <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>\n <return>true</return>\n</DeleteVpcResponse>\n\"\"\"\n", "path": "moto/ec2/responses/vpcs.py"}, {"content": "from __future__ import unicode_literals\nfrom jinja2 import Template\n\nfrom moto.core.responses import BaseResponse\nfrom moto.ec2.models import ec2_backend\nfrom moto.ec2.utils import filters_from_querystring\n\n\nclass Subnets(BaseResponse):\n def create_subnet(self):\n vpc_id = self.querystring.get('VpcId')[0]\n cidr_block = self.querystring.get('CidrBlock')[0]\n subnet = ec2_backend.create_subnet(vpc_id, cidr_block)\n template = Template(CREATE_SUBNET_RESPONSE)\n return template.render(subnet=subnet)\n\n def delete_subnet(self):\n subnet_id = self.querystring.get('SubnetId')[0]\n subnet = ec2_backend.delete_subnet(subnet_id)\n template = Template(DELETE_SUBNET_RESPONSE)\n return template.render(subnet=subnet)\n\n def describe_subnets(self):\n filters = 
filters_from_querystring(self.querystring)\n subnets = ec2_backend.get_all_subnets(filters)\n template = Template(DESCRIBE_SUBNETS_RESPONSE)\n return template.render(subnets=subnets)\n\n\nCREATE_SUBNET_RESPONSE = \"\"\"\n<CreateSubnetResponse xmlns=\"http://ec2.amazonaws.com/doc/2012-12-01/\">\n <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>\n <subnet>\n <subnetId>{{ subnet.id }}</subnetId>\n <state>pending</state>\n <vpcId>{{ subnet.vpc_id }}</vpcId>\n <cidrBlock>{{ subnet.cidr_block }}</cidrBlock>\n <availableIpAddressCount>251</availableIpAddressCount>\n <availabilityZone>us-east-1a</availabilityZone>\n <tagSet>\n {% for tag in subnet.get_tags() %}\n <item>\n <resourceId>{{ tag.resource_id }}</resourceId>\n <resourceType>{{ tag.resource_type }}</resourceType>\n <key>{{ tag.key }}</key>\n <value>{{ tag.value }}</value>\n </item>\n {% endfor %}\n </tagSet>\n </subnet>\n</CreateSubnetResponse>\"\"\"\n\nDELETE_SUBNET_RESPONSE = \"\"\"\n<DeleteSubnetResponse xmlns=\"http://ec2.amazonaws.com/doc/2012-12-01/\">\n <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>\n <return>true</return>\n</DeleteSubnetResponse>\"\"\"\n\nDESCRIBE_SUBNETS_RESPONSE = \"\"\"\n<DescribeSubnetsResponse xmlns=\"http://ec2.amazonaws.com/doc/2012-12-01/\">\n <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>\n <subnetSet>\n {% for subnet in subnets %}\n <item>\n <subnetId>{{ subnet.id }}</subnetId>\n <state>available</state>\n <vpcId>{{ subnet.vpc_id }}</vpcId>\n <cidrBlock>{{ subnet.cidr_block }}</cidrBlock>\n <availableIpAddressCount>251</availableIpAddressCount>\n <availabilityZone>us-east-1a</availabilityZone>\n <tagSet>\n {% for tag in subnet.get_tags() %}\n <item>\n <resourceId>{{ tag.resource_id }}</resourceId>\n <resourceType>{{ tag.resource_type }}</resourceType>\n <key>{{ tag.key }}</key>\n <value>{{ tag.value }}</value>\n </item>\n {% endfor %}\n </tagSet>\n </item>\n {% endfor %}\n </subnetSet>\n</DescribeSubnetsResponse>\"\"\"\n", "path": "moto/ec2/responses/subnets.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nfrom jinja2 import Template\n\nfrom moto.core.responses import BaseResponse\nfrom moto.ec2.utils import filters_from_querystring, vpc_ids_from_querystring\n\n\nclass VPCs(BaseResponse):\n def create_vpc(self):\n cidr_block = self.querystring.get('CidrBlock')[0]\n vpc = self.ec2_backend.create_vpc(cidr_block)\n template = Template(CREATE_VPC_RESPONSE)\n return template.render(vpc=vpc)\n\n def delete_vpc(self):\n vpc_id = self.querystring.get('VpcId')[0]\n vpc = self.ec2_backend.delete_vpc(vpc_id)\n template = Template(DELETE_VPC_RESPONSE)\n return template.render(vpc=vpc)\n\n def describe_vpcs(self):\n vpc_ids = vpc_ids_from_querystring(self.querystring)\n filters = filters_from_querystring(self.querystring)\n vpcs = self.ec2_backend.get_all_vpcs(vpc_ids=vpc_ids, filters=filters)\n template = Template(DESCRIBE_VPCS_RESPONSE)\n return template.render(vpcs=vpcs)\n\n\nCREATE_VPC_RESPONSE = \"\"\"\n<CreateVpcResponse xmlns=\"http://ec2.amazonaws.com/doc/2012-12-01/\">\n <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>\n <vpc>\n <vpcId>{{ vpc.id }}</vpcId>\n <state>pending</state>\n <cidrBlock>{{ vpc.cidr_block }}</cidrBlock>\n <dhcpOptionsId>dopt-1a2b3c4d2</dhcpOptionsId>\n <instanceTenancy>default</instanceTenancy>\n <tagSet>\n {% for tag in vpc.get_tags() %}\n <item>\n <resourceId>{{ tag.resource_id }}</resourceId>\n <resourceType>{{ tag.resource_type }}</resourceType>\n <key>{{ tag.key }}</key>\n <value>{{ tag.value }}</value>\n 
</item>\n {% endfor %}\n </tagSet>\n </vpc>\n</CreateVpcResponse>\"\"\"\n\nDESCRIBE_VPCS_RESPONSE = \"\"\"\n<DescribeVpcsResponse xmlns=\"http://ec2.amazonaws.com/doc/2012-12-01/\">\n <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>\n <vpcSet>\n {% for vpc in vpcs %}\n <item>\n <vpcId>{{ vpc.id }}</vpcId>\n <state>{{ vpc.state }}</state>\n <cidrBlock>{{ vpc.cidr_block }}</cidrBlock>\n <dhcpOptionsId>dopt-7a8b9c2d</dhcpOptionsId>\n <instanceTenancy>default</instanceTenancy>\n <tagSet>\n {% for tag in vpc.get_tags() %}\n <item>\n <resourceId>{{ tag.resource_id }}</resourceId>\n <resourceType>{{ tag.resource_type }}</resourceType>\n <key>{{ tag.key }}</key>\n <value>{{ tag.value }}</value>\n </item>\n {% endfor %}\n </tagSet>\n </item>\n {% endfor %}\n </vpcSet>\n</DescribeVpcsResponse>\"\"\"\n\nDELETE_VPC_RESPONSE = \"\"\"\n<DeleteVpcResponse xmlns=\"http://ec2.amazonaws.com/doc/2012-12-01/\">\n <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>\n <return>true</return>\n</DeleteVpcResponse>\n\"\"\"\n", "path": "moto/ec2/responses/vpcs.py"}, {"content": "from __future__ import unicode_literals\nfrom jinja2 import Template\n\nfrom moto.core.responses import BaseResponse\nfrom moto.ec2.utils import filters_from_querystring\n\n\nclass Subnets(BaseResponse):\n def create_subnet(self):\n vpc_id = self.querystring.get('VpcId')[0]\n cidr_block = self.querystring.get('CidrBlock')[0]\n subnet = self.ec2_backend.create_subnet(vpc_id, cidr_block)\n template = Template(CREATE_SUBNET_RESPONSE)\n return template.render(subnet=subnet)\n\n def delete_subnet(self):\n subnet_id = self.querystring.get('SubnetId')[0]\n subnet = self.ec2_backend.delete_subnet(subnet_id)\n template = Template(DELETE_SUBNET_RESPONSE)\n return template.render(subnet=subnet)\n\n def describe_subnets(self):\n filters = filters_from_querystring(self.querystring)\n subnets = self.ec2_backend.get_all_subnets(filters)\n template = Template(DESCRIBE_SUBNETS_RESPONSE)\n return template.render(subnets=subnets)\n\n\nCREATE_SUBNET_RESPONSE = \"\"\"\n<CreateSubnetResponse xmlns=\"http://ec2.amazonaws.com/doc/2012-12-01/\">\n <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>\n <subnet>\n <subnetId>{{ subnet.id }}</subnetId>\n <state>pending</state>\n <vpcId>{{ subnet.vpc_id }}</vpcId>\n <cidrBlock>{{ subnet.cidr_block }}</cidrBlock>\n <availableIpAddressCount>251</availableIpAddressCount>\n <availabilityZone>us-east-1a</availabilityZone>\n <tagSet>\n {% for tag in subnet.get_tags() %}\n <item>\n <resourceId>{{ tag.resource_id }}</resourceId>\n <resourceType>{{ tag.resource_type }}</resourceType>\n <key>{{ tag.key }}</key>\n <value>{{ tag.value }}</value>\n </item>\n {% endfor %}\n </tagSet>\n </subnet>\n</CreateSubnetResponse>\"\"\"\n\nDELETE_SUBNET_RESPONSE = \"\"\"\n<DeleteSubnetResponse xmlns=\"http://ec2.amazonaws.com/doc/2012-12-01/\">\n <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>\n <return>true</return>\n</DeleteSubnetResponse>\"\"\"\n\nDESCRIBE_SUBNETS_RESPONSE = \"\"\"\n<DescribeSubnetsResponse xmlns=\"http://ec2.amazonaws.com/doc/2012-12-01/\">\n <requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>\n <subnetSet>\n {% for subnet in subnets %}\n <item>\n <subnetId>{{ subnet.id }}</subnetId>\n <state>available</state>\n <vpcId>{{ subnet.vpc_id }}</vpcId>\n <cidrBlock>{{ subnet.cidr_block }}</cidrBlock>\n <availableIpAddressCount>251</availableIpAddressCount>\n <availabilityZone>us-east-1a</availabilityZone>\n <tagSet>\n {% for tag in subnet.get_tags() %}\n <item>\n <resourceId>{{ 
tag.resource_id }}</resourceId>\n <resourceType>{{ tag.resource_type }}</resourceType>\n <key>{{ tag.key }}</key>\n <value>{{ tag.value }}</value>\n </item>\n {% endfor %}\n </tagSet>\n </item>\n {% endfor %}\n </subnetSet>\n</DescribeSubnetsResponse>\"\"\"\n", "path": "moto/ec2/responses/subnets.py"}]}
| 2,294 | 701 |
gh_patches_debug_23590
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-7176
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cupyx.profiler benchmark implicitly converts from micro to milliseconds depending on object methods.
### Description
When reading the official CuPy documentation, a note on benchmarking says that it is important to use the built-in `cupyx.profiler` benchmark tool. However, this function returns an object of type `_PerfCaseResult`, which has several very sparsely documented methods. If one calls `print()` on the entire object, it is printed to the console with the runs reported in microseconds, which is fairly standard for benchmarking. What is also extremely important in benchmarking, though, is that the units stay consistent. If one accesses `cpu_times` or `gpu_times` directly on the object, the returned array contains the run-times in MILLIseconds. There is no warning or documentation of this implicit conversion; the official documentation reads:
```
cpu_times
Returns an array of CPU times of size n_repeat.
gpu_times
Returns an array of GPU times of size n_repeat.
```
### To Reproduce
```py
import cupy as cp
import sys
from helpers import *
from cupyx.profiler import benchmark
def generate_array(size, work_units): #Split this function up for timing the work only
rand = cp.random.default_rng() #This is the fast way of creating large arrays with cp
x = rand.integers(0, 100_000, (size, work_units)) #Create array
return x
def minmax(size, work_units):
x_min = cp.min(arr)
x_max = cp.max(arr)
y = (arr-x_min)/(x_max-x_min)
size = len(y) #Force execution
return size
if __name__ == '__main__':
sz = int(sys.argv[1])
wk = int(sys.argv[2])
arr = generate_array(sz, wk)
y = benchmark(minmax, (sz, wk), n_repeat=1, n_warmup=0)
print(y.cpu_times)
print(y)
```
### Installation
_No response_
### Environment
```
OS : Linux-5.15.0-52-generic-x86_64-with-glibc2.35
Python Version : 3.9.13
CuPy Version : 10.6.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 1.22.4
SciPy Version : 1.9.1
Cython Build Version : 0.29.30
Cython Runtime Version : None
CUDA Root : /home/joachim/anaconda3/envs/rps
nvcc PATH : None
CUDA Build Version : 11020
CUDA Driver Version : 11070
CUDA Runtime Version : 11050
cuBLAS Version : (available)
cuFFT Version : 10600
cuRAND Version : 10207
cuSOLVER Version : (11, 3, 2)
cuSPARSE Version : (available)
NVRTC Version : (11, 5)
Thrust Version : 101000
CUB Build Version : 101000
Jitify Build Version : 87d8ccd
cuDNN Build Version : None
cuDNN Version : None
NCCL Build Version : 21212
NCCL Runtime Version : 21403
cuTENSOR Version : None
cuSPARSELt Build Version : None
Device 0 Name : NVIDIA GeForce GTX 1660 SUPER
Device 0 Compute Capability : 75
Device 0 PCI Bus ID : 0000:01:00.0
```
### Additional Information
Printed to console:
python3 minmax.py 100 100
[0.00204923]
minmax : CPU: 2049.225 us GPU-0: 2057.504 us
--- END ISSUE ---
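The two printouts in the issue are consistent with a single scale factor: `print(y)` formats times via `_to_str_per_item`, which multiplies the stored values by `1e6` and labels the result `us` (see the `_time.py` listing below), while `y.cpu_times` returns the stored values unscaled. Working the reported numbers: 0.00204923 * 1e6 is about 2049.23, matching the "CPU: 2049.225 us" line up to rounding, which suggests the raw arrays hold plain seconds rather than a separate millisecond unit. A two-line check:

```python
# Reconciling the two printouts above: print(y) applies `t_us = t * 1e6`
# (see _time.py below) before labelling the value "us"; y.cpu_times is unscaled.
raw_cpu_time = 0.00204923       # value shown by print(y.cpu_times)
print(raw_cpu_time * 1e6)       # ~2049.23, i.e. the "CPU: 2049.225 us" line
```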
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cupyx/profiler/_time.py`
Content:
```
1 import math as _math
2 import time as _time
3
4 import numpy as _numpy
5
6 import cupy as _cupy
7 from cupy_backends.cuda.api import runtime
8
9
10 class _PerfCaseResult:
11 """ An obscure object encompassing timing results recorded by
12 :func:`~cupyx.profiler.benchmark`. Simple statistics can be obtained by
13 converting an instance of this class to a string.
14
15 .. warning::
16 This API is currently experimental and subject to change in future
17 releases.
18
19 """
20
21 def __init__(self, name, ts, devices):
22 assert ts.ndim == 2
23 assert ts.shape[0] == len(devices) + 1
24 assert ts.shape[1] > 0
25 self.name = name
26 self._ts = ts
27 self._devices = devices
28
29 def __repr__(self) -> str:
30 """ Returns a string representation of the object.
31
32 Returns:
33 str: A string representation of the object.
34 """
35 return self.to_str(show_gpu=True)
36
37 @property
38 def cpu_times(self):
39 """ Returns an array of CPU times of size ``n_repeat``. """
40 return self._ts[0]
41
42 @property
43 def gpu_times(self):
44 """ Returns an array of GPU times of size ``n_repeat``. """
45 return self._ts[1:]
46
47 @staticmethod
48 def _to_str_per_item(device_name, t):
49 assert t.ndim == 1
50 assert t.size > 0
51 t_us = t * 1e6
52
53 s = ' {}:{:9.03f} us'.format(device_name, t_us.mean())
54 if t.size > 1:
55 s += ' +/-{:6.03f} (min:{:9.03f} / max:{:9.03f}) us'.format(
56 t_us.std(), t_us.min(), t_us.max())
57 return s
58
59 def to_str(self, show_gpu=False):
60 results = [self._to_str_per_item('CPU', self._ts[0])]
61 if show_gpu:
62 for i, d in enumerate(self._devices):
63 results.append(
64 self._to_str_per_item('GPU-{}'.format(d),
65 self._ts[1 + i]))
66 return '{:<20s}:{}'.format(self.name, ' '.join(results))
67
68 def __str__(self):
69 return self.to_str(show_gpu=True)
70
71
72 def benchmark(
73 func, args=(), kwargs={}, n_repeat=10000, *,
74 name=None, n_warmup=10, max_duration=_math.inf, devices=None):
75 """ Timing utility for measuring time spent by both CPU and GPU.
76
77 This function is a very convenient helper for setting up a timing test. The
78 GPU time is properly recorded by synchronizing internal streams. As a
79 result, to time a multi-GPU function all participating devices must be
80 passed as the ``devices`` argument so that this helper knows which devices
81 to record. A simple example is given as follows:
82
83 .. code-block:: py
84
85 import cupy as cp
86 from cupyx.profiler import benchmark
87
88 def f(a, b):
89 return 3 * cp.sin(-a) * b
90
91 a = 0.5 - cp.random.random((100,))
92 b = cp.random.random((100,))
93 print(benchmark(f, (a, b), n_repeat=1000))
94
95
96 Args:
97 func (callable): a callable object to be timed.
98 args (tuple): positional argumens to be passed to the callable.
99 kwargs (dict): keyword arguments to be passed to the callable.
100 n_repeat (int): number of times the callable is called. Increasing
101 this value would improve the collected statistics at the cost
102 of longer test time.
103 name (str): the function name to be reported. If not given, the
104 callable's ``__name__`` attribute is used.
105 n_warmup (int): number of times the callable is called. The warm-up
106 runs are not timed.
107 max_duration (float): the maximum time (in seconds) that the entire
108 test can use. If the taken time is longer than this limit, the test
109 is stopped and the statistics collected up to the breakpoint is
110 reported.
111 devices (tuple): a tuple of device IDs (int) that will be timed during
112 the timing test. If not given, the current device is used.
113
114 Returns:
115 :class:`~cupyx.profiler._time._PerfCaseResult`:
116 an object collecting all test results.
117
118 """
119
120 if name is None:
121 name = func.__name__
122
123 if devices is None:
124 devices = (_cupy.cuda.get_device_id(),)
125
126 if not callable(func):
127 raise ValueError('`func` should be a callable object.')
128 if not isinstance(args, tuple):
129 raise ValueError('`args` should be of tuple type.')
130 if not isinstance(kwargs, dict):
131 raise ValueError('`kwargs` should be of dict type.')
132 if not isinstance(n_repeat, int):
133 raise ValueError('`n_repeat` should be an integer.')
134 if not isinstance(name, str):
135 raise ValueError('`name` should be a string.')
136 if not isinstance(n_warmup, int):
137 raise ValueError('`n_warmup` should be an integer.')
138 if not _numpy.isreal(max_duration):
139 raise ValueError('`max_duration` should be given in seconds')
140 if not isinstance(devices, tuple):
141 raise ValueError('`devices` should be of tuple type')
142
143 return _repeat(
144 func, args, kwargs, n_repeat, name, n_warmup, max_duration, devices)
145
146
147 def _repeat(
148 func, args, kwargs, n_repeat, name, n_warmup, max_duration, devices):
149
150 events_1 = []
151 events_2 = []
152
153 for i in devices:
154 prev_device = runtime.getDevice()
155 try:
156 runtime.setDevice(i)
157 events_1.append(_cupy.cuda.stream.Event())
158 events_2.append(_cupy.cuda.stream.Event())
159 finally:
160 runtime.setDevice(prev_device)
161
162 ev1 = _cupy.cuda.stream.Event()
163 ev2 = _cupy.cuda.stream.Event()
164
165 for i in range(n_warmup):
166 func(*args, **kwargs)
167
168 for event, device in zip(events_1, devices):
169 prev_device = runtime.getDevice()
170 try:
171 runtime.setDevice(device)
172 event.record()
173 finally:
174 runtime.setDevice(prev_device)
175 event.synchronize()
176
177 cpu_times = []
178 gpu_times = [[] for i in events_1]
179 duration = 0
180 for i in range(n_repeat):
181 for event, device in zip(events_1, devices):
182 prev_device = runtime.getDevice()
183 try:
184 runtime.setDevice(device)
185 event.record()
186 finally:
187 runtime.setDevice(prev_device)
188
189 t1 = _time.perf_counter()
190
191 func(*args, **kwargs)
192
193 t2 = _time.perf_counter()
194 cpu_time = t2 - t1
195 cpu_times.append(cpu_time)
196
197 for event, device in zip(events_2, devices):
198 prev_device = runtime.getDevice()
199 try:
200 runtime.setDevice(device)
201 event.record()
202 finally:
203 runtime.setDevice(prev_device)
204 for event, device in zip(events_2, devices):
205 prev_device = runtime.getDevice()
206 try:
207 runtime.setDevice(device)
208 event.synchronize()
209 finally:
210 runtime.setDevice(prev_device)
211 for i, (ev1, ev2) in enumerate(zip(events_1, events_2)):
212 gpu_time = _cupy.cuda.get_elapsed_time(ev1, ev2) * 1e-3
213 gpu_times[i].append(gpu_time)
214
215 duration += _time.perf_counter() - t1
216 if duration > max_duration:
217 break
218
219 ts = _numpy.asarray([cpu_times] + gpu_times, dtype=_numpy.float64)
220 return _PerfCaseResult(name, ts, devices=devices)
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/cupyx/profiler/_time.py b/cupyx/profiler/_time.py
--- a/cupyx/profiler/_time.py
+++ b/cupyx/profiler/_time.py
@@ -35,13 +35,23 @@
return self.to_str(show_gpu=True)
@property
- def cpu_times(self):
- """ Returns an array of CPU times of size ``n_repeat``. """
+ def cpu_times(self) -> _numpy.ndarray:
+ """A :class:`numpy.ndarray` of shape ``(n_repeat,)``, holding times spent
+ on CPU in milliseconds.
+
+ These values are delta of the host-side performance counter
+ (:func:`time.perf_counter`) between each repeat step.
+ """
return self._ts[0]
@property
- def gpu_times(self):
- """ Returns an array of GPU times of size ``n_repeat``. """
+ def gpu_times(self) -> _numpy.ndarray:
+ """A :class:`numpy.ndarray` of shape ``(len(devices), n_repeat)``,
+ holding times spent on GPU in milliseconds.
+
+ These values are measured using ``cudaEventElapsedTime`` with events
+ recoreded before/after each repeat step.
+ """
return self._ts[1:]
@staticmethod
@@ -159,9 +169,6 @@
finally:
runtime.setDevice(prev_device)
- ev1 = _cupy.cuda.stream.Event()
- ev2 = _cupy.cuda.stream.Event()
-
for i in range(n_warmup):
func(*args, **kwargs)
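
A short note on the patch above: besides deleting the two unused `ev1` / `ev2` events, it replaces the terse property docstrings with the array shapes and the unit the values are expressed in. A minimal sanity check of the documented shapes (a hypothetical snippet using only the public API) could be:

```py
# Hypothetical sanity check for the shapes documented by the patched docstrings.
import cupy as cp
from cupyx.profiler import benchmark

res = benchmark(cp.sum, (cp.arange(10),), n_repeat=5, n_warmup=1)
assert res.cpu_times.shape == (5,)    # (n_repeat,)
assert res.gpu_times.shape == (1, 5)  # (len(devices), n_repeat); one device here
print(res)
```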
|
{"golden_diff": "diff --git a/cupyx/profiler/_time.py b/cupyx/profiler/_time.py\n--- a/cupyx/profiler/_time.py\n+++ b/cupyx/profiler/_time.py\n@@ -35,13 +35,23 @@\n return self.to_str(show_gpu=True)\n \n @property\n- def cpu_times(self):\n- \"\"\" Returns an array of CPU times of size ``n_repeat``. \"\"\"\n+ def cpu_times(self) -> _numpy.ndarray:\n+ \"\"\"A :class:`numpy.ndarray` of shape ``(n_repeat,)``, holding times spent\n+ on CPU in milliseconds.\n+\n+ These values are delta of the host-side performance counter\n+ (:func:`time.perf_counter`) between each repeat step.\n+ \"\"\"\n return self._ts[0]\n \n @property\n- def gpu_times(self):\n- \"\"\" Returns an array of GPU times of size ``n_repeat``. \"\"\"\n+ def gpu_times(self) -> _numpy.ndarray:\n+ \"\"\"A :class:`numpy.ndarray` of shape ``(len(devices), n_repeat)``,\n+ holding times spent on GPU in milliseconds.\n+\n+ These values are measured using ``cudaEventElapsedTime`` with events\n+ recoreded before/after each repeat step.\n+ \"\"\"\n return self._ts[1:]\n \n @staticmethod\n@@ -159,9 +169,6 @@\n finally:\n runtime.setDevice(prev_device)\n \n- ev1 = _cupy.cuda.stream.Event()\n- ev2 = _cupy.cuda.stream.Event()\n-\n for i in range(n_warmup):\n func(*args, **kwargs)\n", "issue": "Cupyx.profiler benchmark implicitly converts from micro to milliseconds depending on object methods.\n### Description\r\n\r\nWhen reading the official CuPY documentation, a note on benchmarking says that it is important to use the built-in ```cupyx.profiler``` benchmark tool. However, this function returns a type ```_PerfCaseResult```. This object has several, very sparsely documented methods. If one calls ```print()``` on the entire object, the object is printed to console and runs are reported in microseconds. This is fairly standard for benchmarking. However, what is also extremely important in benchmarking is that the units stay consistent. If one wishes to access the ```cpu_times``` or ```gpu_times``` directly from the object, an array containing the run-times is printed containing the run-times in MILLIseconds. 
There are no warnings or documentation of this implicit conversion, as the official documentation is:\r\n\r\n```\r\ncpu_times\r\n\r\n Returns an array of CPU times of size n_repeat.\r\n\r\ngpu_times\r\n\r\n Returns an array of GPU times of size n_repeat.\r\n```\r\n\r\n### To Reproduce\r\n\r\n```py\r\nimport cupy as cp\r\nimport sys\r\nfrom helpers import *\r\nfrom cupyx.profiler import benchmark\r\n\r\ndef generate_array(size, work_units): #Split this function up for timing the work only\r\n rand = cp.random.default_rng() #This is the fast way of creating large arrays with cp\r\n x = rand.integers(0, 100_000, (size, work_units)) #Create array\r\n return x \r\n\r\n\r\ndef minmax(size, work_units):\r\n x_min = cp.min(arr)\r\n x_max = cp.max(arr)\r\n y = (arr-x_min)/(x_max-x_min)\r\n size = len(y) #Force execution\r\n return size\r\n \r\nif __name__ == '__main__':\r\n sz = int(sys.argv[1])\r\n wk = int(sys.argv[2])\r\n arr = generate_array(sz, wk) \r\n y = benchmark(minmax, (sz, wk), n_repeat=1, n_warmup=0)\r\n \r\n print(y.cpu_times)\r\n print(y)\r\n```\r\n\r\n\r\n### Installation\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\n```\r\nOS : Linux-5.15.0-52-generic-x86_64-with-glibc2.35\r\nPython Version : 3.9.13\r\nCuPy Version : 10.6.0\r\nCuPy Platform : NVIDIA CUDA\r\nNumPy Version : 1.22.4\r\nSciPy Version : 1.9.1\r\nCython Build Version : 0.29.30\r\nCython Runtime Version : None\r\nCUDA Root : /home/joachim/anaconda3/envs/rps\r\nnvcc PATH : None\r\nCUDA Build Version : 11020\r\nCUDA Driver Version : 11070\r\nCUDA Runtime Version : 11050\r\ncuBLAS Version : (available)\r\ncuFFT Version : 10600\r\ncuRAND Version : 10207\r\ncuSOLVER Version : (11, 3, 2)\r\ncuSPARSE Version : (available)\r\nNVRTC Version : (11, 5)\r\nThrust Version : 101000\r\nCUB Build Version : 101000\r\nJitify Build Version : 87d8ccd\r\ncuDNN Build Version : None\r\ncuDNN Version : None\r\nNCCL Build Version : 21212\r\nNCCL Runtime Version : 21403\r\ncuTENSOR Version : None\r\ncuSPARSELt Build Version : None\r\nDevice 0 Name : NVIDIA GeForce GTX 1660 SUPER\r\nDevice 0 Compute Capability : 75\r\nDevice 0 PCI Bus ID : 0000:01:00.0\r\n```\r\n\r\n\r\n### Additional Information\r\n\r\nPrinted to console:\r\n\r\npython3 minmax.py 100 100\r\n[0.00204923]\r\nminmax : CPU: 2049.225 us GPU-0: 2057.504 us\n", "before_files": [{"content": "import math as _math\nimport time as _time\n\nimport numpy as _numpy\n\nimport cupy as _cupy\nfrom cupy_backends.cuda.api import runtime\n\n\nclass _PerfCaseResult:\n \"\"\" An obscure object encompassing timing results recorded by\n :func:`~cupyx.profiler.benchmark`. Simple statistics can be obtained by\n converting an instance of this class to a string.\n\n .. warning::\n This API is currently experimental and subject to change in future\n releases.\n\n \"\"\"\n\n def __init__(self, name, ts, devices):\n assert ts.ndim == 2\n assert ts.shape[0] == len(devices) + 1\n assert ts.shape[1] > 0\n self.name = name\n self._ts = ts\n self._devices = devices\n\n def __repr__(self) -> str:\n \"\"\" Returns a string representation of the object.\n\n Returns:\n str: A string representation of the object.\n \"\"\"\n return self.to_str(show_gpu=True)\n\n @property\n def cpu_times(self):\n \"\"\" Returns an array of CPU times of size ``n_repeat``. \"\"\"\n return self._ts[0]\n\n @property\n def gpu_times(self):\n \"\"\" Returns an array of GPU times of size ``n_repeat``. 
\"\"\"\n return self._ts[1:]\n\n @staticmethod\n def _to_str_per_item(device_name, t):\n assert t.ndim == 1\n assert t.size > 0\n t_us = t * 1e6\n\n s = ' {}:{:9.03f} us'.format(device_name, t_us.mean())\n if t.size > 1:\n s += ' +/-{:6.03f} (min:{:9.03f} / max:{:9.03f}) us'.format(\n t_us.std(), t_us.min(), t_us.max())\n return s\n\n def to_str(self, show_gpu=False):\n results = [self._to_str_per_item('CPU', self._ts[0])]\n if show_gpu:\n for i, d in enumerate(self._devices):\n results.append(\n self._to_str_per_item('GPU-{}'.format(d),\n self._ts[1 + i]))\n return '{:<20s}:{}'.format(self.name, ' '.join(results))\n\n def __str__(self):\n return self.to_str(show_gpu=True)\n\n\ndef benchmark(\n func, args=(), kwargs={}, n_repeat=10000, *,\n name=None, n_warmup=10, max_duration=_math.inf, devices=None):\n \"\"\" Timing utility for measuring time spent by both CPU and GPU.\n\n This function is a very convenient helper for setting up a timing test. The\n GPU time is properly recorded by synchronizing internal streams. As a\n result, to time a multi-GPU function all participating devices must be\n passed as the ``devices`` argument so that this helper knows which devices\n to record. A simple example is given as follows:\n\n .. code-block:: py\n\n import cupy as cp\n from cupyx.profiler import benchmark\n\n def f(a, b):\n return 3 * cp.sin(-a) * b\n\n a = 0.5 - cp.random.random((100,))\n b = cp.random.random((100,))\n print(benchmark(f, (a, b), n_repeat=1000))\n\n\n Args:\n func (callable): a callable object to be timed.\n args (tuple): positional argumens to be passed to the callable.\n kwargs (dict): keyword arguments to be passed to the callable.\n n_repeat (int): number of times the callable is called. Increasing\n this value would improve the collected statistics at the cost\n of longer test time.\n name (str): the function name to be reported. If not given, the\n callable's ``__name__`` attribute is used.\n n_warmup (int): number of times the callable is called. The warm-up\n runs are not timed.\n max_duration (float): the maximum time (in seconds) that the entire\n test can use. If the taken time is longer than this limit, the test\n is stopped and the statistics collected up to the breakpoint is\n reported.\n devices (tuple): a tuple of device IDs (int) that will be timed during\n the timing test. 
If not given, the current device is used.\n\n Returns:\n :class:`~cupyx.profiler._time._PerfCaseResult`:\n an object collecting all test results.\n\n \"\"\"\n\n if name is None:\n name = func.__name__\n\n if devices is None:\n devices = (_cupy.cuda.get_device_id(),)\n\n if not callable(func):\n raise ValueError('`func` should be a callable object.')\n if not isinstance(args, tuple):\n raise ValueError('`args` should be of tuple type.')\n if not isinstance(kwargs, dict):\n raise ValueError('`kwargs` should be of dict type.')\n if not isinstance(n_repeat, int):\n raise ValueError('`n_repeat` should be an integer.')\n if not isinstance(name, str):\n raise ValueError('`name` should be a string.')\n if not isinstance(n_warmup, int):\n raise ValueError('`n_warmup` should be an integer.')\n if not _numpy.isreal(max_duration):\n raise ValueError('`max_duration` should be given in seconds')\n if not isinstance(devices, tuple):\n raise ValueError('`devices` should be of tuple type')\n\n return _repeat(\n func, args, kwargs, n_repeat, name, n_warmup, max_duration, devices)\n\n\ndef _repeat(\n func, args, kwargs, n_repeat, name, n_warmup, max_duration, devices):\n\n events_1 = []\n events_2 = []\n\n for i in devices:\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(i)\n events_1.append(_cupy.cuda.stream.Event())\n events_2.append(_cupy.cuda.stream.Event())\n finally:\n runtime.setDevice(prev_device)\n\n ev1 = _cupy.cuda.stream.Event()\n ev2 = _cupy.cuda.stream.Event()\n\n for i in range(n_warmup):\n func(*args, **kwargs)\n\n for event, device in zip(events_1, devices):\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(device)\n event.record()\n finally:\n runtime.setDevice(prev_device)\n event.synchronize()\n\n cpu_times = []\n gpu_times = [[] for i in events_1]\n duration = 0\n for i in range(n_repeat):\n for event, device in zip(events_1, devices):\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(device)\n event.record()\n finally:\n runtime.setDevice(prev_device)\n\n t1 = _time.perf_counter()\n\n func(*args, **kwargs)\n\n t2 = _time.perf_counter()\n cpu_time = t2 - t1\n cpu_times.append(cpu_time)\n\n for event, device in zip(events_2, devices):\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(device)\n event.record()\n finally:\n runtime.setDevice(prev_device)\n for event, device in zip(events_2, devices):\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(device)\n event.synchronize()\n finally:\n runtime.setDevice(prev_device)\n for i, (ev1, ev2) in enumerate(zip(events_1, events_2)):\n gpu_time = _cupy.cuda.get_elapsed_time(ev1, ev2) * 1e-3\n gpu_times[i].append(gpu_time)\n\n duration += _time.perf_counter() - t1\n if duration > max_duration:\n break\n\n ts = _numpy.asarray([cpu_times] + gpu_times, dtype=_numpy.float64)\n return _PerfCaseResult(name, ts, devices=devices)\n", "path": "cupyx/profiler/_time.py"}], "after_files": [{"content": "import math as _math\nimport time as _time\n\nimport numpy as _numpy\n\nimport cupy as _cupy\nfrom cupy_backends.cuda.api import runtime\n\n\nclass _PerfCaseResult:\n \"\"\" An obscure object encompassing timing results recorded by\n :func:`~cupyx.profiler.benchmark`. Simple statistics can be obtained by\n converting an instance of this class to a string.\n\n .. 
warning::\n This API is currently experimental and subject to change in future\n releases.\n\n \"\"\"\n\n def __init__(self, name, ts, devices):\n assert ts.ndim == 2\n assert ts.shape[0] == len(devices) + 1\n assert ts.shape[1] > 0\n self.name = name\n self._ts = ts\n self._devices = devices\n\n def __repr__(self) -> str:\n \"\"\" Returns a string representation of the object.\n\n Returns:\n str: A string representation of the object.\n \"\"\"\n return self.to_str(show_gpu=True)\n\n @property\n def cpu_times(self) -> _numpy.ndarray:\n \"\"\"A :class:`numpy.ndarray` of shape ``(n_repeat,)``, holding times spent\n on CPU in milliseconds.\n\n These values are delta of the host-side performance counter\n (:func:`time.perf_counter`) between each repeat step.\n \"\"\"\n return self._ts[0]\n\n @property\n def gpu_times(self) -> _numpy.ndarray:\n \"\"\"A :class:`numpy.ndarray` of shape ``(len(devices), n_repeat)``,\n holding times spent on GPU in milliseconds.\n\n These values are measured using ``cudaEventElapsedTime`` with events\n recoreded before/after each repeat step.\n \"\"\"\n return self._ts[1:]\n\n @staticmethod\n def _to_str_per_item(device_name, t):\n assert t.ndim == 1\n assert t.size > 0\n t_us = t * 1e6\n\n s = ' {}:{:9.03f} us'.format(device_name, t_us.mean())\n if t.size > 1:\n s += ' +/-{:6.03f} (min:{:9.03f} / max:{:9.03f}) us'.format(\n t_us.std(), t_us.min(), t_us.max())\n return s\n\n def to_str(self, show_gpu=False):\n results = [self._to_str_per_item('CPU', self._ts[0])]\n if show_gpu:\n for i, d in enumerate(self._devices):\n results.append(\n self._to_str_per_item('GPU-{}'.format(d),\n self._ts[1 + i]))\n return '{:<20s}:{}'.format(self.name, ' '.join(results))\n\n def __str__(self):\n return self.to_str(show_gpu=True)\n\n\ndef benchmark(\n func, args=(), kwargs={}, n_repeat=10000, *,\n name=None, n_warmup=10, max_duration=_math.inf, devices=None):\n \"\"\" Timing utility for measuring time spent by both CPU and GPU.\n\n This function is a very convenient helper for setting up a timing test. The\n GPU time is properly recorded by synchronizing internal streams. As a\n result, to time a multi-GPU function all participating devices must be\n passed as the ``devices`` argument so that this helper knows which devices\n to record. A simple example is given as follows:\n\n .. code-block:: py\n\n import cupy as cp\n from cupyx.profiler import benchmark\n\n def f(a, b):\n return 3 * cp.sin(-a) * b\n\n a = 0.5 - cp.random.random((100,))\n b = cp.random.random((100,))\n print(benchmark(f, (a, b), n_repeat=1000))\n\n\n Args:\n func (callable): a callable object to be timed.\n args (tuple): positional argumens to be passed to the callable.\n kwargs (dict): keyword arguments to be passed to the callable.\n n_repeat (int): number of times the callable is called. Increasing\n this value would improve the collected statistics at the cost\n of longer test time.\n name (str): the function name to be reported. If not given, the\n callable's ``__name__`` attribute is used.\n n_warmup (int): number of times the callable is called. The warm-up\n runs are not timed.\n max_duration (float): the maximum time (in seconds) that the entire\n test can use. If the taken time is longer than this limit, the test\n is stopped and the statistics collected up to the breakpoint is\n reported.\n devices (tuple): a tuple of device IDs (int) that will be timed during\n the timing test. 
If not given, the current device is used.\n\n Returns:\n :class:`~cupyx.profiler._time._PerfCaseResult`:\n an object collecting all test results.\n\n \"\"\"\n\n if name is None:\n name = func.__name__\n\n if devices is None:\n devices = (_cupy.cuda.get_device_id(),)\n\n if not callable(func):\n raise ValueError('`func` should be a callable object.')\n if not isinstance(args, tuple):\n raise ValueError('`args` should be of tuple type.')\n if not isinstance(kwargs, dict):\n raise ValueError('`kwargs` should be of dict type.')\n if not isinstance(n_repeat, int):\n raise ValueError('`n_repeat` should be an integer.')\n if not isinstance(name, str):\n raise ValueError('`name` should be a string.')\n if not isinstance(n_warmup, int):\n raise ValueError('`n_warmup` should be an integer.')\n if not _numpy.isreal(max_duration):\n raise ValueError('`max_duration` should be given in seconds')\n if not isinstance(devices, tuple):\n raise ValueError('`devices` should be of tuple type')\n\n return _repeat(\n func, args, kwargs, n_repeat, name, n_warmup, max_duration, devices)\n\n\ndef _repeat(\n func, args, kwargs, n_repeat, name, n_warmup, max_duration, devices):\n\n events_1 = []\n events_2 = []\n\n for i in devices:\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(i)\n events_1.append(_cupy.cuda.stream.Event())\n events_2.append(_cupy.cuda.stream.Event())\n finally:\n runtime.setDevice(prev_device)\n\n for i in range(n_warmup):\n func(*args, **kwargs)\n\n for event, device in zip(events_1, devices):\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(device)\n event.record()\n finally:\n runtime.setDevice(prev_device)\n event.synchronize()\n\n cpu_times = []\n gpu_times = [[] for i in events_1]\n duration = 0\n for i in range(n_repeat):\n for event, device in zip(events_1, devices):\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(device)\n event.record()\n finally:\n runtime.setDevice(prev_device)\n\n t1 = _time.perf_counter()\n\n func(*args, **kwargs)\n\n t2 = _time.perf_counter()\n cpu_time = t2 - t1\n cpu_times.append(cpu_time)\n\n for event, device in zip(events_2, devices):\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(device)\n event.record()\n finally:\n runtime.setDevice(prev_device)\n for event, device in zip(events_2, devices):\n prev_device = runtime.getDevice()\n try:\n runtime.setDevice(device)\n event.synchronize()\n finally:\n runtime.setDevice(prev_device)\n for i, (ev1, ev2) in enumerate(zip(events_1, events_2)):\n gpu_time = _cupy.cuda.get_elapsed_time(ev1, ev2) * 1e-3\n gpu_times[i].append(gpu_time)\n\n duration += _time.perf_counter() - t1\n if duration > max_duration:\n break\n\n ts = _numpy.asarray([cpu_times] + gpu_times, dtype=_numpy.float64)\n return _PerfCaseResult(name, ts, devices=devices)\n", "path": "cupyx/profiler/_time.py"}]}
| 3,517 | 360 |
gh_patches_debug_28031
|
rasdani/github-patches
|
git_diff
|
sonic-net__sonic-mgmt-2921
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[dualtor] Missing Loopback2 interface in device minigraph
<!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/Azure/SONiC/wiki#report-issues
---------------------------------------------------
GENERAL SUPPORT INFORMATION
---------------------------------------------------
The GitHub issue tracker is for bug reports and feature requests.
General support can be found at the following locations:
- SONiC Support Forums - https://groups.google.com/forum/#!forum/sonicproject
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
**Description**
Minigraph generated for Dual ToR devices does not have `Loopback2` interface.
The `linkmrgd` waits on `Loopback2` interface to perform the probe and to indicate subsequent switchover tasks.
Further, APP DB is missing *MUX* keys.
<!--
Briefly describe the problem you are having in a few paragraphs.
-->
**Steps to reproduce the issue:**
1. Manual switchover using CLI in dual TOR on standby TOR.
2. The switchover fails to happen.
3.
**Describe the results you received:**
ToR switchover failed.
**Describe the results you expected:**
Switchover should work. APP DB should contain *MUX* keys.
**Additional information you deem important:**
<!--
software/ASIC/Hardware Flatform version and info
-->
**Output of `show version`:**
```
(paste your output here)
```
**Attach debug file `sudo generate_dump`:**
```
(paste your output here)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ansible/library/dual_tor_facts.py`
Content:
```
1 from collections import defaultdict
2 class DualTorParser:
3
4 def __init__(self, hostname, testbed_facts, host_vars, vm_config, port_alias, vlan_intfs):
5 self.hostname = hostname
6 self.testbed_facts = testbed_facts
7 self.host_vars = host_vars
8 self.vm_config = vm_config
9 self.port_alias = port_alias
10 self.vlan_intfs = vlan_intfs
11 self.dual_tor_facts = {}
12
13 def parse_neighbor_tor(self):
14 '''
15 Parses information about the other ToR in a dual ToR pair
16 '''
17 neighbor = {}
18 neighbor['hostname'] = [dut for dut in self.testbed_facts['duts'] if dut != self.hostname][0]
19 neighbor['ip'] = self.host_vars[neighbor['hostname']]['ansible_host']
20 neighbor['hwsku'] = self.host_vars[neighbor['hostname']]['hwsku']
21
22 self.dual_tor_facts['neighbor'] = neighbor
23
24 def parse_tor_position(self):
25 '''
26 Determines the position ('U' for upper and 'L' for lower) of the ToR.
27
28 The upper ToR is always the first ToR listed in the testbed file
29 '''
30 self.dual_tor_facts['positions'] = {'upper': self.testbed_facts['duts'][0], 'lower': self.testbed_facts['duts'][1]}
31
32 def parse_loopback_ips(self):
33 '''
34 Parses the IPv4 and IPv6 loopback IPs for the DUTs
35
36 Similar to `parse_tor_position`, the ToR which comes first alphabetically is always assigned the first IP
37 '''
38
39 loopback_ips = defaultdict(dict)
40
41 ipv4_loopbacks = sorted(self.vm_config['DUT']['loopback']['ipv4'])
42 ipv6_loopbacks = sorted(self.vm_config['DUT']['loopback']['ipv6'])
43
44 for i, dut in enumerate(sorted(self.testbed_facts['duts'])):
45 loopback_ips[dut]['ipv4'] = ipv4_loopbacks[i]
46 loopback_ips[dut]['ipv6'] = ipv6_loopbacks[i]
47
48 self.dual_tor_facts['loopback'] = loopback_ips
49
50 def generate_cable_names(self):
51 cables = []
52
53 for server_num, dut_intf in enumerate(self.vlan_intfs):
54 name = '{}-Servers{}-SC'.format(self.hostname, server_num)
55 cable = {"hostname": name, "dut_intf": dut_intf}
56 cables.append(cable)
57
58 self.dual_tor_facts['cables'] = cables
59
60 def get_dual_tor_facts(self):
61 '''
62 Gathers facts related to a dual ToR configuration
63 '''
64 if 'dualtor' in self.testbed_facts['topo']:
65 self.parse_neighbor_tor()
66 self.parse_tor_position()
67 self.generate_cable_names()
68 self.parse_loopback_ips()
69
70 return self.dual_tor_facts
71
72
73 def main():
74 module = AnsibleModule(
75 argument_spec=dict(
76 hostname=dict(required=True, default=None, type='str'),
77 testbed_facts=dict(required=True, default=None, type='dict'),
78 hostvars=dict(required=True, default=None, type='dict'),
79 vm_config=dict(required=True, default=None, type='dict'),
80 port_alias=dict(required=True, default=None, type='list'),
81 vlan_intfs=dict(required=True, default=None, type='list')
82 ),
83 supports_check_mode=True
84 )
85 m_args = module.params
86 # testbed_facts ={u'comment': u'Dual-TOR testbed', u'conf-name': u'vms-kvm-dual-t0', u'ptf_ip': u'10.250.0.109', u'ptf_netmask': u'255.255.255.0', u'ptf_ipv6': u'fec0::ffff:afa:9', u'vm_base': u'VM0108', u'server': u'server_1', u'topo': u'dualtor', u'group-name': u'vms6-4', u'ptf': u'ptf-04', u'duts_map': {u'vlab-06': 1, u'vlab-05': 0}, u'ptf_netmask_v6': u'ffff:ffff:ffff:ffff::', u'ptf_image_name': u'docker-ptf', u'duts': [u'vlab-05', u'vlab-06']}
87 hostname = m_args['hostname']
88 testbed_facts = m_args['testbed_facts']
89 host_vars = m_args['hostvars']
90 vm_config = m_args['vm_config']
91 port_alias = m_args['port_alias']
92 vlan_intfs = m_args['vlan_intfs']
93 try:
94 dual_tor_parser = DualTorParser(hostname, testbed_facts, host_vars, vm_config, port_alias, vlan_intfs)
95 module.exit_json(ansible_facts={'dual_tor_facts': dual_tor_parser.get_dual_tor_facts()})
96 except Exception as e:
97 module.fail_json(msg=traceback.format_exc())
98
99 from ansible.module_utils.basic import *
100 if __name__== "__main__":
101 main()
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ansible/library/dual_tor_facts.py b/ansible/library/dual_tor_facts.py
--- a/ansible/library/dual_tor_facts.py
+++ b/ansible/library/dual_tor_facts.py
@@ -33,19 +33,26 @@
'''
Parses the IPv4 and IPv6 loopback IPs for the DUTs
- Similar to `parse_tor_position`, the ToR which comes first alphabetically is always assigned the first IP
+ Similar to `parse_tor_position`, the ToR which comes first in the testbed file is always assigned the first IP
'''
loopback_ips = defaultdict(dict)
-
- ipv4_loopbacks = sorted(self.vm_config['DUT']['loopback']['ipv4'])
- ipv6_loopbacks = sorted(self.vm_config['DUT']['loopback']['ipv6'])
-
- for i, dut in enumerate(sorted(self.testbed_facts['duts'])):
- loopback_ips[dut]['ipv4'] = ipv4_loopbacks[i]
- loopback_ips[dut]['ipv6'] = ipv6_loopbacks[i]
-
- self.dual_tor_facts['loopback'] = loopback_ips
+ addl_loopback_ips = defaultdict(dict)
+
+ for dut_num, dut in enumerate(self.testbed_facts['duts']):
+ loopback_ips[dut]['ipv4'] = self.vm_config['DUT']['loopback']['ipv4'][dut_num]
+ loopback_ips[dut]['ipv6'] = self.vm_config['DUT']['loopback']['ipv6'][dut_num]
+
+ for loopback_num in range(1, 3): # Generate two additional loopback IPs, Loopback1 and Loopback2
+ loopback_key = 'loopback{}'.format(loopback_num)
+ loopback_dict = {}
+ loopback_dict['ipv4'] = self.vm_config['DUT'][loopback_key]['ipv4'][dut_num]
+ loopback_dict['ipv6'] = self.vm_config['DUT'][loopback_key]['ipv6'][dut_num]
+ loopback_dict['host_ip_base_index'] = loopback_num * 2
+ addl_loopback_ips[dut][loopback_num] = loopback_dict
+
+ self.dual_tor_facts['loopback'] = loopback_ips
+ self.dual_tor_facts['addl_loopbacks'] = addl_loopback_ips
def generate_cable_names(self):
cables = []
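
As a rough illustration of what the patched `parse_loopback_ips` produces, here is a standalone re-implementation of the new loop; the `loopback1` / `loopback2` keys in `vm_config` and every IP value below are invented for illustration and are not taken from a real testbed file:

```py
# Standalone sketch of the data shape built by the patched parse_loopback_ips().
# vm_config keys and IP values are made up for illustration only.
vm_config = {
    "DUT": {
        "loopback":  {"ipv4": ["10.1.0.32/32", "10.1.0.33/32"],
                      "ipv6": ["fc00:1::32/128", "fc00:1::33/128"]},
        "loopback1": {"ipv4": ["10.1.0.34/32", "10.1.0.35/32"],
                      "ipv6": ["fc00:1::34/128", "fc00:1::35/128"]},
        "loopback2": {"ipv4": ["10.1.0.36/32", "10.1.0.37/32"],
                      "ipv6": ["fc00:1::36/128", "fc00:1::37/128"]},
    }
}
duts = ["vlab-05", "vlab-06"]

addl_loopback_ips = {}
for dut_num, dut in enumerate(duts):
    addl_loopback_ips[dut] = {}
    for loopback_num in range(1, 3):  # Loopback1 and Loopback2
        key = "loopback{}".format(loopback_num)
        addl_loopback_ips[dut][loopback_num] = {
            "ipv4": vm_config["DUT"][key]["ipv4"][dut_num],
            "ipv6": vm_config["DUT"][key]["ipv6"][dut_num],
            "host_ip_base_index": loopback_num * 2,
        }

print(addl_loopback_ips["vlab-05"][2])  # data a minigraph template could use for Loopback2
```

The module exposes the same structure as `dual_tor_facts['addl_loopbacks']`, which the minigraph generation presumably consumes to emit the missing `Loopback2` interface.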
|
{"golden_diff": "diff --git a/ansible/library/dual_tor_facts.py b/ansible/library/dual_tor_facts.py\n--- a/ansible/library/dual_tor_facts.py\n+++ b/ansible/library/dual_tor_facts.py\n@@ -33,19 +33,26 @@\n '''\n Parses the IPv4 and IPv6 loopback IPs for the DUTs\n \n- Similar to `parse_tor_position`, the ToR which comes first alphabetically is always assigned the first IP\n+ Similar to `parse_tor_position`, the ToR which comes first in the testbed file is always assigned the first IP\n '''\n \n loopback_ips = defaultdict(dict)\n-\n- ipv4_loopbacks = sorted(self.vm_config['DUT']['loopback']['ipv4'])\n- ipv6_loopbacks = sorted(self.vm_config['DUT']['loopback']['ipv6'])\n-\n- for i, dut in enumerate(sorted(self.testbed_facts['duts'])):\n- loopback_ips[dut]['ipv4'] = ipv4_loopbacks[i]\n- loopback_ips[dut]['ipv6'] = ipv6_loopbacks[i] \n-\n- self.dual_tor_facts['loopback'] = loopback_ips \n+ addl_loopback_ips = defaultdict(dict)\n+\n+ for dut_num, dut in enumerate(self.testbed_facts['duts']):\n+ loopback_ips[dut]['ipv4'] = self.vm_config['DUT']['loopback']['ipv4'][dut_num]\n+ loopback_ips[dut]['ipv6'] = self.vm_config['DUT']['loopback']['ipv6'][dut_num] \n+\n+ for loopback_num in range(1, 3): # Generate two additional loopback IPs, Loopback1 and Loopback2\n+ loopback_key = 'loopback{}'.format(loopback_num)\n+ loopback_dict = {}\n+ loopback_dict['ipv4'] = self.vm_config['DUT'][loopback_key]['ipv4'][dut_num]\n+ loopback_dict['ipv6'] = self.vm_config['DUT'][loopback_key]['ipv6'][dut_num]\n+ loopback_dict['host_ip_base_index'] = loopback_num * 2\n+ addl_loopback_ips[dut][loopback_num] = loopback_dict\n+\n+ self.dual_tor_facts['loopback'] = loopback_ips \n+ self.dual_tor_facts['addl_loopbacks'] = addl_loopback_ips\n \n def generate_cable_names(self):\n cables = []\n", "issue": "[dualtor] Missing Loopback2 interface in device minigraph\n<!--\r\nIf you are reporting a new issue, make sure that we do not have any duplicates\r\nalready open. You can ensure this by searching the issue list for this\r\nrepository. If there is a duplicate, please close your issue and add a comment\r\nto the existing issue instead.\r\n\r\nIf you suspect your issue is a bug, please edit your issue description to\r\ninclude the BUG REPORT INFORMATION shown below. If you fail to provide this\r\ninformation within 7 days, we cannot debug your issue and will close it. 
We\r\nwill, however, reopen it if you later provide the information.\r\n\r\nFor more information about reporting issues, see\r\nhttps://github.com/Azure/SONiC/wiki#report-issues\r\n\r\n---------------------------------------------------\r\nGENERAL SUPPORT INFORMATION\r\n---------------------------------------------------\r\n\r\nThe GitHub issue tracker is for bug reports and feature requests.\r\nGeneral support can be found at the following locations:\r\n\r\n- SONiC Support Forums - https://groups.google.com/forum/#!forum/sonicproject\r\n\r\n---------------------------------------------------\r\nBUG REPORT INFORMATION\r\n---------------------------------------------------\r\nUse the commands below to provide key information from your environment:\r\nYou do NOT have to include this information if this is a FEATURE REQUEST\r\n-->\r\n\r\n**Description**\r\nMinigraph generated for Dual ToR devices does not have `Loopback2` interface.\r\nThe `linkmrgd` waits on `Loopback2` interface to perform the probe and to indicate subsequent switchover tasks.\r\nFurther, APP DB is missing *MUX* keys.\r\n\r\n<!--\r\nBriefly describe the problem you are having in a few paragraphs.\r\n-->\r\n\r\n**Steps to reproduce the issue:**\r\n1. Manual switchover using CLI in dual TOR on standby TOR.\r\n2. The switchover fails to happen.\r\n3.\r\n\r\n**Describe the results you received:**\r\nToR switchover failed.\r\n\r\n**Describe the results you expected:**\r\nSwitchover should work. APP DB should contain *MUX* keys.\r\n\r\n**Additional information you deem important:**\r\n<!--\r\nsoftware/ASIC/Hardware Flatform version and info\r\n-->\r\n **Output of `show version`:**\r\n\r\n ```\r\n (paste your output here)\r\n ```\r\n\r\n **Attach debug file `sudo generate_dump`:**\r\n\r\n ```\r\n (paste your output here)\r\n ```\r\n\n", "before_files": [{"content": "from collections import defaultdict\nclass DualTorParser:\n\n def __init__(self, hostname, testbed_facts, host_vars, vm_config, port_alias, vlan_intfs):\n self.hostname = hostname\n self.testbed_facts = testbed_facts\n self.host_vars = host_vars\n self.vm_config = vm_config\n self.port_alias = port_alias\n self.vlan_intfs = vlan_intfs\n self.dual_tor_facts = {}\n\n def parse_neighbor_tor(self):\n '''\n Parses information about the other ToR in a dual ToR pair\n '''\n neighbor = {}\n neighbor['hostname'] = [dut for dut in self.testbed_facts['duts'] if dut != self.hostname][0]\n neighbor['ip'] = self.host_vars[neighbor['hostname']]['ansible_host']\n neighbor['hwsku'] = self.host_vars[neighbor['hostname']]['hwsku']\n\n self.dual_tor_facts['neighbor'] = neighbor\n\n def parse_tor_position(self):\n '''\n Determines the position ('U' for upper and 'L' for lower) of the ToR.\n\n The upper ToR is always the first ToR listed in the testbed file\n '''\n self.dual_tor_facts['positions'] = {'upper': self.testbed_facts['duts'][0], 'lower': self.testbed_facts['duts'][1]}\n\n def parse_loopback_ips(self):\n '''\n Parses the IPv4 and IPv6 loopback IPs for the DUTs\n\n Similar to `parse_tor_position`, the ToR which comes first alphabetically is always assigned the first IP\n '''\n\n loopback_ips = defaultdict(dict)\n\n ipv4_loopbacks = sorted(self.vm_config['DUT']['loopback']['ipv4'])\n ipv6_loopbacks = sorted(self.vm_config['DUT']['loopback']['ipv6'])\n\n for i, dut in enumerate(sorted(self.testbed_facts['duts'])):\n loopback_ips[dut]['ipv4'] = ipv4_loopbacks[i]\n loopback_ips[dut]['ipv6'] = ipv6_loopbacks[i] \n\n self.dual_tor_facts['loopback'] = loopback_ips \n\n def 
generate_cable_names(self):\n cables = []\n\n for server_num, dut_intf in enumerate(self.vlan_intfs):\n name = '{}-Servers{}-SC'.format(self.hostname, server_num)\n cable = {\"hostname\": name, \"dut_intf\": dut_intf}\n cables.append(cable)\n\n self.dual_tor_facts['cables'] = cables\n\n def get_dual_tor_facts(self):\n '''\n Gathers facts related to a dual ToR configuration\n '''\n if 'dualtor' in self.testbed_facts['topo']:\n self.parse_neighbor_tor()\n self.parse_tor_position()\n self.generate_cable_names()\n self.parse_loopback_ips()\n\n return self.dual_tor_facts\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n hostname=dict(required=True, default=None, type='str'),\n testbed_facts=dict(required=True, default=None, type='dict'),\n hostvars=dict(required=True, default=None, type='dict'),\n vm_config=dict(required=True, default=None, type='dict'),\n port_alias=dict(required=True, default=None, type='list'),\n vlan_intfs=dict(required=True, default=None, type='list')\n ),\n supports_check_mode=True\n )\n m_args = module.params\n # testbed_facts ={u'comment': u'Dual-TOR testbed', u'conf-name': u'vms-kvm-dual-t0', u'ptf_ip': u'10.250.0.109', u'ptf_netmask': u'255.255.255.0', u'ptf_ipv6': u'fec0::ffff:afa:9', u'vm_base': u'VM0108', u'server': u'server_1', u'topo': u'dualtor', u'group-name': u'vms6-4', u'ptf': u'ptf-04', u'duts_map': {u'vlab-06': 1, u'vlab-05': 0}, u'ptf_netmask_v6': u'ffff:ffff:ffff:ffff::', u'ptf_image_name': u'docker-ptf', u'duts': [u'vlab-05', u'vlab-06']}\n hostname = m_args['hostname']\n testbed_facts = m_args['testbed_facts']\n host_vars = m_args['hostvars']\n vm_config = m_args['vm_config']\n port_alias = m_args['port_alias']\n vlan_intfs = m_args['vlan_intfs']\n try:\n dual_tor_parser = DualTorParser(hostname, testbed_facts, host_vars, vm_config, port_alias, vlan_intfs)\n module.exit_json(ansible_facts={'dual_tor_facts': dual_tor_parser.get_dual_tor_facts()})\n except Exception as e:\n module.fail_json(msg=traceback.format_exc())\n\nfrom ansible.module_utils.basic import *\nif __name__== \"__main__\":\n main()\n", "path": "ansible/library/dual_tor_facts.py"}], "after_files": [{"content": "from collections import defaultdict\nclass DualTorParser:\n\n def __init__(self, hostname, testbed_facts, host_vars, vm_config, port_alias, vlan_intfs):\n self.hostname = hostname\n self.testbed_facts = testbed_facts\n self.host_vars = host_vars\n self.vm_config = vm_config\n self.port_alias = port_alias\n self.vlan_intfs = vlan_intfs\n self.dual_tor_facts = {}\n\n def parse_neighbor_tor(self):\n '''\n Parses information about the other ToR in a dual ToR pair\n '''\n neighbor = {}\n neighbor['hostname'] = [dut for dut in self.testbed_facts['duts'] if dut != self.hostname][0]\n neighbor['ip'] = self.host_vars[neighbor['hostname']]['ansible_host']\n neighbor['hwsku'] = self.host_vars[neighbor['hostname']]['hwsku']\n\n self.dual_tor_facts['neighbor'] = neighbor\n\n def parse_tor_position(self):\n '''\n Determines the position ('U' for upper and 'L' for lower) of the ToR.\n\n The upper ToR is always the first ToR listed in the testbed file\n '''\n self.dual_tor_facts['positions'] = {'upper': self.testbed_facts['duts'][0], 'lower': self.testbed_facts['duts'][1]}\n\n def parse_loopback_ips(self):\n '''\n Parses the IPv4 and IPv6 loopback IPs for the DUTs\n\n Similar to `parse_tor_position`, the ToR which comes first in the testbed file is always assigned the first IP\n '''\n\n loopback_ips = defaultdict(dict)\n addl_loopback_ips = defaultdict(dict)\n\n for dut_num, dut in 
enumerate(self.testbed_facts['duts']):\n loopback_ips[dut]['ipv4'] = self.vm_config['DUT']['loopback']['ipv4'][dut_num]\n loopback_ips[dut]['ipv6'] = self.vm_config['DUT']['loopback']['ipv6'][dut_num] \n\n for loopback_num in range(1, 3): # Generate two additional loopback IPs, Loopback1 and Loopback2\n loopback_key = 'loopback{}'.format(loopback_num)\n loopback_dict = {}\n loopback_dict['ipv4'] = self.vm_config['DUT'][loopback_key]['ipv4'][dut_num]\n loopback_dict['ipv6'] = self.vm_config['DUT'][loopback_key]['ipv6'][dut_num]\n loopback_dict['host_ip_base_index'] = loopback_num * 2\n addl_loopback_ips[dut][loopback_num] = loopback_dict\n\n self.dual_tor_facts['loopback'] = loopback_ips \n self.dual_tor_facts['addl_loopbacks'] = addl_loopback_ips\n\n def generate_cable_names(self):\n cables = []\n\n for server_num, dut_intf in enumerate(self.vlan_intfs):\n name = '{}-Servers{}-SC'.format(self.hostname, server_num)\n cable = {\"hostname\": name, \"dut_intf\": dut_intf}\n cables.append(cable)\n\n self.dual_tor_facts['cables'] = cables\n\n def get_dual_tor_facts(self):\n '''\n Gathers facts related to a dual ToR configuration\n '''\n if 'dualtor' in self.testbed_facts['topo']:\n self.parse_neighbor_tor()\n self.parse_tor_position()\n self.generate_cable_names()\n self.parse_loopback_ips()\n\n return self.dual_tor_facts\n\n\ndef main():\n module = AnsibleModule(\n argument_spec=dict(\n hostname=dict(required=True, default=None, type='str'),\n testbed_facts=dict(required=True, default=None, type='dict'),\n hostvars=dict(required=True, default=None, type='dict'),\n vm_config=dict(required=True, default=None, type='dict'),\n port_alias=dict(required=True, default=None, type='list'),\n vlan_intfs=dict(required=True, default=None, type='list')\n ),\n supports_check_mode=True\n )\n m_args = module.params\n # testbed_facts ={u'comment': u'Dual-TOR testbed', u'conf-name': u'vms-kvm-dual-t0', u'ptf_ip': u'10.250.0.109', u'ptf_netmask': u'255.255.255.0', u'ptf_ipv6': u'fec0::ffff:afa:9', u'vm_base': u'VM0108', u'server': u'server_1', u'topo': u'dualtor', u'group-name': u'vms6-4', u'ptf': u'ptf-04', u'duts_map': {u'vlab-06': 1, u'vlab-05': 0}, u'ptf_netmask_v6': u'ffff:ffff:ffff:ffff::', u'ptf_image_name': u'docker-ptf', u'duts': [u'vlab-05', u'vlab-06']}\n hostname = m_args['hostname']\n testbed_facts = m_args['testbed_facts']\n host_vars = m_args['hostvars']\n vm_config = m_args['vm_config']\n port_alias = m_args['port_alias']\n vlan_intfs = m_args['vlan_intfs']\n try:\n dual_tor_parser = DualTorParser(hostname, testbed_facts, host_vars, vm_config, port_alias, vlan_intfs)\n module.exit_json(ansible_facts={'dual_tor_facts': dual_tor_parser.get_dual_tor_facts()})\n except Exception as e:\n module.fail_json(msg=traceback.format_exc())\n\nfrom ansible.module_utils.basic import *\nif __name__== \"__main__\":\n main()\n", "path": "ansible/library/dual_tor_facts.py"}]}
| 2,100 | 557 |
gh_patches_debug_20290
|
rasdani/github-patches
|
git_diff
|
WeblateOrg__weblate-3602
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unchanged check should honor placeholders
**Is your feature request related to a problem? Please describe.**
It is possible to define placeholders using [`placeholder:` flag](https://docs.weblate.org/en/latest/admin/checks.html#customizing-behavior). The [Unchanged translation check](https://docs.weblate.org/en/latest/user/checks.html#unchanged-translation) should deal with those correctly.
**Describe the solution you'd like**
The check strip custom placeholders from the string similarly like we do with format strings:
https://github.com/WeblateOrg/weblate/blob/ddfbd298a31a95229fa4487cfb2a5a4878da7b2b/weblate/checks/same.py#L107-L137
**Additional context**
Getting `placeholder:` flag value:
https://github.com/WeblateOrg/weblate/blob/256fa9370edc90c2404df6178d1daa002d441e2f/weblate/checks/base.py#L179
Building regexp for placeholders:
https://github.com/WeblateOrg/weblate/blob/256fa9370edc90c2404df6178d1daa002d441e2f/weblate/checks/placeholders.py#L57
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/88807457-unchanged-check-should-honor-placeholders?utm_campaign=plugin&utm_content=tracker%2F253393&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F253393&utm_medium=issues&utm_source=github).
</bountysource-plugin>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `weblate/checks/same.py`
Content:
```
1 #
2 # Copyright © 2012 - 2020 Michal Čihař <[email protected]>
3 #
4 # This file is part of Weblate <https://weblate.org/>
5 #
6 # This program is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # This program is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this program. If not, see <https://www.gnu.org/licenses/>.
18 #
19
20
21 import re
22
23 from django.utils.html import strip_tags
24 from django.utils.translation import gettext_lazy as _
25
26 from weblate.checks.base import TargetCheck
27 from weblate.checks.data import SAME_BLACKLIST
28 from weblate.checks.format import (
29 C_PRINTF_MATCH,
30 PHP_PRINTF_MATCH,
31 PYTHON_BRACE_MATCH,
32 PYTHON_PRINTF_MATCH,
33 )
34 from weblate.checks.languages import LANGUAGES
35 from weblate.checks.qt import QT_FORMAT_MATCH, QT_PLURAL_MATCH
36 from weblate.checks.ruby import RUBY_FORMAT_MATCH
37
38 # Email address to ignore
39 EMAIL_RE = re.compile(r"[a-z0-9_.-]+@[a-z0-9_.-]+\.[a-z0-9-]{2,}", re.IGNORECASE)
40
41 URL_RE = re.compile(
42 r"(?:http|ftp)s?://" # http:// or https://
43 r"(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+"
44 r"(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|" # domain...
45 r"localhost|" # localhost...
46 r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})" # ...or ip
47 r"(?::\d+)?" # optional port
48 r"(?:/?|[/?]\S+)$",
49 re.IGNORECASE,
50 )
51
52 HASH_RE = re.compile(r"#[A-Za-z0-9_-]*")
53
54 DOMAIN_RE = re.compile(
55 r"(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+"
56 r"(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)",
57 re.IGNORECASE,
58 )
59
60 PATH_RE = re.compile(r"(^|[ ])(/[a-zA-Z0-9=:?._-]+)+")
61
62 TEMPLATE_RE = re.compile(r"{[a-z_-]+}|@[A-Z_]@", re.IGNORECASE)
63
64 RST_MATCH = re.compile(r"(?::(ref|config:option|file|guilabel):`[^`]+`|``[^`]+``)")
65
66 SPLIT_RE = re.compile(
67 r"(?:\&(?:nbsp|rsaquo|lt|gt|amp|ldquo|rdquo|times|quot);|"
68 + r'[() ,.^`"\'\\/_<>!?;:|{}*^@%#&~=+\r\n✓—‑…\[\]0-9-])+',
69 re.IGNORECASE,
70 )
71
72 EMOJI_RE = re.compile("[\U00002600-\U000027BF]|[\U0001f000-\U0001fffd]")
73
74 # Docbook tags to ignore
75 DB_TAGS = ("screen", "indexterm", "programlisting")
76
77
78 def strip_format(msg, flags):
79 """Remove format strings from the strings.
80
81 These are quite often not changed by translators.
82 """
83 if "python-format" in flags:
84 regex = PYTHON_PRINTF_MATCH
85 elif "python-brace-format" in flags:
86 regex = PYTHON_BRACE_MATCH
87 elif "php-format" in flags:
88 regex = PHP_PRINTF_MATCH
89 elif "c-format" in flags:
90 regex = C_PRINTF_MATCH
91 elif "qt-format" in flags:
92 regex = QT_FORMAT_MATCH
93 elif "qt-plural-format" in flags:
94 regex = QT_PLURAL_MATCH
95 elif "ruby-format" in flags:
96 regex = RUBY_FORMAT_MATCH
97 elif "rst-text" in flags:
98 regex = RST_MATCH
99 else:
100 return msg
101 stripped = regex.sub("", msg)
102 return stripped
103
104
105 def strip_string(msg, flags):
106 """Strip (usually) not translated parts from the string."""
107 # Strip HTML markup
108 stripped = strip_tags(msg)
109
110 # Strip format strings
111 stripped = strip_format(stripped, flags)
112
113 # Remove emojis
114 stripped = EMOJI_RE.sub(" ", stripped)
115
116 # Remove email addresses
117 stripped = EMAIL_RE.sub("", stripped)
118
119 # Strip full URLs
120 stripped = URL_RE.sub("", stripped)
121
122 # Strip hash tags / IRC channels
123 stripped = HASH_RE.sub("", stripped)
124
125 # Strip domain names/URLs
126 stripped = DOMAIN_RE.sub("", stripped)
127
128 # Strip file/URL paths
129 stripped = PATH_RE.sub("", stripped)
130
131 # Strip template markup
132 stripped = TEMPLATE_RE.sub("", stripped)
133
134 # Cleanup trailing/leading chars
135 return stripped
136
137
138 def test_word(word):
139 """Test whether word should be ignored."""
140 return len(word) <= 2 or word in SAME_BLACKLIST or word in LANGUAGES
141
142
143 class SameCheck(TargetCheck):
144 """Check for not translated entries."""
145
146 check_id = "same"
147 name = _("Unchanged translation")
148 description = _("Source and translation are identical")
149 severity = "warning"
150
151 def should_ignore(self, source, unit):
152 """Check whether given unit should be ignored."""
153 if "strict-same" in unit.all_flags:
154 return False
155 # Ignore some docbook tags
156 if unit.note.startswith("Tag: ") and unit.note[5:] in DB_TAGS:
157 return True
158
159 # Lower case source
160 lower_source = source.lower()
161
162 # Check special things like 1:4 1/2 or copyright
163 if (
164 len(source.strip("0123456789:/,.")) <= 1
165 or "(c) copyright" in lower_source
166 or "©" in source
167 ):
168 return True
169 # Strip format strings
170 stripped = strip_string(source, unit.all_flags)
171
172 # Ignore strings which don't contain any string to translate
173 # or just single letter (usually unit or something like that)
174 # or are whole uppercase (abbreviations)
175 if len(stripped) <= 1 or stripped.isupper():
176 return True
177 # Check if we have any word which is not in blacklist
178 # (words which are often same in foreign language)
179 for word in SPLIT_RE.split(stripped.lower()):
180 if not test_word(word):
181 return False
182 return True
183
184 def should_skip(self, unit):
185 if super().should_skip(unit):
186 return True
187
188 source_language = unit.translation.component.project.source_language.base_code
189
190 # Ignore the check for source language,
191 # English variants will have most things not translated
192 # Interlingua is also quite often similar to English
193 if self.is_language(unit, source_language) or (
194 source_language == "en" and self.is_language(unit, ("en", "ia"))
195 ):
196 return True
197
198 return False
199
200 def check_single(self, source, target, unit):
201 # One letter things are usually labels or decimal/thousand separators
202 if len(source) <= 1 and len(target) <= 1:
203 return False
204
205 # Check for ignoring
206 if self.should_ignore(source, unit):
207 return False
208
209 return source == target
210
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/weblate/checks/same.py b/weblate/checks/same.py
--- a/weblate/checks/same.py
+++ b/weblate/checks/same.py
@@ -140,6 +140,17 @@
return len(word) <= 2 or word in SAME_BLACKLIST or word in LANGUAGES
+def strip_placeholders(msg, unit):
+
+ return re.sub(
+ "|".join(
+ re.escape(param) for param in unit.all_flags.get_value("placeholders")
+ ),
+ "",
+ msg,
+ )
+
+
class SameCheck(TargetCheck):
"""Check for not translated entries."""
@@ -169,6 +180,10 @@
# Strip format strings
stripped = strip_string(source, unit.all_flags)
+ # Strip placeholder strings
+ if "placeholders" in unit.all_flags:
+ stripped = strip_placeholders(stripped, unit)
+
# Ignore strings which don't contain any string to translate
# or just single letter (usually unit or something like that)
# or are whole uppercase (abbreviations)
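
To make the intent of the new `strip_placeholders` helper concrete, here is a minimal standalone sketch; flag handling is simplified to a plain list, whereas the real code reads the values via `unit.all_flags.get_value("placeholders")`:

```py
# Minimal standalone sketch of the placeholder stripping added by the patch.
# The "placeholders" flag is modelled as a plain Python list for illustration.
import re

def strip_placeholders(msg, placeholders):
    return re.sub("|".join(re.escape(p) for p in placeholders), "", msg)

source = "%org_name% / %project_name%"
stripped = strip_placeholders(source, ["%org_name%", "%project_name%"])
print(repr(stripped))  # ' / ' -- nothing translatable is left, so an identical
                       # "translation" no longer needs to trigger the unchanged check
```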
|
{"golden_diff": "diff --git a/weblate/checks/same.py b/weblate/checks/same.py\n--- a/weblate/checks/same.py\n+++ b/weblate/checks/same.py\n@@ -140,6 +140,17 @@\n return len(word) <= 2 or word in SAME_BLACKLIST or word in LANGUAGES\n \n \n+def strip_placeholders(msg, unit):\n+\n+ return re.sub(\n+ \"|\".join(\n+ re.escape(param) for param in unit.all_flags.get_value(\"placeholders\")\n+ ),\n+ \"\",\n+ msg,\n+ )\n+\n+\n class SameCheck(TargetCheck):\n \"\"\"Check for not translated entries.\"\"\"\n \n@@ -169,6 +180,10 @@\n # Strip format strings\n stripped = strip_string(source, unit.all_flags)\n \n+ # Strip placeholder strings\n+ if \"placeholders\" in unit.all_flags:\n+ stripped = strip_placeholders(stripped, unit)\n+\n # Ignore strings which don't contain any string to translate\n # or just single letter (usually unit or something like that)\n # or are whole uppercase (abbreviations)\n", "issue": "Unchanged check should honor placeholders\n**Is your feature request related to a problem? Please describe.**\r\nIt is possible to define placeholders using [`placeholder:` flag](https://docs.weblate.org/en/latest/admin/checks.html#customizing-behavior). The [Unchanged translation check](https://docs.weblate.org/en/latest/user/checks.html#unchanged-translation) should deal with those correctly.\r\n\r\n**Describe the solution you'd like**\r\nThe check strip custom placeholders from the string similarly like we do with format strings:\r\n\r\nhttps://github.com/WeblateOrg/weblate/blob/ddfbd298a31a95229fa4487cfb2a5a4878da7b2b/weblate/checks/same.py#L107-L137\r\n\r\n**Additional context**\r\n\r\nGetting `placeholder:` flag value:\r\n\r\nhttps://github.com/WeblateOrg/weblate/blob/256fa9370edc90c2404df6178d1daa002d441e2f/weblate/checks/base.py#L179\r\n\r\nBuilding regexp for placeholders:\r\n\r\nhttps://github.com/WeblateOrg/weblate/blob/256fa9370edc90c2404df6178d1daa002d441e2f/weblate/checks/placeholders.py#L57 \r\n\r\n<bountysource-plugin>\r\n\r\n---\r\nWant to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/88807457-unchanged-check-should-honor-placeholders?utm_campaign=plugin&utm_content=tracker%2F253393&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F253393&utm_medium=issues&utm_source=github).\r\n</bountysource-plugin>\n", "before_files": [{"content": "#\n# Copyright \u00a9 2012 - 2020 Michal \u010ciha\u0159 <[email protected]>\n#\n# This file is part of Weblate <https://weblate.org/>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <https://www.gnu.org/licenses/>.\n#\n\n\nimport re\n\nfrom django.utils.html import strip_tags\nfrom django.utils.translation import gettext_lazy as _\n\nfrom weblate.checks.base import TargetCheck\nfrom weblate.checks.data import SAME_BLACKLIST\nfrom weblate.checks.format import (\n C_PRINTF_MATCH,\n PHP_PRINTF_MATCH,\n PYTHON_BRACE_MATCH,\n PYTHON_PRINTF_MATCH,\n)\nfrom weblate.checks.languages import LANGUAGES\nfrom weblate.checks.qt import QT_FORMAT_MATCH, QT_PLURAL_MATCH\nfrom weblate.checks.ruby import RUBY_FORMAT_MATCH\n\n# Email address to ignore\nEMAIL_RE = re.compile(r\"[a-z0-9_.-]+@[a-z0-9_.-]+\\.[a-z0-9-]{2,}\", re.IGNORECASE)\n\nURL_RE = re.compile(\n r\"(?:http|ftp)s?://\" # http:// or https://\n r\"(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\\.)+\"\n r\"(?:[A-Z]{2,6}\\.?|[A-Z0-9-]{2,}\\.?)|\" # domain...\n r\"localhost|\" # localhost...\n r\"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})\" # ...or ip\n r\"(?::\\d+)?\" # optional port\n r\"(?:/?|[/?]\\S+)$\",\n re.IGNORECASE,\n)\n\nHASH_RE = re.compile(r\"#[A-Za-z0-9_-]*\")\n\nDOMAIN_RE = re.compile(\n r\"(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\\.)+\"\n r\"(?:[A-Z]{2,6}\\.?|[A-Z0-9-]{2,}\\.?)\",\n re.IGNORECASE,\n)\n\nPATH_RE = re.compile(r\"(^|[ ])(/[a-zA-Z0-9=:?._-]+)+\")\n\nTEMPLATE_RE = re.compile(r\"{[a-z_-]+}|@[A-Z_]@\", re.IGNORECASE)\n\nRST_MATCH = re.compile(r\"(?::(ref|config:option|file|guilabel):`[^`]+`|``[^`]+``)\")\n\nSPLIT_RE = re.compile(\n r\"(?:\\&(?:nbsp|rsaquo|lt|gt|amp|ldquo|rdquo|times|quot);|\"\n + r'[() ,.^`\"\\'\\\\/_<>!?;:|{}*^@%#&~=+\\r\\n\u2713\u2014\u2011\u2026\\[\\]0-9-])+',\n re.IGNORECASE,\n)\n\nEMOJI_RE = re.compile(\"[\\U00002600-\\U000027BF]|[\\U0001f000-\\U0001fffd]\")\n\n# Docbook tags to ignore\nDB_TAGS = (\"screen\", \"indexterm\", \"programlisting\")\n\n\ndef strip_format(msg, flags):\n \"\"\"Remove format strings from the strings.\n\n These are quite often not changed by translators.\n \"\"\"\n if \"python-format\" in flags:\n regex = PYTHON_PRINTF_MATCH\n elif \"python-brace-format\" in flags:\n regex = PYTHON_BRACE_MATCH\n elif \"php-format\" in flags:\n regex = PHP_PRINTF_MATCH\n elif \"c-format\" in flags:\n regex = C_PRINTF_MATCH\n elif \"qt-format\" in flags:\n regex = QT_FORMAT_MATCH\n elif \"qt-plural-format\" in flags:\n regex = QT_PLURAL_MATCH\n elif \"ruby-format\" in flags:\n regex = RUBY_FORMAT_MATCH\n elif \"rst-text\" in flags:\n regex = RST_MATCH\n else:\n return msg\n stripped = regex.sub(\"\", msg)\n return stripped\n\n\ndef strip_string(msg, flags):\n \"\"\"Strip (usually) not translated parts from the string.\"\"\"\n # Strip HTML markup\n stripped = strip_tags(msg)\n\n # Strip format strings\n stripped = strip_format(stripped, flags)\n\n # Remove emojis\n stripped = EMOJI_RE.sub(\" \", stripped)\n\n # Remove email addresses\n stripped = EMAIL_RE.sub(\"\", stripped)\n\n # Strip full URLs\n stripped = URL_RE.sub(\"\", stripped)\n\n # Strip hash tags / IRC channels\n stripped = HASH_RE.sub(\"\", stripped)\n\n # Strip domain names/URLs\n stripped = DOMAIN_RE.sub(\"\", stripped)\n\n # Strip file/URL paths\n stripped = PATH_RE.sub(\"\", stripped)\n\n # Strip template markup\n stripped = TEMPLATE_RE.sub(\"\", stripped)\n\n # Cleanup trailing/leading chars\n return stripped\n\n\ndef test_word(word):\n \"\"\"Test whether word should be ignored.\"\"\"\n return len(word) <= 2 or word in SAME_BLACKLIST or word in LANGUAGES\n\n\nclass SameCheck(TargetCheck):\n \"\"\"Check for not translated entries.\"\"\"\n\n check_id = \"same\"\n name = _(\"Unchanged translation\")\n 
description = _(\"Source and translation are identical\")\n severity = \"warning\"\n\n def should_ignore(self, source, unit):\n \"\"\"Check whether given unit should be ignored.\"\"\"\n if \"strict-same\" in unit.all_flags:\n return False\n # Ignore some docbook tags\n if unit.note.startswith(\"Tag: \") and unit.note[5:] in DB_TAGS:\n return True\n\n # Lower case source\n lower_source = source.lower()\n\n # Check special things like 1:4 1/2 or copyright\n if (\n len(source.strip(\"0123456789:/,.\")) <= 1\n or \"(c) copyright\" in lower_source\n or \"\u00a9\" in source\n ):\n return True\n # Strip format strings\n stripped = strip_string(source, unit.all_flags)\n\n # Ignore strings which don't contain any string to translate\n # or just single letter (usually unit or something like that)\n # or are whole uppercase (abbreviations)\n if len(stripped) <= 1 or stripped.isupper():\n return True\n # Check if we have any word which is not in blacklist\n # (words which are often same in foreign language)\n for word in SPLIT_RE.split(stripped.lower()):\n if not test_word(word):\n return False\n return True\n\n def should_skip(self, unit):\n if super().should_skip(unit):\n return True\n\n source_language = unit.translation.component.project.source_language.base_code\n\n # Ignore the check for source language,\n # English variants will have most things not translated\n # Interlingua is also quite often similar to English\n if self.is_language(unit, source_language) or (\n source_language == \"en\" and self.is_language(unit, (\"en\", \"ia\"))\n ):\n return True\n\n return False\n\n def check_single(self, source, target, unit):\n # One letter things are usually labels or decimal/thousand separators\n if len(source) <= 1 and len(target) <= 1:\n return False\n\n # Check for ignoring\n if self.should_ignore(source, unit):\n return False\n\n return source == target\n", "path": "weblate/checks/same.py"}], "after_files": [{"content": "#\n# Copyright \u00a9 2012 - 2020 Michal \u010ciha\u0159 <[email protected]>\n#\n# This file is part of Weblate <https://weblate.org/>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <https://www.gnu.org/licenses/>.\n#\n\n\nimport re\n\nfrom django.utils.html import strip_tags\nfrom django.utils.translation import gettext_lazy as _\n\nfrom weblate.checks.base import TargetCheck\nfrom weblate.checks.data import SAME_BLACKLIST\nfrom weblate.checks.format import (\n C_PRINTF_MATCH,\n PHP_PRINTF_MATCH,\n PYTHON_BRACE_MATCH,\n PYTHON_PRINTF_MATCH,\n)\nfrom weblate.checks.languages import LANGUAGES\nfrom weblate.checks.qt import QT_FORMAT_MATCH, QT_PLURAL_MATCH\nfrom weblate.checks.ruby import RUBY_FORMAT_MATCH\n\n# Email address to ignore\nEMAIL_RE = re.compile(r\"[a-z0-9_.-]+@[a-z0-9_.-]+\\.[a-z0-9-]{2,}\", re.IGNORECASE)\n\nURL_RE = re.compile(\n r\"(?:http|ftp)s?://\" # http:// or https://\n r\"(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\\.)+\"\n r\"(?:[A-Z]{2,6}\\.?|[A-Z0-9-]{2,}\\.?)|\" # domain...\n r\"localhost|\" # localhost...\n r\"\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})\" # ...or ip\n r\"(?::\\d+)?\" # optional port\n r\"(?:/?|[/?]\\S+)$\",\n re.IGNORECASE,\n)\n\nHASH_RE = re.compile(r\"#[A-Za-z0-9_-]*\")\n\nDOMAIN_RE = re.compile(\n r\"(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\\.)+\"\n r\"(?:[A-Z]{2,6}\\.?|[A-Z0-9-]{2,}\\.?)\",\n re.IGNORECASE,\n)\n\nPATH_RE = re.compile(r\"(^|[ ])(/[a-zA-Z0-9=:?._-]+)+\")\n\nTEMPLATE_RE = re.compile(r\"{[a-z_-]+}|@[A-Z_]@\", re.IGNORECASE)\n\nRST_MATCH = re.compile(r\"(?::(ref|config:option|file|guilabel):`[^`]+`|``[^`]+``)\")\n\nSPLIT_RE = re.compile(\n r\"(?:\\&(?:nbsp|rsaquo|lt|gt|amp|ldquo|rdquo|times|quot);|\"\n + r'[() ,.^`\"\\'\\\\/_<>!?;:|{}*^@%#&~=+\\r\\n\u2713\u2014\u2011\u2026\\[\\]0-9-])+',\n re.IGNORECASE,\n)\n\nEMOJI_RE = re.compile(\"[\\U00002600-\\U000027BF]|[\\U0001f000-\\U0001fffd]\")\n\n# Docbook tags to ignore\nDB_TAGS = (\"screen\", \"indexterm\", \"programlisting\")\n\n\ndef strip_format(msg, flags):\n \"\"\"Remove format strings from the strings.\n\n These are quite often not changed by translators.\n \"\"\"\n if \"python-format\" in flags:\n regex = PYTHON_PRINTF_MATCH\n elif \"python-brace-format\" in flags:\n regex = PYTHON_BRACE_MATCH\n elif \"php-format\" in flags:\n regex = PHP_PRINTF_MATCH\n elif \"c-format\" in flags:\n regex = C_PRINTF_MATCH\n elif \"qt-format\" in flags:\n regex = QT_FORMAT_MATCH\n elif \"qt-plural-format\" in flags:\n regex = QT_PLURAL_MATCH\n elif \"ruby-format\" in flags:\n regex = RUBY_FORMAT_MATCH\n elif \"rst-text\" in flags:\n regex = RST_MATCH\n else:\n return msg\n stripped = regex.sub(\"\", msg)\n return stripped\n\n\ndef strip_string(msg, flags):\n \"\"\"Strip (usually) not translated parts from the string.\"\"\"\n # Strip HTML markup\n stripped = strip_tags(msg)\n\n # Strip format strings\n stripped = strip_format(stripped, flags)\n\n # Remove emojis\n stripped = EMOJI_RE.sub(\" \", stripped)\n\n # Remove email addresses\n stripped = EMAIL_RE.sub(\"\", stripped)\n\n # Strip full URLs\n stripped = URL_RE.sub(\"\", stripped)\n\n # Strip hash tags / IRC channels\n stripped = HASH_RE.sub(\"\", stripped)\n\n # Strip domain names/URLs\n stripped = DOMAIN_RE.sub(\"\", stripped)\n\n # Strip file/URL paths\n stripped = PATH_RE.sub(\"\", stripped)\n\n # Strip template markup\n stripped = TEMPLATE_RE.sub(\"\", stripped)\n\n # Cleanup trailing/leading chars\n return stripped\n\n\ndef test_word(word):\n \"\"\"Test whether word should be ignored.\"\"\"\n return len(word) <= 2 or word in SAME_BLACKLIST or word in LANGUAGES\n\n\ndef strip_placeholders(msg, unit):\n\n return re.sub(\n \"|\".join(\n re.escape(param) for param in unit.all_flags.get_value(\"placeholders\")\n 
),\n \"\",\n msg,\n )\n\n\nclass SameCheck(TargetCheck):\n \"\"\"Check for not translated entries.\"\"\"\n\n check_id = \"same\"\n name = _(\"Unchanged translation\")\n description = _(\"Source and translation are identical\")\n severity = \"warning\"\n\n def should_ignore(self, source, unit):\n \"\"\"Check whether given unit should be ignored.\"\"\"\n if \"strict-same\" in unit.all_flags:\n return False\n # Ignore some docbook tags\n if unit.note.startswith(\"Tag: \") and unit.note[5:] in DB_TAGS:\n return True\n\n # Lower case source\n lower_source = source.lower()\n\n # Check special things like 1:4 1/2 or copyright\n if (\n len(source.strip(\"0123456789:/,.\")) <= 1\n or \"(c) copyright\" in lower_source\n or \"\u00a9\" in source\n ):\n return True\n # Strip format strings\n stripped = strip_string(source, unit.all_flags)\n\n # Strip placeholder strings\n if \"placeholders\" in unit.all_flags:\n stripped = strip_placeholders(stripped, unit)\n\n # Ignore strings which don't contain any string to translate\n # or just single letter (usually unit or something like that)\n # or are whole uppercase (abbreviations)\n if len(stripped) <= 1 or stripped.isupper():\n return True\n # Check if we have any word which is not in blacklist\n # (words which are often same in foreign language)\n for word in SPLIT_RE.split(stripped.lower()):\n if not test_word(word):\n return False\n return True\n\n def should_skip(self, unit):\n if super().should_skip(unit):\n return True\n\n source_language = unit.translation.component.project.source_language.base_code\n\n # Ignore the check for source language,\n # English variants will have most things not translated\n # Interlingua is also quite often similar to English\n if self.is_language(unit, source_language) or (\n source_language == \"en\" and self.is_language(unit, (\"en\", \"ia\"))\n ):\n return True\n\n return False\n\n def check_single(self, source, target, unit):\n # One letter things are usually labels or decimal/thousand separators\n if len(source) <= 1 and len(target) <= 1:\n return False\n\n # Check for ignoring\n if self.should_ignore(source, unit):\n return False\n\n return source == target\n", "path": "weblate/checks/same.py"}]}
| 3,018 | 252 |
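
The Weblate patch above strips configured placeholders from the source string before the unchanged-translation heuristic runs. As a rough, self-contained sketch of the same idea — using a plain list of placeholder strings and a made-up `%BRAND%` marker instead of Weblate's `unit.all_flags` object — the helper boils down to one regular-expression substitution:

```python
import re

def strip_placeholders(msg, placeholders):
    # Join the escaped placeholder literals into one alternation and delete
    # every occurrence, mirroring the helper added in the diff (which reads
    # the list from unit.all_flags.get_value("placeholders")).
    return re.sub("|".join(re.escape(p) for p in placeholders), "", msg)

source = "Welcome to %BRAND%"
print(strip_placeholders(source, ["%BRAND%"]))  # -> "Welcome to "
```

If what remains after stripping is empty or trivial, the check is skipped, so strings that consist mostly of placeholders are no longer flagged as untranslated.
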
gh_patches_debug_5509
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-788
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Install.py script assumes database to exist
## Description
The install.py script hard-codes a hostname check to decide whether we're using the Docker-created Postgres container, and it only creates the databases if we are:
`if DATABASES[database_key]["HOST"] == "db":`
In https://github.com/centerofci/mathesar/pull/747, we gave our containers persistent names, and the db container's name was set to `mathesar_db`.
This causes every fresh setup of Mathesar to fail.
## Expected behavior
Following the instructions in the README should work without any errors.
## To Reproduce
<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->
* Stop mathesar, if you have it running
* Remove your database
* Start mathesar and run migrations and install.py script
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `install.py`
Content:
```
1 """
2 This script installs functions and types for Mathesar onto the configured DB.
3 """
4 from config.settings import DATABASES
5 from db import install
6 import sys
7 import getopt
8
9
10 def main():
11 skip_confirm = False
12 (opts, _) = getopt.getopt(sys.argv[1:], ":s", ["skip-confirm"])
13 for (opt, value) in opts:
14 if (opt == "-s") or (opt == "--skip-confirm"):
15 skip_confirm = True
16 for database_key in [key for key in DATABASES if key != "default"]:
17 install_on_db_with_key(database_key, skip_confirm)
18
19
20 def install_on_db_with_key(database_key, skip_confirm):
21 if DATABASES[database_key]["HOST"] == "db":
22 # if we're going to install on the docker-created Postgres, we'll
23 # create the DB
24 print("Creating Mathesar DB on docker-created PostgreSQL instance")
25 install.create_mathesar_database(
26 DATABASES[database_key]["NAME"],
27 DATABASES["default"]["USER"],
28 DATABASES["default"]["PASSWORD"],
29 DATABASES["default"]["HOST"],
30 DATABASES["default"]["NAME"],
31 DATABASES["default"]["PORT"],
32 )
33 print(f"Created DB is {DATABASES['mathesar_tables']['NAME']}")
34 else:
35 # if we're installing anywhere else, we require the DB to exist in
36 # advance.
37 username = DATABASES[database_key]["USER"]
38 password = DATABASES[database_key]["PASSWORD"]
39 host = DATABASES[database_key]["HOST"]
40 db_name = DATABASES[database_key]["NAME"]
41 port = DATABASES[database_key]["PORT"]
42 print("Installing Mathesar DB on preexisting PostgreSQL instance...")
43 if skip_confirm is False:
44 confirmation = input(
45 f"Mathesar will be installed on DB {db_name} at host {host}."
46 "Confirm? (y/n) > "
47 )
48 if (confirmation.lower() in ["y", "yes"]) or (skip_confirm is True):
49 print("Installing...")
50 install.install_mathesar_on_preexisting_database(
51 username,
52 password,
53 host,
54 db_name,
55 port,
56 )
57 else:
58 print("Skipping DB with key {database_key}.")
59
60
61 if __name__ == "__main__":
62 main()
63
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/install.py b/install.py
--- a/install.py
+++ b/install.py
@@ -18,7 +18,7 @@
def install_on_db_with_key(database_key, skip_confirm):
- if DATABASES[database_key]["HOST"] == "db":
+ if DATABASES[database_key]["HOST"] == "mathesar_db":
# if we're going to install on the docker-created Postgres, we'll
# create the DB
print("Creating Mathesar DB on docker-created PostgreSQL instance")
|
{"golden_diff": "diff --git a/install.py b/install.py\n--- a/install.py\n+++ b/install.py\n@@ -18,7 +18,7 @@\n \n \n def install_on_db_with_key(database_key, skip_confirm):\n- if DATABASES[database_key][\"HOST\"] == \"db\":\n+ if DATABASES[database_key][\"HOST\"] == \"mathesar_db\":\n # if we're going to install on the docker-created Postgres, we'll\n # create the DB\n print(\"Creating Mathesar DB on docker-created PostgreSQL instance\")\n", "issue": "Install.py script assumes database to exist\n## Description\r\nThe install.py script has a hard check on the hostname to check if we're using the docker created postgres container and only creates the databases if we do.\r\n\r\n`if DATABASES[database_key][\"HOST\"] == \"db\":`\r\n\r\nIn https://github.com/centerofci/mathesar/pull/747, we gave our containers persistent names. The db container's name was set as `mathesar_db`.\r\n\r\nThis leads to all fresh setup of mathesar to fail.\r\n\r\n## Expected behavior\r\nFollowing the instructions in readme should work without any error.\r\n\r\n## To Reproduce\r\n<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->\r\n* Stop mathesar, if you have it running\r\n* Remove your database\r\n* Start mathesar and run migrations and install.py script \n", "before_files": [{"content": "\"\"\"\nThis script installs functions and types for Mathesar onto the configured DB.\n\"\"\"\nfrom config.settings import DATABASES\nfrom db import install\nimport sys\nimport getopt\n\n\ndef main():\n skip_confirm = False\n (opts, _) = getopt.getopt(sys.argv[1:], \":s\", [\"skip-confirm\"])\n for (opt, value) in opts:\n if (opt == \"-s\") or (opt == \"--skip-confirm\"):\n skip_confirm = True\n for database_key in [key for key in DATABASES if key != \"default\"]:\n install_on_db_with_key(database_key, skip_confirm)\n\n\ndef install_on_db_with_key(database_key, skip_confirm):\n if DATABASES[database_key][\"HOST\"] == \"db\":\n # if we're going to install on the docker-created Postgres, we'll\n # create the DB\n print(\"Creating Mathesar DB on docker-created PostgreSQL instance\")\n install.create_mathesar_database(\n DATABASES[database_key][\"NAME\"],\n DATABASES[\"default\"][\"USER\"],\n DATABASES[\"default\"][\"PASSWORD\"],\n DATABASES[\"default\"][\"HOST\"],\n DATABASES[\"default\"][\"NAME\"],\n DATABASES[\"default\"][\"PORT\"],\n )\n print(f\"Created DB is {DATABASES['mathesar_tables']['NAME']}\")\n else:\n # if we're installing anywhere else, we require the DB to exist in\n # advance.\n username = DATABASES[database_key][\"USER\"]\n password = DATABASES[database_key][\"PASSWORD\"]\n host = DATABASES[database_key][\"HOST\"]\n db_name = DATABASES[database_key][\"NAME\"]\n port = DATABASES[database_key][\"PORT\"]\n print(\"Installing Mathesar DB on preexisting PostgreSQL instance...\")\n if skip_confirm is False:\n confirmation = input(\n f\"Mathesar will be installed on DB {db_name} at host {host}.\"\n \"Confirm? 
(y/n) > \"\n )\n if (confirmation.lower() in [\"y\", \"yes\"]) or (skip_confirm is True):\n print(\"Installing...\")\n install.install_mathesar_on_preexisting_database(\n username,\n password,\n host,\n db_name,\n port,\n )\n else:\n print(\"Skipping DB with key {database_key}.\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "install.py"}], "after_files": [{"content": "\"\"\"\nThis script installs functions and types for Mathesar onto the configured DB.\n\"\"\"\nfrom config.settings import DATABASES\nfrom db import install\nimport sys\nimport getopt\n\n\ndef main():\n skip_confirm = False\n (opts, _) = getopt.getopt(sys.argv[1:], \":s\", [\"skip-confirm\"])\n for (opt, value) in opts:\n if (opt == \"-s\") or (opt == \"--skip-confirm\"):\n skip_confirm = True\n for database_key in [key for key in DATABASES if key != \"default\"]:\n install_on_db_with_key(database_key, skip_confirm)\n\n\ndef install_on_db_with_key(database_key, skip_confirm):\n if DATABASES[database_key][\"HOST\"] == \"mathesar_db\":\n # if we're going to install on the docker-created Postgres, we'll\n # create the DB\n print(\"Creating Mathesar DB on docker-created PostgreSQL instance\")\n install.create_mathesar_database(\n DATABASES[database_key][\"NAME\"],\n DATABASES[\"default\"][\"USER\"],\n DATABASES[\"default\"][\"PASSWORD\"],\n DATABASES[\"default\"][\"HOST\"],\n DATABASES[\"default\"][\"NAME\"],\n DATABASES[\"default\"][\"PORT\"],\n )\n print(f\"Created DB is {DATABASES['mathesar_tables']['NAME']}\")\n else:\n # if we're installing anywhere else, we require the DB to exist in\n # advance.\n username = DATABASES[database_key][\"USER\"]\n password = DATABASES[database_key][\"PASSWORD\"]\n host = DATABASES[database_key][\"HOST\"]\n db_name = DATABASES[database_key][\"NAME\"]\n port = DATABASES[database_key][\"PORT\"]\n print(\"Installing Mathesar DB on preexisting PostgreSQL instance...\")\n if skip_confirm is False:\n confirmation = input(\n f\"Mathesar will be installed on DB {db_name} at host {host}.\"\n \"Confirm? (y/n) > \"\n )\n if (confirmation.lower() in [\"y\", \"yes\"]) or (skip_confirm is True):\n print(\"Installing...\")\n install.install_mathesar_on_preexisting_database(\n username,\n password,\n host,\n db_name,\n port,\n )\n else:\n print(\"Skipping DB with key {database_key}.\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "install.py"}]}
| 1,054 | 114 |
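
The accepted fix above is deliberately minimal: it updates the hard-coded hostname literal from "db" to "mathesar_db" to match the renamed container. A more change-resistant variant would treat the Docker hostname as configuration instead of a literal. The sketch below is only an illustration of that idea and is not part of the Mathesar patch; the `MATHESAR_DOCKER_DB_HOST` environment variable is invented here for the example:

```python
import os

# Hypothetical: accept the known Docker service names, plus an optional
# override from the environment, so a container rename does not silently
# break fresh installs again.
DOCKER_DB_HOSTS = {"db", "mathesar_db", os.environ.get("MATHESAR_DOCKER_DB_HOST")}

def is_docker_created_db(host):
    return host in DOCKER_DB_HOSTS

print(is_docker_created_db("mathesar_db"))                   # True
print(is_docker_created_db("managed-postgres.example.com"))  # False
```
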
gh_patches_debug_1109
|
rasdani/github-patches
|
git_diff
|
flairNLP__flair-435
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot install allennlp due to matplotlib dependency conflict
Hello, thanks for the great package. I want to play with ELMoEmbeddings, which requires package allennlp, not installed by default with Flair. However, installing latest allennlp fails because it requires matplotlib==2.2.3, while Flair requires >=3.0.0. When I install allennlp==0.7.0 (without the matplotlib requirement), it still fails because of PyTorch conflicts. Makes me wonder what versions I should use that are actually compatible?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup, find_packages
2
3 setup(
4 name='flair',
5 version='0.4.0',
6 description='A very simple framework for state-of-the-art NLP',
7 long_description=open("README.md", encoding='utf-8').read(),
8 long_description_content_type="text/markdown",
9 author='Alan Akbik',
10 author_email='[email protected]',
11 url='https://github.com/zalandoresearch/flair',
12 packages=find_packages(exclude='test'), # same as name
13 license='MIT',
14 install_requires=[
15 'torch>=1.0.0',
16 'gensim>=3.4.0',
17 'tqdm>=4.26.0',
18 'segtok>=1.5.7',
19 'matplotlib>=3.0.0',
20 'mpld3>=0.3',
21 'sklearn',
22 'sqlitedict>=1.6.0',
23 'deprecated>=1.2.4',
24 'hyperopt>=0.1.1',
25 'pytorch-pretrained-bert>=0.3.0'
26 ],
27 include_package_data=True,
28 python_requires='>=3.6',
29 )
30
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,7 +16,7 @@
'gensim>=3.4.0',
'tqdm>=4.26.0',
'segtok>=1.5.7',
- 'matplotlib>=3.0.0',
+ 'matplotlib>=2.2.3',
'mpld3>=0.3',
'sklearn',
'sqlitedict>=1.6.0',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,7 +16,7 @@\n 'gensim>=3.4.0',\n 'tqdm>=4.26.0',\n 'segtok>=1.5.7',\n- 'matplotlib>=3.0.0',\n+ 'matplotlib>=2.2.3',\n 'mpld3>=0.3',\n 'sklearn',\n 'sqlitedict>=1.6.0',\n", "issue": "Cannot install allennlp due to matplotlib dependency conflict\nHello, thanks for the great package. I want to play with ELMoEmbeddings, which requires package allennlp, not installed by default with Flair. However, installing latest allennlp fails because it requires matplotlib==2.2.3, while Flair requires >=3.0.0. When I install allennlp==0.7.0 (without the matplotlib requirement), it still fails because of PyTorch conflicts. Makes me wonder what versions I should use that are actually compatible?\r\n\n", "before_files": [{"content": "from setuptools import setup, find_packages\n\nsetup(\n name='flair',\n version='0.4.0',\n description='A very simple framework for state-of-the-art NLP',\n long_description=open(\"README.md\", encoding='utf-8').read(),\n long_description_content_type=\"text/markdown\",\n author='Alan Akbik',\n author_email='[email protected]',\n url='https://github.com/zalandoresearch/flair',\n packages=find_packages(exclude='test'), # same as name\n license='MIT',\n install_requires=[\n 'torch>=1.0.0',\n 'gensim>=3.4.0',\n 'tqdm>=4.26.0',\n 'segtok>=1.5.7',\n 'matplotlib>=3.0.0',\n 'mpld3>=0.3',\n 'sklearn',\n 'sqlitedict>=1.6.0',\n 'deprecated>=1.2.4',\n 'hyperopt>=0.1.1',\n 'pytorch-pretrained-bert>=0.3.0'\n ],\n include_package_data=True,\n python_requires='>=3.6',\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\n\nsetup(\n name='flair',\n version='0.4.0',\n description='A very simple framework for state-of-the-art NLP',\n long_description=open(\"README.md\", encoding='utf-8').read(),\n long_description_content_type=\"text/markdown\",\n author='Alan Akbik',\n author_email='[email protected]',\n url='https://github.com/zalandoresearch/flair',\n packages=find_packages(exclude='test'), # same as name\n license='MIT',\n install_requires=[\n 'torch>=1.0.0',\n 'gensim>=3.4.0',\n 'tqdm>=4.26.0',\n 'segtok>=1.5.7',\n 'matplotlib>=2.2.3',\n 'mpld3>=0.3',\n 'sklearn',\n 'sqlitedict>=1.6.0',\n 'deprecated>=1.2.4',\n 'hyperopt>=0.1.1',\n 'pytorch-pretrained-bert>=0.3.0'\n ],\n include_package_data=True,\n python_requires='>=3.6',\n)\n", "path": "setup.py"}]}
| 689 | 117 |
gh_patches_debug_11244
|
rasdani/github-patches
|
git_diff
|
scikit-hep__awkward-1401
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`with_name` fails for `ak._v2.Record`
### Version of Awkward Array
278dccee
### Description and code to reproduce
```python
import awkward._v2 as ak
ak.Record({'x': 10.0}, with_name="X")
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/awkward/_v2/record.py`
Content:
```
1 # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
2
3 import copy
4 from collections.abc import Iterable
5
6 import awkward as ak
7 from awkward._v2.contents.content import Content
8
9 np = ak.nplike.NumpyMetadata.instance()
10
11
12 class Record:
13 def __init__(self, array, at):
14 if not isinstance(array, ak._v2.contents.recordarray.RecordArray):
15 raise ak._v2._util.error(
16 TypeError(f"Record 'array' must be a RecordArray, not {array!r}")
17 )
18 if not ak._util.isint(at):
19 raise ak._v2._util.error(
20 TypeError(f"Record 'at' must be an integer, not {array!r}")
21 )
22 if at < 0 or at >= array.length:
23 raise ak._v2._util.error(
24 ValueError(
25 f"Record 'at' must be >= 0 and < len(array) == {array.length}, not {at}"
26 )
27 )
28 else:
29 self._array = array
30 self._at = at
31
32 @property
33 def array(self):
34 return self._array
35
36 @property
37 def at(self):
38 return self._at
39
40 @property
41 def fields(self):
42 return self._array.fields
43
44 @property
45 def is_tuple(self):
46 return self._array.is_tuple
47
48 @property
49 def as_tuple(self):
50 return Record(self._array.as_tuple, self._at)
51
52 @property
53 def contents(self):
54 out = []
55 for field in self._array.fields:
56 out.append(self._array[field][self._at])
57 return out
58
59 def content(self, index_or_field):
60 return self._array.content(index_or_field)[self._at]
61
62 def __repr__(self):
63 return self._repr("", "", "")
64
65 def _repr(self, indent, pre, post):
66 out = [indent, pre, "<Record at="]
67 out.append(repr(str(self._at)))
68 out.append(">\n")
69 out.append(self._array._repr(indent + " ", "<array>", "</array>\n"))
70 out.append(indent)
71 out.append("</Record>")
72 out.append(post)
73 return "".join(out)
74
75 def validityerror(self, path="layout.array"):
76 return self._array.validityerror(path)
77
78 @property
79 def parameters(self):
80 return self._array.parameters
81
82 def parameter(self, key):
83 return self._array.parameter(key)
84
85 def purelist_parameter(self, key):
86 return self._array.purelist_parameter(key)
87
88 @property
89 def purelist_isregular(self):
90 return self._array.purelist_isregular
91
92 @property
93 def purelist_depth(self):
94 return 0
95
96 @property
97 def minmax_depth(self):
98 mindepth, maxdepth = self._array.minmax_depth
99 return mindepth - 1, maxdepth - 1
100
101 @property
102 def branch_depth(self):
103 branch, depth = self._array.branch_depth
104 return branch, depth - 1
105
106 def __getitem__(self, where):
107 with ak._v2._util.SlicingErrorContext(self, where):
108 return self._getitem(where)
109
110 def _getitem(self, where):
111 if ak._util.isint(where):
112 raise ak._v2._util.error(
113 IndexError("scalar Record cannot be sliced by an integer")
114 )
115
116 elif isinstance(where, slice):
117 raise ak._v2._util.error(
118 IndexError("scalar Record cannot be sliced by a range slice (`:`)")
119 )
120
121 elif ak._util.isstr(where):
122 return self._getitem_field(where)
123
124 elif where is np.newaxis:
125 raise ak._v2._util.error(
126 IndexError("scalar Record cannot be sliced by np.newaxis (`None`)")
127 )
128
129 elif where is Ellipsis:
130 raise ak._v2._util.error(
131 IndexError("scalar Record cannot be sliced by an ellipsis (`...`)")
132 )
133
134 elif isinstance(where, tuple) and len(where) == 0:
135 return self
136
137 elif isinstance(where, tuple) and len(where) == 1:
138 return self._getitem(where[0])
139
140 elif isinstance(where, tuple) and ak._util.isstr(where[0]):
141 return self._getitem_field(where[0])._getitem(where[1:])
142
143 elif isinstance(where, ak.highlevel.Array):
144 raise ak._v2._util.error(
145 IndexError("scalar Record cannot be sliced by an array")
146 )
147
148 elif isinstance(where, ak.layout.Content):
149 raise ak._v2._util.error(
150 IndexError("scalar Record cannot be sliced by an array")
151 )
152
153 elif isinstance(where, Content):
154 raise ak._v2._util.error(
155 IndexError("scalar Record cannot be sliced by an array")
156 )
157
158 elif isinstance(where, Iterable) and all(ak._util.isstr(x) for x in where):
159 return self._getitem_fields(where)
160
161 elif isinstance(where, Iterable):
162 raise ak._v2._util.error(
163 IndexError("scalar Record cannot be sliced by an array")
164 )
165
166 else:
167 raise ak._v2._util.error(
168 TypeError(
169 "only field name (str) or names (non-tuple iterable of str) "
170 "are valid indices for slicing a scalar record, not\n\n "
171 + repr(where)
172 )
173 )
174
175 def _getitem_field(self, where):
176 return self._array._getitem_field(where)._getitem_at(self._at)
177
178 def _getitem_fields(self, where):
179 return self._array._getitem_fields(where)._getitem_at(self._at)
180
181 def packed(self):
182 if self._array.length == 1:
183 return Record(self._array.packed(), self._at)
184 else:
185 return Record(self._array[self._at : self._at + 1].packed(), 0)
186
187 def to_list(self, behavior=None):
188 cls = ak._v2._util.recordclass(self._array, behavior)
189 if cls is not ak._v2.highlevel.Record:
190 return cls(self)
191
192 return self._array[self._at : self._at + 1].to_list(behavior)[0]
193
194 def deep_copy(self):
195 return Record(self._array.deep_copy(), copy.deepcopy(self._at))
196
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/awkward/_v2/record.py b/src/awkward/_v2/record.py
--- a/src/awkward/_v2/record.py
+++ b/src/awkward/_v2/record.py
@@ -193,3 +193,29 @@
def deep_copy(self):
return Record(self._array.deep_copy(), copy.deepcopy(self._at))
+
+ def recursively_apply(
+ self,
+ action,
+ depth_context=None,
+ lateral_context=None,
+ keep_parameters=True,
+ numpy_to_regular=True,
+ return_array=True,
+ function_name=None,
+ ):
+
+ out = self._array.recursively_apply(
+ action,
+ depth_context,
+ lateral_context,
+ keep_parameters,
+ numpy_to_regular,
+ return_array,
+ function_name,
+ )
+
+ if return_array:
+ return Record(out, self._at)
+ else:
+ return None
|
{"golden_diff": "diff --git a/src/awkward/_v2/record.py b/src/awkward/_v2/record.py\n--- a/src/awkward/_v2/record.py\n+++ b/src/awkward/_v2/record.py\n@@ -193,3 +193,29 @@\n \n def deep_copy(self):\n return Record(self._array.deep_copy(), copy.deepcopy(self._at))\n+\n+ def recursively_apply(\n+ self,\n+ action,\n+ depth_context=None,\n+ lateral_context=None,\n+ keep_parameters=True,\n+ numpy_to_regular=True,\n+ return_array=True,\n+ function_name=None,\n+ ):\n+\n+ out = self._array.recursively_apply(\n+ action,\n+ depth_context,\n+ lateral_context,\n+ keep_parameters,\n+ numpy_to_regular,\n+ return_array,\n+ function_name,\n+ )\n+\n+ if return_array:\n+ return Record(out, self._at)\n+ else:\n+ return None\n", "issue": "`with_name` fails for `ak._v2.Record`\n### Version of Awkward Array\n\n278dccee\n\n### Description and code to reproduce\n\n```python\r\nimport awkward._v2 as ak\r\n\r\nak.Record({'x': 10.0}, with_name=\"X\")\r\n```\n", "before_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\nimport copy\nfrom collections.abc import Iterable\n\nimport awkward as ak\nfrom awkward._v2.contents.content import Content\n\nnp = ak.nplike.NumpyMetadata.instance()\n\n\nclass Record:\n def __init__(self, array, at):\n if not isinstance(array, ak._v2.contents.recordarray.RecordArray):\n raise ak._v2._util.error(\n TypeError(f\"Record 'array' must be a RecordArray, not {array!r}\")\n )\n if not ak._util.isint(at):\n raise ak._v2._util.error(\n TypeError(f\"Record 'at' must be an integer, not {array!r}\")\n )\n if at < 0 or at >= array.length:\n raise ak._v2._util.error(\n ValueError(\n f\"Record 'at' must be >= 0 and < len(array) == {array.length}, not {at}\"\n )\n )\n else:\n self._array = array\n self._at = at\n\n @property\n def array(self):\n return self._array\n\n @property\n def at(self):\n return self._at\n\n @property\n def fields(self):\n return self._array.fields\n\n @property\n def is_tuple(self):\n return self._array.is_tuple\n\n @property\n def as_tuple(self):\n return Record(self._array.as_tuple, self._at)\n\n @property\n def contents(self):\n out = []\n for field in self._array.fields:\n out.append(self._array[field][self._at])\n return out\n\n def content(self, index_or_field):\n return self._array.content(index_or_field)[self._at]\n\n def __repr__(self):\n return self._repr(\"\", \"\", \"\")\n\n def _repr(self, indent, pre, post):\n out = [indent, pre, \"<Record at=\"]\n out.append(repr(str(self._at)))\n out.append(\">\\n\")\n out.append(self._array._repr(indent + \" \", \"<array>\", \"</array>\\n\"))\n out.append(indent)\n out.append(\"</Record>\")\n out.append(post)\n return \"\".join(out)\n\n def validityerror(self, path=\"layout.array\"):\n return self._array.validityerror(path)\n\n @property\n def parameters(self):\n return self._array.parameters\n\n def parameter(self, key):\n return self._array.parameter(key)\n\n def purelist_parameter(self, key):\n return self._array.purelist_parameter(key)\n\n @property\n def purelist_isregular(self):\n return self._array.purelist_isregular\n\n @property\n def purelist_depth(self):\n return 0\n\n @property\n def minmax_depth(self):\n mindepth, maxdepth = self._array.minmax_depth\n return mindepth - 1, maxdepth - 1\n\n @property\n def branch_depth(self):\n branch, depth = self._array.branch_depth\n return branch, depth - 1\n\n def __getitem__(self, where):\n with ak._v2._util.SlicingErrorContext(self, where):\n return self._getitem(where)\n\n def _getitem(self, where):\n if 
ak._util.isint(where):\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by an integer\")\n )\n\n elif isinstance(where, slice):\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by a range slice (`:`)\")\n )\n\n elif ak._util.isstr(where):\n return self._getitem_field(where)\n\n elif where is np.newaxis:\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by np.newaxis (`None`)\")\n )\n\n elif where is Ellipsis:\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by an ellipsis (`...`)\")\n )\n\n elif isinstance(where, tuple) and len(where) == 0:\n return self\n\n elif isinstance(where, tuple) and len(where) == 1:\n return self._getitem(where[0])\n\n elif isinstance(where, tuple) and ak._util.isstr(where[0]):\n return self._getitem_field(where[0])._getitem(where[1:])\n\n elif isinstance(where, ak.highlevel.Array):\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by an array\")\n )\n\n elif isinstance(where, ak.layout.Content):\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by an array\")\n )\n\n elif isinstance(where, Content):\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by an array\")\n )\n\n elif isinstance(where, Iterable) and all(ak._util.isstr(x) for x in where):\n return self._getitem_fields(where)\n\n elif isinstance(where, Iterable):\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by an array\")\n )\n\n else:\n raise ak._v2._util.error(\n TypeError(\n \"only field name (str) or names (non-tuple iterable of str) \"\n \"are valid indices for slicing a scalar record, not\\n\\n \"\n + repr(where)\n )\n )\n\n def _getitem_field(self, where):\n return self._array._getitem_field(where)._getitem_at(self._at)\n\n def _getitem_fields(self, where):\n return self._array._getitem_fields(where)._getitem_at(self._at)\n\n def packed(self):\n if self._array.length == 1:\n return Record(self._array.packed(), self._at)\n else:\n return Record(self._array[self._at : self._at + 1].packed(), 0)\n\n def to_list(self, behavior=None):\n cls = ak._v2._util.recordclass(self._array, behavior)\n if cls is not ak._v2.highlevel.Record:\n return cls(self)\n\n return self._array[self._at : self._at + 1].to_list(behavior)[0]\n\n def deep_copy(self):\n return Record(self._array.deep_copy(), copy.deepcopy(self._at))\n", "path": "src/awkward/_v2/record.py"}], "after_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\nimport copy\nfrom collections.abc import Iterable\n\nimport awkward as ak\nfrom awkward._v2.contents.content import Content\n\nnp = ak.nplike.NumpyMetadata.instance()\n\n\nclass Record:\n def __init__(self, array, at):\n if not isinstance(array, ak._v2.contents.recordarray.RecordArray):\n raise ak._v2._util.error(\n TypeError(f\"Record 'array' must be a RecordArray, not {array!r}\")\n )\n if not ak._util.isint(at):\n raise ak._v2._util.error(\n TypeError(f\"Record 'at' must be an integer, not {array!r}\")\n )\n if at < 0 or at >= array.length:\n raise ak._v2._util.error(\n ValueError(\n f\"Record 'at' must be >= 0 and < len(array) == {array.length}, not {at}\"\n )\n )\n else:\n self._array = array\n self._at = at\n\n @property\n def array(self):\n return self._array\n\n @property\n def at(self):\n return self._at\n\n @property\n def fields(self):\n return self._array.fields\n\n @property\n def is_tuple(self):\n return self._array.is_tuple\n\n 
@property\n def as_tuple(self):\n return Record(self._array.as_tuple, self._at)\n\n @property\n def contents(self):\n out = []\n for field in self._array.fields:\n out.append(self._array[field][self._at])\n return out\n\n def content(self, index_or_field):\n return self._array.content(index_or_field)[self._at]\n\n def __repr__(self):\n return self._repr(\"\", \"\", \"\")\n\n def _repr(self, indent, pre, post):\n out = [indent, pre, \"<Record at=\"]\n out.append(repr(str(self._at)))\n out.append(\">\\n\")\n out.append(self._array._repr(indent + \" \", \"<array>\", \"</array>\\n\"))\n out.append(indent)\n out.append(\"</Record>\")\n out.append(post)\n return \"\".join(out)\n\n def validityerror(self, path=\"layout.array\"):\n return self._array.validityerror(path)\n\n @property\n def parameters(self):\n return self._array.parameters\n\n def parameter(self, key):\n return self._array.parameter(key)\n\n def purelist_parameter(self, key):\n return self._array.purelist_parameter(key)\n\n @property\n def purelist_isregular(self):\n return self._array.purelist_isregular\n\n @property\n def purelist_depth(self):\n return 0\n\n @property\n def minmax_depth(self):\n mindepth, maxdepth = self._array.minmax_depth\n return mindepth - 1, maxdepth - 1\n\n @property\n def branch_depth(self):\n branch, depth = self._array.branch_depth\n return branch, depth - 1\n\n def __getitem__(self, where):\n with ak._v2._util.SlicingErrorContext(self, where):\n return self._getitem(where)\n\n def _getitem(self, where):\n if ak._util.isint(where):\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by an integer\")\n )\n\n elif isinstance(where, slice):\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by a range slice (`:`)\")\n )\n\n elif ak._util.isstr(where):\n return self._getitem_field(where)\n\n elif where is np.newaxis:\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by np.newaxis (`None`)\")\n )\n\n elif where is Ellipsis:\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by an ellipsis (`...`)\")\n )\n\n elif isinstance(where, tuple) and len(where) == 0:\n return self\n\n elif isinstance(where, tuple) and len(where) == 1:\n return self._getitem(where[0])\n\n elif isinstance(where, tuple) and ak._util.isstr(where[0]):\n return self._getitem_field(where[0])._getitem(where[1:])\n\n elif isinstance(where, ak.highlevel.Array):\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by an array\")\n )\n\n elif isinstance(where, ak.layout.Content):\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by an array\")\n )\n\n elif isinstance(where, Content):\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by an array\")\n )\n\n elif isinstance(where, Iterable) and all(ak._util.isstr(x) for x in where):\n return self._getitem_fields(where)\n\n elif isinstance(where, Iterable):\n raise ak._v2._util.error(\n IndexError(\"scalar Record cannot be sliced by an array\")\n )\n\n else:\n raise ak._v2._util.error(\n TypeError(\n \"only field name (str) or names (non-tuple iterable of str) \"\n \"are valid indices for slicing a scalar record, not\\n\\n \"\n + repr(where)\n )\n )\n\n def _getitem_field(self, where):\n return self._array._getitem_field(where)._getitem_at(self._at)\n\n def _getitem_fields(self, where):\n return self._array._getitem_fields(where)._getitem_at(self._at)\n\n def packed(self):\n if self._array.length == 1:\n return Record(self._array.packed(), 
self._at)\n else:\n return Record(self._array[self._at : self._at + 1].packed(), 0)\n\n def to_list(self, behavior=None):\n cls = ak._v2._util.recordclass(self._array, behavior)\n if cls is not ak._v2.highlevel.Record:\n return cls(self)\n\n return self._array[self._at : self._at + 1].to_list(behavior)[0]\n\n def deep_copy(self):\n return Record(self._array.deep_copy(), copy.deepcopy(self._at))\n\n def recursively_apply(\n self,\n action,\n depth_context=None,\n lateral_context=None,\n keep_parameters=True,\n numpy_to_regular=True,\n return_array=True,\n function_name=None,\n ):\n\n out = self._array.recursively_apply(\n action,\n depth_context,\n lateral_context,\n keep_parameters,\n numpy_to_regular,\n return_array,\n function_name,\n )\n\n if return_array:\n return Record(out, self._at)\n else:\n return None\n", "path": "src/awkward/_v2/record.py"}]}
| 2,195 | 220 |
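
The awkward-array patch above gives the low-level `ak._v2.record.Record` a `recursively_apply` method that simply delegates to the underlying `RecordArray`, which appears to be the hook the `with_name` path relies on. With the patch applied, the reproducer from the issue is expected to construct without raising; the final line below, which reads the `__record__` parameter back off the layout, is a sketch written against the v2 API of that era and assumes the high-level record exposes `.layout`:

```python
import awkward._v2 as ak

# The reproducer from the issue: constructing a named scalar record.
rec = ak.Record({"x": 10.0}, with_name="X")

# with_name should now be applied through recursively_apply, so the record
# type is expected to carry the "__record__" parameter.
print(rec.layout.parameter("__record__"))  # expected: "X"
```
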
gh_patches_debug_35917
|
rasdani/github-patches
|
git_diff
|
lutris__lutris-1429
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Lutris takes extremely long time to parse very long string keys in registry
If user.reg contains a very long (we are talking megabytes here) string key, it takes a very long time to start anything in that wineprefix with Lutris. With a ~15MB key, Lutris was taking ~30 minutes. It happens before starting Wine, in the "preparing to start" phase.
P.S. If you ask how this happens: BattleTech by HBS writes some analytics into a string key in the registry, and it only ever appends to it, never cleans it up.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lutris/util/wine/registry.py`
Content:
```
1 import os
2 import re
3 from collections import OrderedDict
4 from datetime import datetime
5 from lutris.util.log import logger
6 from lutris.util import system
7 from lutris.util.wine.wine import WINE_DEFAULT_ARCH
8
9 (
10 REG_NONE,
11 REG_SZ,
12 REG_EXPAND_SZ,
13 REG_BINARY,
14 REG_DWORD,
15 REG_DWORD_BIG_ENDIAN,
16 REG_LINK,
17 REG_MULTI_SZ,
18 ) = range(8)
19
20 DATA_TYPES = {
21 '"': REG_SZ,
22 'str:"': REG_SZ,
23 'str(2):"': REG_EXPAND_SZ,
24 'str(7):"': REG_MULTI_SZ,
25 "hex": REG_BINARY,
26 "dword": REG_DWORD,
27 }
28
29
30 class WindowsFileTime:
31 """Utility class to deal with Windows FILETIME structures.
32
33 See: https://msdn.microsoft.com/en-us/library/ms724284(v=vs.85).aspx
34 """
35
36 ticks_per_seconds = 10000000 # 1 tick every 100 nanoseconds
37 epoch_delta = 11644473600 # 3600 * 24 * ((1970 - 1601) * 365 + 89)
38
39 def __init__(self, timestamp=None):
40 self.timestamp = timestamp
41
42 def __repr__(self):
43 return "<{}>: {}".format(self.__class__.__name__, self.timestamp)
44
45 @classmethod
46 def from_hex(cls, hexvalue):
47 timestamp = int(hexvalue, 16)
48 return WindowsFileTime(timestamp)
49
50 def to_hex(self):
51 return "{:x}".format(self.timestamp)
52
53 @classmethod
54 def from_unix_timestamp(cls, timestamp):
55 timestamp = timestamp + cls.epoch_delta
56 timestamp = int(timestamp * cls.ticks_per_seconds)
57 return WindowsFileTime(timestamp)
58
59 def to_unix_timestamp(self):
60 if not self.timestamp:
61 raise ValueError("No timestamp set")
62 unix_ts = self.timestamp / self.ticks_per_seconds
63 unix_ts = unix_ts - self.epoch_delta
64 return unix_ts
65
66 def to_date_time(self):
67 return datetime.fromtimestamp(self.to_unix_timestamp())
68
69
70 class WineRegistry:
71 version_header = "WINE REGISTRY Version "
72 relative_to_header = ";; All keys relative to "
73
74 def __init__(self, reg_filename=None):
75 self.arch = WINE_DEFAULT_ARCH
76 self.version = 2
77 self.relative_to = "\\\\User\\\\S-1-5-21-0-0-0-1000"
78 self.keys = OrderedDict()
79 self.reg_filename = reg_filename
80 if reg_filename:
81 if not system.path_exists(reg_filename):
82 logger.error("Unexisting registry %s", reg_filename)
83 self.parse_reg_file(reg_filename)
84
85 @property
86 def prefix_path(self):
87 """Return the Wine prefix path (where the .reg files are located)"""
88 if self.reg_filename:
89 return os.path.dirname(self.reg_filename)
90
91 @staticmethod
92 def get_raw_registry(reg_filename):
93 """Return an array of the unprocessed contents of a registry file"""
94 if not system.path_exists(reg_filename):
95 return []
96 with open(reg_filename, "r") as reg_file:
97
98 try:
99 registry_content = reg_file.readlines()
100 except Exception: # pylint: disable=broad-except
101 logger.exception(
102 "Failed to registry read %s, please send attach this file in a bug report",
103 reg_filename
104 )
105 registry_content = []
106 return registry_content
107
108 def parse_reg_file(self, reg_filename):
109 registry_lines = self.get_raw_registry(reg_filename)
110 current_key = None
111 add_next_to_value = False
112 for line in registry_lines:
113 line = line.rstrip("\n") # Remove trailing newlines
114
115 if line.startswith(self.version_header):
116 self.version = int(line[len(self.version_header):])
117 continue
118
119 if line.startswith(self.relative_to_header):
120 self.relative_to = line[len(self.relative_to_header):]
121 continue
122
123 if line.startswith("#arch"):
124 self.arch = line.split("=")[1]
125 continue
126
127 if line.startswith("["):
128 current_key = WineRegistryKey(key_def=line)
129 self.keys[current_key.name] = current_key
130 continue
131
132 if current_key:
133 if add_next_to_value:
134 current_key.add_to_last(line)
135 else:
136 current_key.parse(line)
137 add_next_to_value = line.endswith("\\")
138
139 def render(self):
140 content = "{}{}\n".format(self.version_header, self.version)
141 content += "{}{}\n\n".format(self.relative_to_header, self.relative_to)
142 content += "#arch={}\n".format(self.arch)
143 for key in self.keys:
144 content += "\n"
145 content += self.keys[key].render()
146 return content
147
148 def save(self, path=None):
149 """Write the registry to a file"""
150 if not path:
151 path = self.reg_filename
152 if not path:
153 raise OSError("No filename provided")
154 with open(path, "w") as registry_file:
155 registry_file.write(self.render())
156
157 def query(self, path, subkey):
158 key = self.keys.get(path)
159 if key:
160 return key.get_subkey(subkey)
161
162 def set_value(self, path, subkey, value):
163 key = self.keys.get(path)
164 if not key:
165 key = WineRegistryKey(path=path)
166 self.keys[key.name] = key
167 key.set_subkey(subkey, value)
168
169 def clear_key(self, path):
170 """Removes all subkeys from a key"""
171 key = self.keys.get(path)
172 if not key:
173 return
174 key.subkeys.clear()
175
176 def clear_subkeys(self, path, keys):
177 """Remove some subkeys from a key"""
178 key = self.keys.get(path)
179 if not key:
180 return
181 for subkey in list(key.subkeys.keys()):
182 if subkey not in keys:
183 continue
184 key.subkeys.pop(subkey)
185
186 def get_unix_path(self, windows_path):
187 windows_path = windows_path.replace("\\\\", "/")
188 if not self.prefix_path:
189 return
190 drives_path = os.path.join(self.prefix_path, "dosdevices")
191 if not system.path_exists(drives_path):
192 return
193 letter, relpath = windows_path.split(":", 1)
194 relpath = relpath.strip("/")
195 drive_link = os.path.join(drives_path, letter.lower() + ":")
196 try:
197 drive_path = os.readlink(drive_link)
198 except FileNotFoundError:
199 logger.error("Unable to read link for %s", drive_link)
200 return
201
202 if not os.path.isabs(drive_path):
203 drive_path = os.path.join(drives_path, drive_path)
204 return os.path.join(drive_path, relpath)
205
206
207 class WineRegistryKey:
208 def __init__(self, key_def=None, path=None):
209
210 self.subkeys = OrderedDict()
211 self.metas = OrderedDict()
212
213 if path:
214 # Key is created by path, it's a new key
215 timestamp = datetime.now().timestamp()
216 self.name = path
217 self.raw_name = "[{}]".format(path.replace("/", "\\\\"))
218 self.raw_timestamp = " ".join(str(timestamp).split("."))
219
220 windows_timestamp = WindowsFileTime.from_unix_timestamp(timestamp)
221 self.metas["time"] = windows_timestamp.to_hex()
222 else:
223 # Existing key loaded from file
224 self.raw_name, self.raw_timestamp = re.split(
225 re.compile(r"(?<=[^\\]\]) "), key_def, maxsplit=1
226 )
227 self.name = self.raw_name.replace("\\\\", "/").strip("[]")
228
229 # Parse timestamp either as int or float
230 ts_parts = self.raw_timestamp.strip().split()
231 if len(ts_parts) == 1:
232 self.timestamp = int(ts_parts[0])
233 else:
234 self.timestamp = float("{}.{}".format(ts_parts[0], ts_parts[1]))
235
236 def __str__(self):
237 return "{0} {1}".format(self.raw_name, self.raw_timestamp)
238
239 def parse(self, line):
240 """Parse a registry line, populating meta and subkeys"""
241 if len(line) < 4:
242 # Line is too short, nothing to parse
243 return
244
245 if line.startswith("#"):
246 self.add_meta(line)
247 elif line.startswith('"'):
248 try:
249 key, value = re.split(re.compile(r"(?<![^\\]\\\")="), line, maxsplit=1)
250 except ValueError as ex:
251 logger.error("Unable to parse line %s", line)
252 logger.exception(ex)
253 return
254 key = key[1:-1]
255 self.subkeys[key] = value
256 elif line.startswith("@"):
257 key, value = line.split("=", 1)
258 self.subkeys["default"] = value
259
260 def add_to_last(self, line):
261 last_subkey = list(self.subkeys.keys())[-1]
262 self.subkeys[last_subkey] += "\n{}".format(line)
263
264 def render(self):
265 """Return the content of the key in the wine .reg format"""
266 content = self.raw_name + " " + self.raw_timestamp + "\n"
267 for key, value in self.metas.items():
268 if value is None:
269 content += "#{}\n".format(key)
270 else:
271 content += "#{}={}\n".format(key, value)
272 for key, value in self.subkeys.items():
273 if key == "default":
274 key = "@"
275 else:
276 key = '"{}"'.format(key)
277 content += "{}={}\n".format(key, value)
278 return content
279
280 def render_value(self, value):
281 if isinstance(value, int):
282 return "dword:{:08x}".format(value)
283 if isinstance(value, str):
284 return '"{}"'.format(value)
285 raise NotImplementedError("TODO")
286
287 def add_meta(self, meta_line):
288 if not meta_line.startswith("#"):
289 raise ValueError("Key metas should start with '#'")
290 meta_line = meta_line[1:]
291 parts = meta_line.split("=")
292 if len(parts) == 2:
293 key = parts[0]
294 value = parts[1]
295 elif len(parts) == 1:
296 key = parts[0]
297 value = None
298 else:
299 raise ValueError("Invalid meta line '{}'".format(meta_line))
300 self.metas[key] = value
301
302 def get_meta(self, name):
303 return self.metas.get(name)
304
305 def set_subkey(self, name, value):
306 self.subkeys[name] = self.render_value(value)
307
308 def get_subkey(self, name):
309 if name not in self.subkeys:
310 return None
311 value = self.subkeys[name]
312 if value.startswith('"') and value.endswith('"'):
313 return value[1:-1]
314 if value.startswith("dword:"):
315 return int(value[6:], 16)
316 raise ValueError("Handle %s" % value)
317
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lutris/util/wine/registry.py b/lutris/util/wine/registry.py
--- a/lutris/util/wine/registry.py
+++ b/lutris/util/wine/registry.py
@@ -109,32 +109,29 @@
registry_lines = self.get_raw_registry(reg_filename)
current_key = None
add_next_to_value = False
+ additional_values = []
for line in registry_lines:
- line = line.rstrip("\n") # Remove trailing newlines
-
- if line.startswith(self.version_header):
- self.version = int(line[len(self.version_header):])
- continue
-
- if line.startswith(self.relative_to_header):
- self.relative_to = line[len(self.relative_to_header):]
- continue
-
- if line.startswith("#arch"):
- self.arch = line.split("=")[1]
- continue
+ line = line.rstrip("\n")
if line.startswith("["):
current_key = WineRegistryKey(key_def=line)
self.keys[current_key.name] = current_key
- continue
-
- if current_key:
+ elif current_key:
if add_next_to_value:
- current_key.add_to_last(line)
- else:
+ additional_values.append(line)
+ elif not add_next_to_value:
+ if additional_values:
+ additional_values = '\n'.join(additional_values)
+ current_key.add_to_last(additional_values)
+ additional_values = []
current_key.parse(line)
add_next_to_value = line.endswith("\\")
+ elif line.startswith(self.version_header):
+ self.version = int(line[len(self.version_header):])
+ elif line.startswith(self.relative_to_header):
+ self.relative_to = line[len(self.relative_to_header):]
+ elif line.startswith("#arch"):
+ self.arch = line.split("=")[1]
def render(self):
content = "{}{}\n".format(self.version_header, self.version)
@@ -258,7 +255,7 @@
self.subkeys["default"] = value
def add_to_last(self, line):
- last_subkey = list(self.subkeys.keys())[-1]
+ last_subkey = next(reversed(self.subkeys))
self.subkeys[last_subkey] += "\n{}".format(line)
def render(self):
|
{"golden_diff": "diff --git a/lutris/util/wine/registry.py b/lutris/util/wine/registry.py\n--- a/lutris/util/wine/registry.py\n+++ b/lutris/util/wine/registry.py\n@@ -109,32 +109,29 @@\n registry_lines = self.get_raw_registry(reg_filename)\n current_key = None\n add_next_to_value = False\n+ additional_values = []\n for line in registry_lines:\n- line = line.rstrip(\"\\n\") # Remove trailing newlines\n-\n- if line.startswith(self.version_header):\n- self.version = int(line[len(self.version_header):])\n- continue\n-\n- if line.startswith(self.relative_to_header):\n- self.relative_to = line[len(self.relative_to_header):]\n- continue\n-\n- if line.startswith(\"#arch\"):\n- self.arch = line.split(\"=\")[1]\n- continue\n+ line = line.rstrip(\"\\n\")\n \n if line.startswith(\"[\"):\n current_key = WineRegistryKey(key_def=line)\n self.keys[current_key.name] = current_key\n- continue\n-\n- if current_key:\n+ elif current_key:\n if add_next_to_value:\n- current_key.add_to_last(line)\n- else:\n+ additional_values.append(line)\n+ elif not add_next_to_value:\n+ if additional_values:\n+ additional_values = '\\n'.join(additional_values)\n+ current_key.add_to_last(additional_values)\n+ additional_values = []\n current_key.parse(line)\n add_next_to_value = line.endswith(\"\\\\\")\n+ elif line.startswith(self.version_header):\n+ self.version = int(line[len(self.version_header):])\n+ elif line.startswith(self.relative_to_header):\n+ self.relative_to = line[len(self.relative_to_header):]\n+ elif line.startswith(\"#arch\"):\n+ self.arch = line.split(\"=\")[1]\n \n def render(self):\n content = \"{}{}\\n\".format(self.version_header, self.version)\n@@ -258,7 +255,7 @@\n self.subkeys[\"default\"] = value\n \n def add_to_last(self, line):\n- last_subkey = list(self.subkeys.keys())[-1]\n+ last_subkey = next(reversed(self.subkeys))\n self.subkeys[last_subkey] += \"\\n{}\".format(line)\n \n def render(self):\n", "issue": "Lutris takes extremely long time to parse very long string keys in registry\nIf user.reg contains very long (we a talking megabytes long here) string key it takes a very long time to start anything in this wineprefix with lutris. With ~15MB key lutris was taking ~30 minutes. It happens before starting wine, in \"preparing to start\" phase.\r\n\r\nP.S. If you ask how does this happens - Battletech ny HBS writes some analytics into string key in registry. 
And it only adds to it, never cleans.\n", "before_files": [{"content": "import os\nimport re\nfrom collections import OrderedDict\nfrom datetime import datetime\nfrom lutris.util.log import logger\nfrom lutris.util import system\nfrom lutris.util.wine.wine import WINE_DEFAULT_ARCH\n\n(\n REG_NONE,\n REG_SZ,\n REG_EXPAND_SZ,\n REG_BINARY,\n REG_DWORD,\n REG_DWORD_BIG_ENDIAN,\n REG_LINK,\n REG_MULTI_SZ,\n) = range(8)\n\nDATA_TYPES = {\n '\"': REG_SZ,\n 'str:\"': REG_SZ,\n 'str(2):\"': REG_EXPAND_SZ,\n 'str(7):\"': REG_MULTI_SZ,\n \"hex\": REG_BINARY,\n \"dword\": REG_DWORD,\n}\n\n\nclass WindowsFileTime:\n \"\"\"Utility class to deal with Windows FILETIME structures.\n\n See: https://msdn.microsoft.com/en-us/library/ms724284(v=vs.85).aspx\n \"\"\"\n\n ticks_per_seconds = 10000000 # 1 tick every 100 nanoseconds\n epoch_delta = 11644473600 # 3600 * 24 * ((1970 - 1601) * 365 + 89)\n\n def __init__(self, timestamp=None):\n self.timestamp = timestamp\n\n def __repr__(self):\n return \"<{}>: {}\".format(self.__class__.__name__, self.timestamp)\n\n @classmethod\n def from_hex(cls, hexvalue):\n timestamp = int(hexvalue, 16)\n return WindowsFileTime(timestamp)\n\n def to_hex(self):\n return \"{:x}\".format(self.timestamp)\n\n @classmethod\n def from_unix_timestamp(cls, timestamp):\n timestamp = timestamp + cls.epoch_delta\n timestamp = int(timestamp * cls.ticks_per_seconds)\n return WindowsFileTime(timestamp)\n\n def to_unix_timestamp(self):\n if not self.timestamp:\n raise ValueError(\"No timestamp set\")\n unix_ts = self.timestamp / self.ticks_per_seconds\n unix_ts = unix_ts - self.epoch_delta\n return unix_ts\n\n def to_date_time(self):\n return datetime.fromtimestamp(self.to_unix_timestamp())\n\n\nclass WineRegistry:\n version_header = \"WINE REGISTRY Version \"\n relative_to_header = \";; All keys relative to \"\n\n def __init__(self, reg_filename=None):\n self.arch = WINE_DEFAULT_ARCH\n self.version = 2\n self.relative_to = \"\\\\\\\\User\\\\\\\\S-1-5-21-0-0-0-1000\"\n self.keys = OrderedDict()\n self.reg_filename = reg_filename\n if reg_filename:\n if not system.path_exists(reg_filename):\n logger.error(\"Unexisting registry %s\", reg_filename)\n self.parse_reg_file(reg_filename)\n\n @property\n def prefix_path(self):\n \"\"\"Return the Wine prefix path (where the .reg files are located)\"\"\"\n if self.reg_filename:\n return os.path.dirname(self.reg_filename)\n\n @staticmethod\n def get_raw_registry(reg_filename):\n \"\"\"Return an array of the unprocessed contents of a registry file\"\"\"\n if not system.path_exists(reg_filename):\n return []\n with open(reg_filename, \"r\") as reg_file:\n\n try:\n registry_content = reg_file.readlines()\n except Exception: # pylint: disable=broad-except\n logger.exception(\n \"Failed to registry read %s, please send attach this file in a bug report\",\n reg_filename\n )\n registry_content = []\n return registry_content\n\n def parse_reg_file(self, reg_filename):\n registry_lines = self.get_raw_registry(reg_filename)\n current_key = None\n add_next_to_value = False\n for line in registry_lines:\n line = line.rstrip(\"\\n\") # Remove trailing newlines\n\n if line.startswith(self.version_header):\n self.version = int(line[len(self.version_header):])\n continue\n\n if line.startswith(self.relative_to_header):\n self.relative_to = line[len(self.relative_to_header):]\n continue\n\n if line.startswith(\"#arch\"):\n self.arch = line.split(\"=\")[1]\n continue\n\n if line.startswith(\"[\"):\n current_key = WineRegistryKey(key_def=line)\n self.keys[current_key.name] 
= current_key\n continue\n\n if current_key:\n if add_next_to_value:\n current_key.add_to_last(line)\n else:\n current_key.parse(line)\n add_next_to_value = line.endswith(\"\\\\\")\n\n def render(self):\n content = \"{}{}\\n\".format(self.version_header, self.version)\n content += \"{}{}\\n\\n\".format(self.relative_to_header, self.relative_to)\n content += \"#arch={}\\n\".format(self.arch)\n for key in self.keys:\n content += \"\\n\"\n content += self.keys[key].render()\n return content\n\n def save(self, path=None):\n \"\"\"Write the registry to a file\"\"\"\n if not path:\n path = self.reg_filename\n if not path:\n raise OSError(\"No filename provided\")\n with open(path, \"w\") as registry_file:\n registry_file.write(self.render())\n\n def query(self, path, subkey):\n key = self.keys.get(path)\n if key:\n return key.get_subkey(subkey)\n\n def set_value(self, path, subkey, value):\n key = self.keys.get(path)\n if not key:\n key = WineRegistryKey(path=path)\n self.keys[key.name] = key\n key.set_subkey(subkey, value)\n\n def clear_key(self, path):\n \"\"\"Removes all subkeys from a key\"\"\"\n key = self.keys.get(path)\n if not key:\n return\n key.subkeys.clear()\n\n def clear_subkeys(self, path, keys):\n \"\"\"Remove some subkeys from a key\"\"\"\n key = self.keys.get(path)\n if not key:\n return\n for subkey in list(key.subkeys.keys()):\n if subkey not in keys:\n continue\n key.subkeys.pop(subkey)\n\n def get_unix_path(self, windows_path):\n windows_path = windows_path.replace(\"\\\\\\\\\", \"/\")\n if not self.prefix_path:\n return\n drives_path = os.path.join(self.prefix_path, \"dosdevices\")\n if not system.path_exists(drives_path):\n return\n letter, relpath = windows_path.split(\":\", 1)\n relpath = relpath.strip(\"/\")\n drive_link = os.path.join(drives_path, letter.lower() + \":\")\n try:\n drive_path = os.readlink(drive_link)\n except FileNotFoundError:\n logger.error(\"Unable to read link for %s\", drive_link)\n return\n\n if not os.path.isabs(drive_path):\n drive_path = os.path.join(drives_path, drive_path)\n return os.path.join(drive_path, relpath)\n\n\nclass WineRegistryKey:\n def __init__(self, key_def=None, path=None):\n\n self.subkeys = OrderedDict()\n self.metas = OrderedDict()\n\n if path:\n # Key is created by path, it's a new key\n timestamp = datetime.now().timestamp()\n self.name = path\n self.raw_name = \"[{}]\".format(path.replace(\"/\", \"\\\\\\\\\"))\n self.raw_timestamp = \" \".join(str(timestamp).split(\".\"))\n\n windows_timestamp = WindowsFileTime.from_unix_timestamp(timestamp)\n self.metas[\"time\"] = windows_timestamp.to_hex()\n else:\n # Existing key loaded from file\n self.raw_name, self.raw_timestamp = re.split(\n re.compile(r\"(?<=[^\\\\]\\]) \"), key_def, maxsplit=1\n )\n self.name = self.raw_name.replace(\"\\\\\\\\\", \"/\").strip(\"[]\")\n\n # Parse timestamp either as int or float\n ts_parts = self.raw_timestamp.strip().split()\n if len(ts_parts) == 1:\n self.timestamp = int(ts_parts[0])\n else:\n self.timestamp = float(\"{}.{}\".format(ts_parts[0], ts_parts[1]))\n\n def __str__(self):\n return \"{0} {1}\".format(self.raw_name, self.raw_timestamp)\n\n def parse(self, line):\n \"\"\"Parse a registry line, populating meta and subkeys\"\"\"\n if len(line) < 4:\n # Line is too short, nothing to parse\n return\n\n if line.startswith(\"#\"):\n self.add_meta(line)\n elif line.startswith('\"'):\n try:\n key, value = re.split(re.compile(r\"(?<![^\\\\]\\\\\\\")=\"), line, maxsplit=1)\n except ValueError as ex:\n logger.error(\"Unable to parse line %s\", 
line)\n logger.exception(ex)\n return\n key = key[1:-1]\n self.subkeys[key] = value\n elif line.startswith(\"@\"):\n key, value = line.split(\"=\", 1)\n self.subkeys[\"default\"] = value\n\n def add_to_last(self, line):\n last_subkey = list(self.subkeys.keys())[-1]\n self.subkeys[last_subkey] += \"\\n{}\".format(line)\n\n def render(self):\n \"\"\"Return the content of the key in the wine .reg format\"\"\"\n content = self.raw_name + \" \" + self.raw_timestamp + \"\\n\"\n for key, value in self.metas.items():\n if value is None:\n content += \"#{}\\n\".format(key)\n else:\n content += \"#{}={}\\n\".format(key, value)\n for key, value in self.subkeys.items():\n if key == \"default\":\n key = \"@\"\n else:\n key = '\"{}\"'.format(key)\n content += \"{}={}\\n\".format(key, value)\n return content\n\n def render_value(self, value):\n if isinstance(value, int):\n return \"dword:{:08x}\".format(value)\n if isinstance(value, str):\n return '\"{}\"'.format(value)\n raise NotImplementedError(\"TODO\")\n\n def add_meta(self, meta_line):\n if not meta_line.startswith(\"#\"):\n raise ValueError(\"Key metas should start with '#'\")\n meta_line = meta_line[1:]\n parts = meta_line.split(\"=\")\n if len(parts) == 2:\n key = parts[0]\n value = parts[1]\n elif len(parts) == 1:\n key = parts[0]\n value = None\n else:\n raise ValueError(\"Invalid meta line '{}'\".format(meta_line))\n self.metas[key] = value\n\n def get_meta(self, name):\n return self.metas.get(name)\n\n def set_subkey(self, name, value):\n self.subkeys[name] = self.render_value(value)\n\n def get_subkey(self, name):\n if name not in self.subkeys:\n return None\n value = self.subkeys[name]\n if value.startswith('\"') and value.endswith('\"'):\n return value[1:-1]\n if value.startswith(\"dword:\"):\n return int(value[6:], 16)\n raise ValueError(\"Handle %s\" % value)\n", "path": "lutris/util/wine/registry.py"}], "after_files": [{"content": "import os\nimport re\nfrom collections import OrderedDict\nfrom datetime import datetime\nfrom lutris.util.log import logger\nfrom lutris.util import system\nfrom lutris.util.wine.wine import WINE_DEFAULT_ARCH\n\n(\n REG_NONE,\n REG_SZ,\n REG_EXPAND_SZ,\n REG_BINARY,\n REG_DWORD,\n REG_DWORD_BIG_ENDIAN,\n REG_LINK,\n REG_MULTI_SZ,\n) = range(8)\n\nDATA_TYPES = {\n '\"': REG_SZ,\n 'str:\"': REG_SZ,\n 'str(2):\"': REG_EXPAND_SZ,\n 'str(7):\"': REG_MULTI_SZ,\n \"hex\": REG_BINARY,\n \"dword\": REG_DWORD,\n}\n\n\nclass WindowsFileTime:\n \"\"\"Utility class to deal with Windows FILETIME structures.\n\n See: https://msdn.microsoft.com/en-us/library/ms724284(v=vs.85).aspx\n \"\"\"\n\n ticks_per_seconds = 10000000 # 1 tick every 100 nanoseconds\n epoch_delta = 11644473600 # 3600 * 24 * ((1970 - 1601) * 365 + 89)\n\n def __init__(self, timestamp=None):\n self.timestamp = timestamp\n\n def __repr__(self):\n return \"<{}>: {}\".format(self.__class__.__name__, self.timestamp)\n\n @classmethod\n def from_hex(cls, hexvalue):\n timestamp = int(hexvalue, 16)\n return WindowsFileTime(timestamp)\n\n def to_hex(self):\n return \"{:x}\".format(self.timestamp)\n\n @classmethod\n def from_unix_timestamp(cls, timestamp):\n timestamp = timestamp + cls.epoch_delta\n timestamp = int(timestamp * cls.ticks_per_seconds)\n return WindowsFileTime(timestamp)\n\n def to_unix_timestamp(self):\n if not self.timestamp:\n raise ValueError(\"No timestamp set\")\n unix_ts = self.timestamp / self.ticks_per_seconds\n unix_ts = unix_ts - self.epoch_delta\n return unix_ts\n\n def to_date_time(self):\n return 
datetime.fromtimestamp(self.to_unix_timestamp())\n\n\nclass WineRegistry:\n version_header = \"WINE REGISTRY Version \"\n relative_to_header = \";; All keys relative to \"\n\n def __init__(self, reg_filename=None):\n self.arch = WINE_DEFAULT_ARCH\n self.version = 2\n self.relative_to = \"\\\\\\\\User\\\\\\\\S-1-5-21-0-0-0-1000\"\n self.keys = OrderedDict()\n self.reg_filename = reg_filename\n if reg_filename:\n if not system.path_exists(reg_filename):\n logger.error(\"Unexisting registry %s\", reg_filename)\n self.parse_reg_file(reg_filename)\n\n @property\n def prefix_path(self):\n \"\"\"Return the Wine prefix path (where the .reg files are located)\"\"\"\n if self.reg_filename:\n return os.path.dirname(self.reg_filename)\n\n @staticmethod\n def get_raw_registry(reg_filename):\n \"\"\"Return an array of the unprocessed contents of a registry file\"\"\"\n if not system.path_exists(reg_filename):\n return []\n with open(reg_filename, \"r\") as reg_file:\n\n try:\n registry_content = reg_file.readlines()\n except Exception: # pylint: disable=broad-except\n logger.exception(\n \"Failed to registry read %s, please send attach this file in a bug report\",\n reg_filename\n )\n registry_content = []\n return registry_content\n\n def parse_reg_file(self, reg_filename):\n registry_lines = self.get_raw_registry(reg_filename)\n current_key = None\n add_next_to_value = False\n additional_values = []\n for line in registry_lines:\n line = line.rstrip(\"\\n\")\n\n if line.startswith(\"[\"):\n current_key = WineRegistryKey(key_def=line)\n self.keys[current_key.name] = current_key\n elif current_key:\n if add_next_to_value:\n additional_values.append(line)\n elif not add_next_to_value:\n if additional_values:\n additional_values = '\\n'.join(additional_values)\n current_key.add_to_last(additional_values)\n additional_values = []\n current_key.parse(line)\n add_next_to_value = line.endswith(\"\\\\\")\n elif line.startswith(self.version_header):\n self.version = int(line[len(self.version_header):])\n elif line.startswith(self.relative_to_header):\n self.relative_to = line[len(self.relative_to_header):]\n elif line.startswith(\"#arch\"):\n self.arch = line.split(\"=\")[1]\n\n def render(self):\n content = \"{}{}\\n\".format(self.version_header, self.version)\n content += \"{}{}\\n\\n\".format(self.relative_to_header, self.relative_to)\n content += \"#arch={}\\n\".format(self.arch)\n for key in self.keys:\n content += \"\\n\"\n content += self.keys[key].render()\n return content\n\n def save(self, path=None):\n \"\"\"Write the registry to a file\"\"\"\n if not path:\n path = self.reg_filename\n if not path:\n raise OSError(\"No filename provided\")\n with open(path, \"w\") as registry_file:\n registry_file.write(self.render())\n\n def query(self, path, subkey):\n key = self.keys.get(path)\n if key:\n return key.get_subkey(subkey)\n\n def set_value(self, path, subkey, value):\n key = self.keys.get(path)\n if not key:\n key = WineRegistryKey(path=path)\n self.keys[key.name] = key\n key.set_subkey(subkey, value)\n\n def clear_key(self, path):\n \"\"\"Removes all subkeys from a key\"\"\"\n key = self.keys.get(path)\n if not key:\n return\n key.subkeys.clear()\n\n def clear_subkeys(self, path, keys):\n \"\"\"Remove some subkeys from a key\"\"\"\n key = self.keys.get(path)\n if not key:\n return\n for subkey in list(key.subkeys.keys()):\n if subkey not in keys:\n continue\n key.subkeys.pop(subkey)\n\n def get_unix_path(self, windows_path):\n windows_path = windows_path.replace(\"\\\\\\\\\", \"/\")\n if not 
self.prefix_path:\n return\n drives_path = os.path.join(self.prefix_path, \"dosdevices\")\n if not system.path_exists(drives_path):\n return\n letter, relpath = windows_path.split(\":\", 1)\n relpath = relpath.strip(\"/\")\n drive_link = os.path.join(drives_path, letter.lower() + \":\")\n try:\n drive_path = os.readlink(drive_link)\n except FileNotFoundError:\n logger.error(\"Unable to read link for %s\", drive_link)\n return\n\n if not os.path.isabs(drive_path):\n drive_path = os.path.join(drives_path, drive_path)\n return os.path.join(drive_path, relpath)\n\n\nclass WineRegistryKey:\n def __init__(self, key_def=None, path=None):\n\n self.subkeys = OrderedDict()\n self.metas = OrderedDict()\n\n if path:\n # Key is created by path, it's a new key\n timestamp = datetime.now().timestamp()\n self.name = path\n self.raw_name = \"[{}]\".format(path.replace(\"/\", \"\\\\\\\\\"))\n self.raw_timestamp = \" \".join(str(timestamp).split(\".\"))\n\n windows_timestamp = WindowsFileTime.from_unix_timestamp(timestamp)\n self.metas[\"time\"] = windows_timestamp.to_hex()\n else:\n # Existing key loaded from file\n self.raw_name, self.raw_timestamp = re.split(\n re.compile(r\"(?<=[^\\\\]\\]) \"), key_def, maxsplit=1\n )\n self.name = self.raw_name.replace(\"\\\\\\\\\", \"/\").strip(\"[]\")\n\n # Parse timestamp either as int or float\n ts_parts = self.raw_timestamp.strip().split()\n if len(ts_parts) == 1:\n self.timestamp = int(ts_parts[0])\n else:\n self.timestamp = float(\"{}.{}\".format(ts_parts[0], ts_parts[1]))\n\n def __str__(self):\n return \"{0} {1}\".format(self.raw_name, self.raw_timestamp)\n\n def parse(self, line):\n \"\"\"Parse a registry line, populating meta and subkeys\"\"\"\n if len(line) < 4:\n # Line is too short, nothing to parse\n return\n\n if line.startswith(\"#\"):\n self.add_meta(line)\n elif line.startswith('\"'):\n try:\n key, value = re.split(re.compile(r\"(?<![^\\\\]\\\\\\\")=\"), line, maxsplit=1)\n except ValueError as ex:\n logger.error(\"Unable to parse line %s\", line)\n logger.exception(ex)\n return\n key = key[1:-1]\n self.subkeys[key] = value\n elif line.startswith(\"@\"):\n key, value = line.split(\"=\", 1)\n self.subkeys[\"default\"] = value\n\n def add_to_last(self, line):\n last_subkey = next(reversed(self.subkeys))\n self.subkeys[last_subkey] += \"\\n{}\".format(line)\n\n def render(self):\n \"\"\"Return the content of the key in the wine .reg format\"\"\"\n content = self.raw_name + \" \" + self.raw_timestamp + \"\\n\"\n for key, value in self.metas.items():\n if value is None:\n content += \"#{}\\n\".format(key)\n else:\n content += \"#{}={}\\n\".format(key, value)\n for key, value in self.subkeys.items():\n if key == \"default\":\n key = \"@\"\n else:\n key = '\"{}\"'.format(key)\n content += \"{}={}\\n\".format(key, value)\n return content\n\n def render_value(self, value):\n if isinstance(value, int):\n return \"dword:{:08x}\".format(value)\n if isinstance(value, str):\n return '\"{}\"'.format(value)\n raise NotImplementedError(\"TODO\")\n\n def add_meta(self, meta_line):\n if not meta_line.startswith(\"#\"):\n raise ValueError(\"Key metas should start with '#'\")\n meta_line = meta_line[1:]\n parts = meta_line.split(\"=\")\n if len(parts) == 2:\n key = parts[0]\n value = parts[1]\n elif len(parts) == 1:\n key = parts[0]\n value = None\n else:\n raise ValueError(\"Invalid meta line '{}'\".format(meta_line))\n self.metas[key] = value\n\n def get_meta(self, name):\n return self.metas.get(name)\n\n def set_subkey(self, name, value):\n self.subkeys[name] = 
self.render_value(value)\n\n def get_subkey(self, name):\n if name not in self.subkeys:\n return None\n value = self.subkeys[name]\n if value.startswith('\"') and value.endswith('\"'):\n return value[1:-1]\n if value.startswith(\"dword:\"):\n return int(value[6:], 16)\n raise ValueError(\"Handle %s\" % value)\n", "path": "lutris/util/wine/registry.py"}]}
| 3,617 | 509 |
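The registry patch above is essentially a complexity fix: appending every continuation line to the stored value with `+=`, and rebuilding the full key list just to find the last entry, both degrade badly once a single registry value spans a multi-megabyte payload. A rough, self-contained sketch of the two strategies — the helper names and the synthetic input are made up for illustration; only the idiom mirrors the patch:

```python
from collections import OrderedDict


def slow_join(lines):
    """Old behaviour: append each continuation line with += (quadratic copying)."""
    subkeys = OrderedDict(huge='"start"')
    for line in lines:
        last = list(subkeys.keys())[-1]        # rebuilds the key list on every line
        subkeys[last] += "\n{}".format(line)   # re-copies the ever-growing value each time
    return subkeys


def fast_join(lines):
    """Patched behaviour: collect continuation lines, then join and append once."""
    subkeys = OrderedDict(huge='"start"')
    pending = list(lines)                      # plays the role of additional_values
    last = next(reversed(subkeys))             # last key without building a list
    subkeys[last] += "\n" + "\n".join(pending)
    return subkeys


if __name__ == "__main__":
    lines = ["0123456789abcdef"] * 10_000      # crude stand-in for a multi-megabyte value
    assert slow_join(lines) == fast_join(lines)
```

Both variants produce the same final value; only the amount of intermediate string copying differs.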
gh_patches_debug_4937
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-621
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in b3 propagation
When configuring b3 propagation, if the parent of a Span is an instance of a SpanContext, the following error occurs:
`'SpanContext' object has no attribute 'context'`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import typing
16
17 import opentelemetry.trace as trace
18 from opentelemetry.context import Context
19 from opentelemetry.trace.propagation import (
20 get_span_from_context,
21 set_span_in_context,
22 )
23 from opentelemetry.trace.propagation.httptextformat import (
24 Getter,
25 HTTPTextFormat,
26 HTTPTextFormatT,
27 Setter,
28 )
29
30
31 class B3Format(HTTPTextFormat):
32 """Propagator for the B3 HTTP header format.
33
34 See: https://github.com/openzipkin/b3-propagation
35 """
36
37 SINGLE_HEADER_KEY = "b3"
38 TRACE_ID_KEY = "x-b3-traceid"
39 SPAN_ID_KEY = "x-b3-spanid"
40 PARENT_SPAN_ID_KEY = "x-b3-parentspanid"
41 SAMPLED_KEY = "x-b3-sampled"
42 FLAGS_KEY = "x-b3-flags"
43 _SAMPLE_PROPAGATE_VALUES = set(["1", "True", "true", "d"])
44
45 def extract(
46 self,
47 get_from_carrier: Getter[HTTPTextFormatT],
48 carrier: HTTPTextFormatT,
49 context: typing.Optional[Context] = None,
50 ) -> Context:
51 trace_id = format_trace_id(trace.INVALID_TRACE_ID)
52 span_id = format_span_id(trace.INVALID_SPAN_ID)
53 sampled = "0"
54 flags = None
55
56 single_header = _extract_first_element(
57 get_from_carrier(carrier, self.SINGLE_HEADER_KEY)
58 )
59 if single_header:
60 # The b3 spec calls for the sampling state to be
61 # "deferred", which is unspecified. This concept does not
62 # translate to SpanContext, so we set it as recorded.
63 sampled = "1"
64 fields = single_header.split("-", 4)
65
66 if len(fields) == 1:
67 sampled = fields[0]
68 elif len(fields) == 2:
69 trace_id, span_id = fields
70 elif len(fields) == 3:
71 trace_id, span_id, sampled = fields
72 elif len(fields) == 4:
73 trace_id, span_id, sampled, _ = fields
74 else:
75 return set_span_in_context(trace.INVALID_SPAN)
76 else:
77 trace_id = (
78 _extract_first_element(
79 get_from_carrier(carrier, self.TRACE_ID_KEY)
80 )
81 or trace_id
82 )
83 span_id = (
84 _extract_first_element(
85 get_from_carrier(carrier, self.SPAN_ID_KEY)
86 )
87 or span_id
88 )
89 sampled = (
90 _extract_first_element(
91 get_from_carrier(carrier, self.SAMPLED_KEY)
92 )
93 or sampled
94 )
95 flags = (
96 _extract_first_element(
97 get_from_carrier(carrier, self.FLAGS_KEY)
98 )
99 or flags
100 )
101
102 options = 0
103 # The b3 spec provides no defined behavior for both sample and
104 # flag values set. Since the setting of at least one implies
105 # the desire for some form of sampling, propagate if either
106 # header is set to allow.
107 if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == "1":
108 options |= trace.TraceFlags.SAMPLED
109 return set_span_in_context(
110 trace.DefaultSpan(
111 trace.SpanContext(
112 # trace an span ids are encoded in hex, so must be converted
113 trace_id=int(trace_id, 16),
114 span_id=int(span_id, 16),
115 is_remote=True,
116 trace_flags=trace.TraceFlags(options),
117 trace_state=trace.TraceState(),
118 )
119 )
120 )
121
122 def inject(
123 self,
124 set_in_carrier: Setter[HTTPTextFormatT],
125 carrier: HTTPTextFormatT,
126 context: typing.Optional[Context] = None,
127 ) -> None:
128 span = get_span_from_context(context=context)
129 sampled = (trace.TraceFlags.SAMPLED & span.context.trace_flags) != 0
130 set_in_carrier(
131 carrier, self.TRACE_ID_KEY, format_trace_id(span.context.trace_id),
132 )
133 set_in_carrier(
134 carrier, self.SPAN_ID_KEY, format_span_id(span.context.span_id)
135 )
136 if span.parent is not None:
137 set_in_carrier(
138 carrier,
139 self.PARENT_SPAN_ID_KEY,
140 format_span_id(span.parent.context.span_id),
141 )
142 set_in_carrier(carrier, self.SAMPLED_KEY, "1" if sampled else "0")
143
144
145 def format_trace_id(trace_id: int) -> str:
146 """Format the trace id according to b3 specification."""
147 return format(trace_id, "032x")
148
149
150 def format_span_id(span_id: int) -> str:
151 """Format the span id according to b3 specification."""
152 return format(span_id, "016x")
153
154
155 def _extract_first_element(
156 items: typing.Iterable[HTTPTextFormatT],
157 ) -> typing.Optional[HTTPTextFormatT]:
158 if items is None:
159 return None
160 return next(iter(items), None)
161
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py
--- a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py
+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py
@@ -137,7 +137,7 @@
set_in_carrier(
carrier,
self.PARENT_SPAN_ID_KEY,
- format_span_id(span.parent.context.span_id),
+ format_span_id(span.parent.span_id),
)
set_in_carrier(carrier, self.SAMPLED_KEY, "1" if sampled else "0")
|
{"golden_diff": "diff --git a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py\n--- a/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py\n+++ b/opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py\n@@ -137,7 +137,7 @@\n set_in_carrier(\n carrier,\n self.PARENT_SPAN_ID_KEY,\n- format_span_id(span.parent.context.span_id),\n+ format_span_id(span.parent.span_id),\n )\n set_in_carrier(carrier, self.SAMPLED_KEY, \"1\" if sampled else \"0\")\n", "issue": "Bug in b3 propagation\nWhen configuring b3 propagation, if the parent of a Span is an instance of a SpanContext, the following error occurs:\r\n\r\n`'SpanContext' object has no attribute 'context'`\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport typing\n\nimport opentelemetry.trace as trace\nfrom opentelemetry.context import Context\nfrom opentelemetry.trace.propagation import (\n get_span_from_context,\n set_span_in_context,\n)\nfrom opentelemetry.trace.propagation.httptextformat import (\n Getter,\n HTTPTextFormat,\n HTTPTextFormatT,\n Setter,\n)\n\n\nclass B3Format(HTTPTextFormat):\n \"\"\"Propagator for the B3 HTTP header format.\n\n See: https://github.com/openzipkin/b3-propagation\n \"\"\"\n\n SINGLE_HEADER_KEY = \"b3\"\n TRACE_ID_KEY = \"x-b3-traceid\"\n SPAN_ID_KEY = \"x-b3-spanid\"\n PARENT_SPAN_ID_KEY = \"x-b3-parentspanid\"\n SAMPLED_KEY = \"x-b3-sampled\"\n FLAGS_KEY = \"x-b3-flags\"\n _SAMPLE_PROPAGATE_VALUES = set([\"1\", \"True\", \"true\", \"d\"])\n\n def extract(\n self,\n get_from_carrier: Getter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> Context:\n trace_id = format_trace_id(trace.INVALID_TRACE_ID)\n span_id = format_span_id(trace.INVALID_SPAN_ID)\n sampled = \"0\"\n flags = None\n\n single_header = _extract_first_element(\n get_from_carrier(carrier, self.SINGLE_HEADER_KEY)\n )\n if single_header:\n # The b3 spec calls for the sampling state to be\n # \"deferred\", which is unspecified. 
This concept does not\n # translate to SpanContext, so we set it as recorded.\n sampled = \"1\"\n fields = single_header.split(\"-\", 4)\n\n if len(fields) == 1:\n sampled = fields[0]\n elif len(fields) == 2:\n trace_id, span_id = fields\n elif len(fields) == 3:\n trace_id, span_id, sampled = fields\n elif len(fields) == 4:\n trace_id, span_id, sampled, _ = fields\n else:\n return set_span_in_context(trace.INVALID_SPAN)\n else:\n trace_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.TRACE_ID_KEY)\n )\n or trace_id\n )\n span_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.SPAN_ID_KEY)\n )\n or span_id\n )\n sampled = (\n _extract_first_element(\n get_from_carrier(carrier, self.SAMPLED_KEY)\n )\n or sampled\n )\n flags = (\n _extract_first_element(\n get_from_carrier(carrier, self.FLAGS_KEY)\n )\n or flags\n )\n\n options = 0\n # The b3 spec provides no defined behavior for both sample and\n # flag values set. Since the setting of at least one implies\n # the desire for some form of sampling, propagate if either\n # header is set to allow.\n if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == \"1\":\n options |= trace.TraceFlags.SAMPLED\n return set_span_in_context(\n trace.DefaultSpan(\n trace.SpanContext(\n # trace an span ids are encoded in hex, so must be converted\n trace_id=int(trace_id, 16),\n span_id=int(span_id, 16),\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n )\n )\n )\n\n def inject(\n self,\n set_in_carrier: Setter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> None:\n span = get_span_from_context(context=context)\n sampled = (trace.TraceFlags.SAMPLED & span.context.trace_flags) != 0\n set_in_carrier(\n carrier, self.TRACE_ID_KEY, format_trace_id(span.context.trace_id),\n )\n set_in_carrier(\n carrier, self.SPAN_ID_KEY, format_span_id(span.context.span_id)\n )\n if span.parent is not None:\n set_in_carrier(\n carrier,\n self.PARENT_SPAN_ID_KEY,\n format_span_id(span.parent.context.span_id),\n )\n set_in_carrier(carrier, self.SAMPLED_KEY, \"1\" if sampled else \"0\")\n\n\ndef format_trace_id(trace_id: int) -> str:\n \"\"\"Format the trace id according to b3 specification.\"\"\"\n return format(trace_id, \"032x\")\n\n\ndef format_span_id(span_id: int) -> str:\n \"\"\"Format the span id according to b3 specification.\"\"\"\n return format(span_id, \"016x\")\n\n\ndef _extract_first_element(\n items: typing.Iterable[HTTPTextFormatT],\n) -> typing.Optional[HTTPTextFormatT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport typing\n\nimport opentelemetry.trace as trace\nfrom opentelemetry.context import Context\nfrom opentelemetry.trace.propagation import (\n get_span_from_context,\n set_span_in_context,\n)\nfrom 
opentelemetry.trace.propagation.httptextformat import (\n Getter,\n HTTPTextFormat,\n HTTPTextFormatT,\n Setter,\n)\n\n\nclass B3Format(HTTPTextFormat):\n \"\"\"Propagator for the B3 HTTP header format.\n\n See: https://github.com/openzipkin/b3-propagation\n \"\"\"\n\n SINGLE_HEADER_KEY = \"b3\"\n TRACE_ID_KEY = \"x-b3-traceid\"\n SPAN_ID_KEY = \"x-b3-spanid\"\n PARENT_SPAN_ID_KEY = \"x-b3-parentspanid\"\n SAMPLED_KEY = \"x-b3-sampled\"\n FLAGS_KEY = \"x-b3-flags\"\n _SAMPLE_PROPAGATE_VALUES = set([\"1\", \"True\", \"true\", \"d\"])\n\n def extract(\n self,\n get_from_carrier: Getter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> Context:\n trace_id = format_trace_id(trace.INVALID_TRACE_ID)\n span_id = format_span_id(trace.INVALID_SPAN_ID)\n sampled = \"0\"\n flags = None\n\n single_header = _extract_first_element(\n get_from_carrier(carrier, self.SINGLE_HEADER_KEY)\n )\n if single_header:\n # The b3 spec calls for the sampling state to be\n # \"deferred\", which is unspecified. This concept does not\n # translate to SpanContext, so we set it as recorded.\n sampled = \"1\"\n fields = single_header.split(\"-\", 4)\n\n if len(fields) == 1:\n sampled = fields[0]\n elif len(fields) == 2:\n trace_id, span_id = fields\n elif len(fields) == 3:\n trace_id, span_id, sampled = fields\n elif len(fields) == 4:\n trace_id, span_id, sampled, _ = fields\n else:\n return set_span_in_context(trace.INVALID_SPAN)\n else:\n trace_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.TRACE_ID_KEY)\n )\n or trace_id\n )\n span_id = (\n _extract_first_element(\n get_from_carrier(carrier, self.SPAN_ID_KEY)\n )\n or span_id\n )\n sampled = (\n _extract_first_element(\n get_from_carrier(carrier, self.SAMPLED_KEY)\n )\n or sampled\n )\n flags = (\n _extract_first_element(\n get_from_carrier(carrier, self.FLAGS_KEY)\n )\n or flags\n )\n\n options = 0\n # The b3 spec provides no defined behavior for both sample and\n # flag values set. 
Since the setting of at least one implies\n # the desire for some form of sampling, propagate if either\n # header is set to allow.\n if sampled in self._SAMPLE_PROPAGATE_VALUES or flags == \"1\":\n options |= trace.TraceFlags.SAMPLED\n return set_span_in_context(\n trace.DefaultSpan(\n trace.SpanContext(\n # trace an span ids are encoded in hex, so must be converted\n trace_id=int(trace_id, 16),\n span_id=int(span_id, 16),\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n )\n )\n )\n\n def inject(\n self,\n set_in_carrier: Setter[HTTPTextFormatT],\n carrier: HTTPTextFormatT,\n context: typing.Optional[Context] = None,\n ) -> None:\n span = get_span_from_context(context=context)\n sampled = (trace.TraceFlags.SAMPLED & span.context.trace_flags) != 0\n set_in_carrier(\n carrier, self.TRACE_ID_KEY, format_trace_id(span.context.trace_id),\n )\n set_in_carrier(\n carrier, self.SPAN_ID_KEY, format_span_id(span.context.span_id)\n )\n if span.parent is not None:\n set_in_carrier(\n carrier,\n self.PARENT_SPAN_ID_KEY,\n format_span_id(span.parent.span_id),\n )\n set_in_carrier(carrier, self.SAMPLED_KEY, \"1\" if sampled else \"0\")\n\n\ndef format_trace_id(trace_id: int) -> str:\n \"\"\"Format the trace id according to b3 specification.\"\"\"\n return format(trace_id, \"032x\")\n\n\ndef format_span_id(span_id: int) -> str:\n \"\"\"Format the span id according to b3 specification.\"\"\"\n return format(span_id, \"016x\")\n\n\ndef _extract_first_element(\n items: typing.Iterable[HTTPTextFormatT],\n) -> typing.Optional[HTTPTextFormatT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-sdk/src/opentelemetry/sdk/trace/propagation/b3_format.py"}]}
| 1,908 | 162 |
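The one-line fix in the patch above works because `span.parent` is already a `SpanContext` rather than a wrapping span object, so it exposes `span_id` directly and has no `.context` attribute — exactly the `AttributeError` quoted in the issue. A minimal sketch of the failing and the fixed access path, reusing the `SpanContext` constructor as it appears in the listing (the id values are made up, and this assumes the same `opentelemetry-api` generation the listing targets):

```python
import opentelemetry.trace as trace

parent = trace.SpanContext(
    trace_id=0x000000000000000000000000DEADBEEF,
    span_id=0x00000000DEADBEF0,
    is_remote=True,
    trace_flags=trace.TraceFlags(trace.TraceFlags.SAMPLED),
    trace_state=trace.TraceState(),
)

# Pre-patch access path: SpanContext has no `.context`, so this raises.
try:
    parent.context.span_id
except AttributeError as exc:
    print(exc)  # 'SpanContext' object has no attribute 'context'

# Post-patch access path: read the id straight off the context.
print(format(parent.span_id, "016x"))  # same formatting as format_span_id()
```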
gh_patches_debug_17371
|
rasdani/github-patches
|
git_diff
|
translate__pootle-4635
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Put `Review suggestions` action button before others
Let's change order of action buttons and put `Review suggestions` before `Needs translation` to encourage going through suggestion backlog.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/core/browser.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 from django.utils.translation import ugettext_lazy as _
11
12
13 HEADING_CHOICES = [
14 {
15 'id': 'name',
16 'class': 'stats',
17 'display_name': _("Name"),
18 },
19 {
20 'id': 'priority',
21 'class': 'stats-number sorttable_numeric',
22 'display_name': _("Priority"),
23 },
24 {
25 'id': 'project',
26 'class': 'stats',
27 'display_name': _("Project"),
28 },
29 {
30 'id': 'language',
31 'class': 'stats',
32 'display_name': _("Language"),
33 },
34 {
35 'id': 'progress',
36 'class': 'stats',
37 # Translators: noun. The graphical representation of translation status
38 'display_name': _("Progress"),
39 },
40 {
41 'id': 'total',
42 'class': 'stats-number sorttable_numeric when-loaded',
43 # Translators: Heading representing the total number of words of a file
44 # or directory
45 'display_name': _("Total"),
46 },
47 {
48 'id': 'last-updated',
49 'class': 'stats sorttable_numeric when-loaded',
50 'display_name': _("Last updated"),
51 },
52 {
53 'id': 'need-translation',
54 'class': 'stats-number sorttable_numeric when-loaded',
55 'display_name': _("Need Translation"),
56 },
57 {
58 'id': 'suggestions',
59 'class': 'stats-number sorttable_numeric when-loaded',
60 # Translators: The number of suggestions pending review
61 'display_name': _("Suggestions"),
62 },
63 {
64 'id': 'critical',
65 'class': 'stats-number sorttable_numeric when-loaded',
66 'display_name': _("Critical"),
67 },
68 {
69 'id': 'activity',
70 'class': 'stats sorttable_numeric when-loaded',
71 'display_name': _("Last Activity"),
72 },
73 ]
74
75
76 def get_table_headings(choices):
77 """Filters the list of available table headings to the given `choices`."""
78 return filter(lambda x: x['id'] in choices, HEADING_CHOICES)
79
80
81 def make_generic_item(path_obj, **kwargs):
82 """Template variables for each row in the table."""
83 return {
84 'href': path_obj.get_absolute_url(),
85 'href_all': path_obj.get_translate_url(),
86 'href_todo': path_obj.get_translate_url(state='incomplete', **kwargs),
87 'href_sugg': path_obj.get_translate_url(state='suggestions', **kwargs),
88 'href_critical': path_obj.get_critical_url(**kwargs),
89 'title': path_obj.name,
90 'code': path_obj.code,
91 'is_disabled': getattr(path_obj, 'disabled', False),
92 }
93
94
95 def make_directory_item(directory, **filters):
96 item = make_generic_item(directory, **filters)
97 item.update({
98 'icon': 'folder',
99 })
100 return item
101
102
103 def make_store_item(store):
104 item = make_generic_item(store)
105 item.update({
106 'icon': 'file',
107 })
108 return item
109
110
111 def get_parent(path_obj):
112 """Retrieves a representation of the parent object.
113
114 :param path_obj: either a `Directory` or Store` instance.
115 """
116 parent_dir = path_obj.parent
117
118 if parent_dir.is_project():
119 return None
120
121 if parent_dir.is_language():
122 label = _('Back to language')
123 else:
124 label = _('Back to parent folder')
125
126 return {
127 'title': label,
128 'href': parent_dir.get_absolute_url()
129 }
130
131
132 def make_project_item(translation_project):
133 item = make_generic_item(translation_project)
134 item.update({
135 'icon': 'project',
136 'title': translation_project.project.name,
137 })
138 return item
139
140
141 def make_language_item(translation_project):
142 item = make_generic_item(translation_project)
143 item.update({
144 'icon': 'language',
145 'title': translation_project.language.name,
146 })
147 return item
148
149
150 def make_xlanguage_item(resource_obj):
151 translation_project = resource_obj.translation_project
152 item = make_generic_item(resource_obj)
153 item.update({
154 'icon': 'language',
155 'code': translation_project.language.code,
156 'title': translation_project.language.name,
157 })
158 return item
159
160
161 def make_project_list_item(project):
162 item = make_generic_item(project)
163 item.update({
164 'icon': 'project',
165 'title': project.fullname,
166 })
167 return item
168
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pootle/core/browser.py b/pootle/core/browser.py
--- a/pootle/core/browser.py
+++ b/pootle/core/browser.py
@@ -50,9 +50,9 @@
'display_name': _("Last updated"),
},
{
- 'id': 'need-translation',
+ 'id': 'critical',
'class': 'stats-number sorttable_numeric when-loaded',
- 'display_name': _("Need Translation"),
+ 'display_name': _("Critical"),
},
{
'id': 'suggestions',
@@ -61,9 +61,9 @@
'display_name': _("Suggestions"),
},
{
- 'id': 'critical',
+ 'id': 'need-translation',
'class': 'stats-number sorttable_numeric when-loaded',
- 'display_name': _("Critical"),
+ 'display_name': _("Need Translation"),
},
{
'id': 'activity',
|
{"golden_diff": "diff --git a/pootle/core/browser.py b/pootle/core/browser.py\n--- a/pootle/core/browser.py\n+++ b/pootle/core/browser.py\n@@ -50,9 +50,9 @@\n 'display_name': _(\"Last updated\"),\n },\n {\n- 'id': 'need-translation',\n+ 'id': 'critical',\n 'class': 'stats-number sorttable_numeric when-loaded',\n- 'display_name': _(\"Need Translation\"),\n+ 'display_name': _(\"Critical\"),\n },\n {\n 'id': 'suggestions',\n@@ -61,9 +61,9 @@\n 'display_name': _(\"Suggestions\"),\n },\n {\n- 'id': 'critical',\n+ 'id': 'need-translation',\n 'class': 'stats-number sorttable_numeric when-loaded',\n- 'display_name': _(\"Critical\"),\n+ 'display_name': _(\"Need Translation\"),\n },\n {\n 'id': 'activity',\n", "issue": "Put `Review suggestions` action button before others\nLet's change order of action buttons and put `Review suggestions` before `Needs translation` to encourage going through suggestion backlog.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.utils.translation import ugettext_lazy as _\n\n\nHEADING_CHOICES = [\n {\n 'id': 'name',\n 'class': 'stats',\n 'display_name': _(\"Name\"),\n },\n {\n 'id': 'priority',\n 'class': 'stats-number sorttable_numeric',\n 'display_name': _(\"Priority\"),\n },\n {\n 'id': 'project',\n 'class': 'stats',\n 'display_name': _(\"Project\"),\n },\n {\n 'id': 'language',\n 'class': 'stats',\n 'display_name': _(\"Language\"),\n },\n {\n 'id': 'progress',\n 'class': 'stats',\n # Translators: noun. The graphical representation of translation status\n 'display_name': _(\"Progress\"),\n },\n {\n 'id': 'total',\n 'class': 'stats-number sorttable_numeric when-loaded',\n # Translators: Heading representing the total number of words of a file\n # or directory\n 'display_name': _(\"Total\"),\n },\n {\n 'id': 'last-updated',\n 'class': 'stats sorttable_numeric when-loaded',\n 'display_name': _(\"Last updated\"),\n },\n {\n 'id': 'need-translation',\n 'class': 'stats-number sorttable_numeric when-loaded',\n 'display_name': _(\"Need Translation\"),\n },\n {\n 'id': 'suggestions',\n 'class': 'stats-number sorttable_numeric when-loaded',\n # Translators: The number of suggestions pending review\n 'display_name': _(\"Suggestions\"),\n },\n {\n 'id': 'critical',\n 'class': 'stats-number sorttable_numeric when-loaded',\n 'display_name': _(\"Critical\"),\n },\n {\n 'id': 'activity',\n 'class': 'stats sorttable_numeric when-loaded',\n 'display_name': _(\"Last Activity\"),\n },\n]\n\n\ndef get_table_headings(choices):\n \"\"\"Filters the list of available table headings to the given `choices`.\"\"\"\n return filter(lambda x: x['id'] in choices, HEADING_CHOICES)\n\n\ndef make_generic_item(path_obj, **kwargs):\n \"\"\"Template variables for each row in the table.\"\"\"\n return {\n 'href': path_obj.get_absolute_url(),\n 'href_all': path_obj.get_translate_url(),\n 'href_todo': path_obj.get_translate_url(state='incomplete', **kwargs),\n 'href_sugg': path_obj.get_translate_url(state='suggestions', **kwargs),\n 'href_critical': path_obj.get_critical_url(**kwargs),\n 'title': path_obj.name,\n 'code': path_obj.code,\n 'is_disabled': getattr(path_obj, 'disabled', False),\n }\n\n\ndef make_directory_item(directory, **filters):\n item = make_generic_item(directory, **filters)\n 
item.update({\n 'icon': 'folder',\n })\n return item\n\n\ndef make_store_item(store):\n item = make_generic_item(store)\n item.update({\n 'icon': 'file',\n })\n return item\n\n\ndef get_parent(path_obj):\n \"\"\"Retrieves a representation of the parent object.\n\n :param path_obj: either a `Directory` or Store` instance.\n \"\"\"\n parent_dir = path_obj.parent\n\n if parent_dir.is_project():\n return None\n\n if parent_dir.is_language():\n label = _('Back to language')\n else:\n label = _('Back to parent folder')\n\n return {\n 'title': label,\n 'href': parent_dir.get_absolute_url()\n }\n\n\ndef make_project_item(translation_project):\n item = make_generic_item(translation_project)\n item.update({\n 'icon': 'project',\n 'title': translation_project.project.name,\n })\n return item\n\n\ndef make_language_item(translation_project):\n item = make_generic_item(translation_project)\n item.update({\n 'icon': 'language',\n 'title': translation_project.language.name,\n })\n return item\n\n\ndef make_xlanguage_item(resource_obj):\n translation_project = resource_obj.translation_project\n item = make_generic_item(resource_obj)\n item.update({\n 'icon': 'language',\n 'code': translation_project.language.code,\n 'title': translation_project.language.name,\n })\n return item\n\n\ndef make_project_list_item(project):\n item = make_generic_item(project)\n item.update({\n 'icon': 'project',\n 'title': project.fullname,\n })\n return item\n", "path": "pootle/core/browser.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.utils.translation import ugettext_lazy as _\n\n\nHEADING_CHOICES = [\n {\n 'id': 'name',\n 'class': 'stats',\n 'display_name': _(\"Name\"),\n },\n {\n 'id': 'priority',\n 'class': 'stats-number sorttable_numeric',\n 'display_name': _(\"Priority\"),\n },\n {\n 'id': 'project',\n 'class': 'stats',\n 'display_name': _(\"Project\"),\n },\n {\n 'id': 'language',\n 'class': 'stats',\n 'display_name': _(\"Language\"),\n },\n {\n 'id': 'progress',\n 'class': 'stats',\n # Translators: noun. 
The graphical representation of translation status\n 'display_name': _(\"Progress\"),\n },\n {\n 'id': 'total',\n 'class': 'stats-number sorttable_numeric when-loaded',\n # Translators: Heading representing the total number of words of a file\n # or directory\n 'display_name': _(\"Total\"),\n },\n {\n 'id': 'last-updated',\n 'class': 'stats sorttable_numeric when-loaded',\n 'display_name': _(\"Last updated\"),\n },\n {\n 'id': 'critical',\n 'class': 'stats-number sorttable_numeric when-loaded',\n 'display_name': _(\"Critical\"),\n },\n {\n 'id': 'suggestions',\n 'class': 'stats-number sorttable_numeric when-loaded',\n # Translators: The number of suggestions pending review\n 'display_name': _(\"Suggestions\"),\n },\n {\n 'id': 'need-translation',\n 'class': 'stats-number sorttable_numeric when-loaded',\n 'display_name': _(\"Need Translation\"),\n },\n {\n 'id': 'activity',\n 'class': 'stats sorttable_numeric when-loaded',\n 'display_name': _(\"Last Activity\"),\n },\n]\n\n\ndef get_table_headings(choices):\n \"\"\"Filters the list of available table headings to the given `choices`.\"\"\"\n return filter(lambda x: x['id'] in choices, HEADING_CHOICES)\n\n\ndef make_generic_item(path_obj, **kwargs):\n \"\"\"Template variables for each row in the table.\"\"\"\n return {\n 'href': path_obj.get_absolute_url(),\n 'href_all': path_obj.get_translate_url(),\n 'href_todo': path_obj.get_translate_url(state='incomplete', **kwargs),\n 'href_sugg': path_obj.get_translate_url(state='suggestions', **kwargs),\n 'href_critical': path_obj.get_critical_url(**kwargs),\n 'title': path_obj.name,\n 'code': path_obj.code,\n 'is_disabled': getattr(path_obj, 'disabled', False),\n }\n\n\ndef make_directory_item(directory, **filters):\n item = make_generic_item(directory, **filters)\n item.update({\n 'icon': 'folder',\n })\n return item\n\n\ndef make_store_item(store):\n item = make_generic_item(store)\n item.update({\n 'icon': 'file',\n })\n return item\n\n\ndef get_parent(path_obj):\n \"\"\"Retrieves a representation of the parent object.\n\n :param path_obj: either a `Directory` or Store` instance.\n \"\"\"\n parent_dir = path_obj.parent\n\n if parent_dir.is_project():\n return None\n\n if parent_dir.is_language():\n label = _('Back to language')\n else:\n label = _('Back to parent folder')\n\n return {\n 'title': label,\n 'href': parent_dir.get_absolute_url()\n }\n\n\ndef make_project_item(translation_project):\n item = make_generic_item(translation_project)\n item.update({\n 'icon': 'project',\n 'title': translation_project.project.name,\n })\n return item\n\n\ndef make_language_item(translation_project):\n item = make_generic_item(translation_project)\n item.update({\n 'icon': 'language',\n 'title': translation_project.language.name,\n })\n return item\n\n\ndef make_xlanguage_item(resource_obj):\n translation_project = resource_obj.translation_project\n item = make_generic_item(resource_obj)\n item.update({\n 'icon': 'language',\n 'code': translation_project.language.code,\n 'title': translation_project.language.name,\n })\n return item\n\n\ndef make_project_list_item(project):\n item = make_generic_item(project)\n item.update({\n 'icon': 'project',\n 'title': project.fullname,\n })\n return item\n", "path": "pootle/core/browser.py"}]}
| 1,717 | 208 |
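The patch only reorders entries inside `HEADING_CHOICES`, and that alone changes what users see because `get_table_headings()` filters that list in declaration order: the returned headings — and therefore the rendered stat columns and action links — follow `HEADING_CHOICES`, not the order of the `choices` argument passed by callers. A small stand-alone sketch of that behaviour, with a trimmed stand-in for the real list:

```python
# Trimmed stand-in for pootle.core.browser.HEADING_CHOICES (ids only).
HEADING_CHOICES = [
    {'id': 'critical'},
    {'id': 'suggestions'},
    {'id': 'need-translation'},
]


def get_table_headings(choices):
    """Same logic as browser.py: keep declaration order, drop unwanted ids."""
    return filter(lambda x: x['id'] in choices, HEADING_CHOICES)


# The caller's ordering is ignored; output follows HEADING_CHOICES.
print([h['id'] for h in get_table_headings(['need-translation', 'critical', 'suggestions'])])
# -> ['critical', 'suggestions', 'need-translation']
```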
gh_patches_debug_23732
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-4814
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Test the upcoming pip dependency resolver in CI
From [email protected], about pip 20.2:
> NOTICE: This release includes the beta of the next-generation dependency resolver. It is significantly stricter and more consistent when it receives incompatible instructions, and reduces support for certain kinds of constraints files, so some workarounds and workflows may break. Please test it with the `--use-feature=2020-resolver` flag. Please see our guide on how to test and migrate, and how to report issues
> <https://pip.pypa.io/en/latest/user_guide/#changes-to-the-pip-dependency-resolver-in-20-2-2020>.
>
> The new dependency resolver is *off by default* because it is *not yet
> ready for everyday use*.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from os.path import dirname, join
2 from pkg_resources import parse_version
3 from setuptools import setup, find_packages, __version__ as setuptools_version
4
5
6 with open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f:
7 version = f.read().decode('ascii').strip()
8
9
10 def has_environment_marker_platform_impl_support():
11 """Code extracted from 'pytest/setup.py'
12 https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31
13
14 The first known release to support environment marker with range operators
15 it is 18.5, see:
16 https://setuptools.readthedocs.io/en/latest/history.html#id235
17 """
18 return parse_version(setuptools_version) >= parse_version('18.5')
19
20
21 install_requires = [
22 'Twisted>=17.9.0',
23 'cryptography>=2.0',
24 'cssselect>=0.9.1',
25 'itemloaders>=1.0.1',
26 'parsel>=1.5.0',
27 'PyDispatcher>=2.0.5',
28 'pyOpenSSL>=16.2.0',
29 'queuelib>=1.4.2',
30 'service_identity>=16.0.0',
31 'w3lib>=1.17.0',
32 'zope.interface>=4.1.3',
33 'protego>=0.1.15',
34 'itemadapter>=0.1.0',
35 ]
36 extras_require = {}
37
38 if has_environment_marker_platform_impl_support():
39 extras_require[':platform_python_implementation == "CPython"'] = [
40 'lxml>=3.5.0',
41 ]
42 extras_require[':platform_python_implementation == "PyPy"'] = [
43 # Earlier lxml versions are affected by
44 # https://foss.heptapod.net/pypy/pypy/-/issues/2498,
45 # which was fixed in Cython 0.26, released on 2017-06-19, and used to
46 # generate the C headers of lxml release tarballs published since then, the
47 # first of which was:
48 'lxml>=4.0.0',
49 'PyPyDispatcher>=2.1.0',
50 ]
51 else:
52 install_requires.append('lxml>=3.5.0')
53
54
55 setup(
56 name='Scrapy',
57 version=version,
58 url='https://scrapy.org',
59 project_urls={
60 'Documentation': 'https://docs.scrapy.org/',
61 'Source': 'https://github.com/scrapy/scrapy',
62 'Tracker': 'https://github.com/scrapy/scrapy/issues',
63 },
64 description='A high-level Web Crawling and Web Scraping framework',
65 long_description=open('README.rst').read(),
66 author='Scrapy developers',
67 maintainer='Pablo Hoffman',
68 maintainer_email='[email protected]',
69 license='BSD',
70 packages=find_packages(exclude=('tests', 'tests.*')),
71 include_package_data=True,
72 zip_safe=False,
73 entry_points={
74 'console_scripts': ['scrapy = scrapy.cmdline:execute']
75 },
76 classifiers=[
77 'Framework :: Scrapy',
78 'Development Status :: 5 - Production/Stable',
79 'Environment :: Console',
80 'Intended Audience :: Developers',
81 'License :: OSI Approved :: BSD License',
82 'Operating System :: OS Independent',
83 'Programming Language :: Python',
84 'Programming Language :: Python :: 3',
85 'Programming Language :: Python :: 3.6',
86 'Programming Language :: Python :: 3.7',
87 'Programming Language :: Python :: 3.8',
88 'Programming Language :: Python :: Implementation :: CPython',
89 'Programming Language :: Python :: Implementation :: PyPy',
90 'Topic :: Internet :: WWW/HTTP',
91 'Topic :: Software Development :: Libraries :: Application Frameworks',
92 'Topic :: Software Development :: Libraries :: Python Modules',
93 ],
94 python_requires='>=3.6',
95 install_requires=install_requires,
96 extras_require=extras_require,
97 )
98
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,7 +24,6 @@
'cssselect>=0.9.1',
'itemloaders>=1.0.1',
'parsel>=1.5.0',
- 'PyDispatcher>=2.0.5',
'pyOpenSSL>=16.2.0',
'queuelib>=1.4.2',
'service_identity>=16.0.0',
@@ -34,11 +33,12 @@
'itemadapter>=0.1.0',
]
extras_require = {}
-
+cpython_dependencies = [
+ 'lxml>=3.5.0',
+ 'PyDispatcher>=2.0.5',
+]
if has_environment_marker_platform_impl_support():
- extras_require[':platform_python_implementation == "CPython"'] = [
- 'lxml>=3.5.0',
- ]
+ extras_require[':platform_python_implementation == "CPython"'] = cpython_dependencies
extras_require[':platform_python_implementation == "PyPy"'] = [
# Earlier lxml versions are affected by
# https://foss.heptapod.net/pypy/pypy/-/issues/2498,
@@ -49,7 +49,7 @@
'PyPyDispatcher>=2.1.0',
]
else:
- install_requires.append('lxml>=3.5.0')
+ install_requires.extend(cpython_dependencies)
setup(
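As a side note on the marker syntax used above, the `packaging` library (vendored by pip and also installable from PyPI) can evaluate such markers directly; this is only an illustration of how the condition resolves on a given interpreter, not part of the patch:

```python
# Evaluates the environment marker on the running interpreter:
# True on CPython, False on PyPy.
from packaging.markers import Marker

cpython_only = Marker('platform_python_implementation == "CPython"')
print(cpython_only.evaluate())
```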
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,7 +24,6 @@\n 'cssselect>=0.9.1',\n 'itemloaders>=1.0.1',\n 'parsel>=1.5.0',\n- 'PyDispatcher>=2.0.5',\n 'pyOpenSSL>=16.2.0',\n 'queuelib>=1.4.2',\n 'service_identity>=16.0.0',\n@@ -34,11 +33,12 @@\n 'itemadapter>=0.1.0',\n ]\n extras_require = {}\n-\n+cpython_dependencies = [\n+ 'lxml>=3.5.0',\n+ 'PyDispatcher>=2.0.5',\n+]\n if has_environment_marker_platform_impl_support():\n- extras_require[':platform_python_implementation == \"CPython\"'] = [\n- 'lxml>=3.5.0',\n- ]\n+ extras_require[':platform_python_implementation == \"CPython\"'] = cpython_dependencies\n extras_require[':platform_python_implementation == \"PyPy\"'] = [\n # Earlier lxml versions are affected by\n # https://foss.heptapod.net/pypy/pypy/-/issues/2498,\n@@ -49,7 +49,7 @@\n 'PyPyDispatcher>=2.1.0',\n ]\n else:\n- install_requires.append('lxml>=3.5.0')\n+ install_requires.extend(cpython_dependencies)\n \n \n setup(\n", "issue": "Test the upcoming pip dependency resolver in CI\nFrom [email protected], about pip 20.2:\r\n\r\n> NOTICE: This release includes the beta of the next-generation dependency resolver. It is significantly stricter and more consistent when it receives incompatible instructions, and reduces support for certain kinds of constraints files, so some workarounds and workflows may break. Please test it with the `--use-feature=2020-resolver` flag. Please see our guide on how to test and migrate, and how to report issues\r\n<https://pip.pypa.io/en/latest/user_guide/#changes-to-the-pip-dependency-resolver-in-20-2-2020>.\r\n> \r\n> The new dependency resolver is *off by default* because it is *not yet\r\nready for everyday use*.\n", "before_files": [{"content": "from os.path import dirname, join\nfrom pkg_resources import parse_version\nfrom setuptools import setup, find_packages, __version__ as setuptools_version\n\n\nwith open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f:\n version = f.read().decode('ascii').strip()\n\n\ndef has_environment_marker_platform_impl_support():\n \"\"\"Code extracted from 'pytest/setup.py'\n https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31\n\n The first known release to support environment marker with range operators\n it is 18.5, see:\n https://setuptools.readthedocs.io/en/latest/history.html#id235\n \"\"\"\n return parse_version(setuptools_version) >= parse_version('18.5')\n\n\ninstall_requires = [\n 'Twisted>=17.9.0',\n 'cryptography>=2.0',\n 'cssselect>=0.9.1',\n 'itemloaders>=1.0.1',\n 'parsel>=1.5.0',\n 'PyDispatcher>=2.0.5',\n 'pyOpenSSL>=16.2.0',\n 'queuelib>=1.4.2',\n 'service_identity>=16.0.0',\n 'w3lib>=1.17.0',\n 'zope.interface>=4.1.3',\n 'protego>=0.1.15',\n 'itemadapter>=0.1.0',\n]\nextras_require = {}\n\nif has_environment_marker_platform_impl_support():\n extras_require[':platform_python_implementation == \"CPython\"'] = [\n 'lxml>=3.5.0',\n ]\n extras_require[':platform_python_implementation == \"PyPy\"'] = [\n # Earlier lxml versions are affected by\n # https://foss.heptapod.net/pypy/pypy/-/issues/2498,\n # which was fixed in Cython 0.26, released on 2017-06-19, and used to\n # generate the C headers of lxml release tarballs published since then, the\n # first of which was:\n 'lxml>=4.0.0',\n 'PyPyDispatcher>=2.1.0',\n ]\nelse:\n install_requires.append('lxml>=3.5.0')\n\n\nsetup(\n name='Scrapy',\n version=version,\n url='https://scrapy.org',\n project_urls={\n 'Documentation': 'https://docs.scrapy.org/',\n 'Source': 'https://github.com/scrapy/scrapy',\n 
'Tracker': 'https://github.com/scrapy/scrapy/issues',\n },\n description='A high-level Web Crawling and Web Scraping framework',\n long_description=open('README.rst').read(),\n author='Scrapy developers',\n maintainer='Pablo Hoffman',\n maintainer_email='[email protected]',\n license='BSD',\n packages=find_packages(exclude=('tests', 'tests.*')),\n include_package_data=True,\n zip_safe=False,\n entry_points={\n 'console_scripts': ['scrapy = scrapy.cmdline:execute']\n },\n classifiers=[\n 'Framework :: Scrapy',\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.6',\n install_requires=install_requires,\n extras_require=extras_require,\n)\n", "path": "setup.py"}], "after_files": [{"content": "from os.path import dirname, join\nfrom pkg_resources import parse_version\nfrom setuptools import setup, find_packages, __version__ as setuptools_version\n\n\nwith open(join(dirname(__file__), 'scrapy/VERSION'), 'rb') as f:\n version = f.read().decode('ascii').strip()\n\n\ndef has_environment_marker_platform_impl_support():\n \"\"\"Code extracted from 'pytest/setup.py'\n https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31\n\n The first known release to support environment marker with range operators\n it is 18.5, see:\n https://setuptools.readthedocs.io/en/latest/history.html#id235\n \"\"\"\n return parse_version(setuptools_version) >= parse_version('18.5')\n\n\ninstall_requires = [\n 'Twisted>=17.9.0',\n 'cryptography>=2.0',\n 'cssselect>=0.9.1',\n 'itemloaders>=1.0.1',\n 'parsel>=1.5.0',\n 'pyOpenSSL>=16.2.0',\n 'queuelib>=1.4.2',\n 'service_identity>=16.0.0',\n 'w3lib>=1.17.0',\n 'zope.interface>=4.1.3',\n 'protego>=0.1.15',\n 'itemadapter>=0.1.0',\n]\nextras_require = {}\ncpython_dependencies = [\n 'lxml>=3.5.0',\n 'PyDispatcher>=2.0.5',\n]\nif has_environment_marker_platform_impl_support():\n extras_require[':platform_python_implementation == \"CPython\"'] = cpython_dependencies\n extras_require[':platform_python_implementation == \"PyPy\"'] = [\n # Earlier lxml versions are affected by\n # https://foss.heptapod.net/pypy/pypy/-/issues/2498,\n # which was fixed in Cython 0.26, released on 2017-06-19, and used to\n # generate the C headers of lxml release tarballs published since then, the\n # first of which was:\n 'lxml>=4.0.0',\n 'PyPyDispatcher>=2.1.0',\n ]\nelse:\n install_requires.extend(cpython_dependencies)\n\n\nsetup(\n name='Scrapy',\n version=version,\n url='https://scrapy.org',\n project_urls={\n 'Documentation': 'https://docs.scrapy.org/',\n 'Source': 'https://github.com/scrapy/scrapy',\n 'Tracker': 'https://github.com/scrapy/scrapy/issues',\n },\n description='A high-level Web Crawling and Web Scraping framework',\n long_description=open('README.rst').read(),\n author='Scrapy developers',\n maintainer='Pablo Hoffman',\n maintainer_email='[email protected]',\n license='BSD',\n 
packages=find_packages(exclude=('tests', 'tests.*')),\n include_package_data=True,\n zip_safe=False,\n entry_points={\n 'console_scripts': ['scrapy = scrapy.cmdline:execute']\n },\n classifiers=[\n 'Framework :: Scrapy',\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.6',\n install_requires=install_requires,\n extras_require=extras_require,\n)\n", "path": "setup.py"}]}
| 1,504 | 345 |
gh_patches_debug_1705
|
rasdani/github-patches
|
git_diff
|
ManimCommunity__manim-2013
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve TransformMatchingAbstractBase to support different types of objects
## Enhancement proposal
<!-- Add a clear and concise description of your enhancement proposal. In particular,
if your enhancement introduces changes to the API, illustrate them with
(fictional) code examples. -->
The first line of the `TransformMatchingAbstractBase.__init__()` method contains an `assert` statement to make sure the `mobjects` are of the same type. This implementation makes using the class with submobjects of `Tex` or `MathTex` a bit difficult, and it just doesn't look right. Here's what I mean:
I've made this class:
```py
class TransformTest(Scene):
def construct(self):
text1 = Tex("What's that? ", "the ", "morse code")
text2 = Tex("Look! ", "here come dots").next_to(text1, DOWN)
self.add(text1, text2[0])
self.wait(0.5)
        #self.play(TransformMatchingShapes(text1[1:], text2[1])) **This does not work (throws assertion error)**
self.play(TransformMatchingShapes(text1[1:], text2[1:])) #**This does**
```
The line that is commented throws an `assertion error` because the first object is a `VGroup` and the second one is a `SingleStringMathTex`. But the line after that does work. So basically that `assert` statement is preventing the user from making certain transforms which are (and should be IMO) possible. If some extra logic is added to `TransformMatchingAbstractBase`, this kind of issue would be solved.
Here's another example:
```py
class TransformTest(Scene):
def construct(self):
text3 = Tex("Result", "eleven", " plus", " two", color=RED).next_to(text2, DOWN)
text4 = Tex("twelve plus one", color=RED).next_to(text3, DOWN)
self.add(text3, text4[0])
self.wait(0.5)
#self.play(TransformMatchingShapes(text3[1:], text4)) **This doesn't work**
self.play(TransformMatchingShapes(text3[1:], text4[:])) #**This does**
self.wait(0.5)
```
Clearly the Transforms are possible, but I don't think the user should need to add weird syntax that isn't necessary just so the program doesn't spit out an error.
## Additional comments
<!-- Add further context that you think might be relevant. -->
I'm using `TransformMatchingShapes` here but of course this also affects `TransformMatchingTex` since they share the same parent class.
--- END ISSUE ---
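For context, the "extra logic" requested in the issue could be as small as swapping the exact-type assertion for a looser sanity check. The applied patch (shown further down) simply removes the assertion; the sketch below is a purely hypothetical alternative, not the actual fix:

```python
# Hypothetical relaxed guard (illustration only): accept any pair of mobjects
# instead of requiring identical concrete types.
from manim import Mobject

def _check_inputs(mobject, target_mobject):
    if not isinstance(mobject, Mobject) or not isinstance(target_mobject, Mobject):
        raise TypeError("TransformMatching* animations expect Mobject instances")
```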
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `manim/animation/transform_matching_parts.py`
Content:
```
1 """Animations that try to transform Mobjects while keeping track of identical parts."""
2
3 __all__ = ["TransformMatchingShapes", "TransformMatchingTex"]
4
5 from typing import TYPE_CHECKING, List, Optional
6
7 import numpy as np
8
9 from .._config import config
10 from ..mobject.mobject import Group, Mobject
11 from ..mobject.opengl_mobject import OpenGLGroup, OpenGLMobject
12 from ..mobject.types.opengl_vectorized_mobject import OpenGLVGroup, OpenGLVMobject
13 from ..mobject.types.vectorized_mobject import VGroup, VMobject
14 from .composition import AnimationGroup
15 from .fading import FadeIn, FadeOut
16 from .transform import FadeTransformPieces, Transform
17
18 if TYPE_CHECKING:
19 from ..scene.scene import Scene
20
21
22 class TransformMatchingAbstractBase(AnimationGroup):
23 """Abstract base class for transformations that keep track of matching parts.
24
25 Subclasses have to implement the two static methods
26 :meth:`~.TransformMatchingAbstractBase.get_mobject_parts` and
27 :meth:`~.TransformMatchingAbstractBase.get_mobject_key`.
28
29 Basically, this transformation first maps all submobjects returned
30 by the ``get_mobject_parts`` method to certain keys by applying the
31 ``get_mobject_key`` method. Then, submobjects with matching keys
32 are transformed into each other.
33
34 Parameters
35 ----------
36 mobject
37 The starting :class:`~.Mobject`.
38 target_mobject
39 The target :class:`~.Mobject`.
40 transform_mismatches
41 Controls whether submobjects without a matching key are transformed
42 into each other by using :class:`~.Transform`. Default: ``False``.
43 fade_transform_mismatches
44 Controls whether submobjects without a matching key are transformed
45 into each other by using :class:`~.FadeTransform`. Default: ``False``.
46 key_map
47 Optional. A dictionary mapping keys belonging to some of the starting mobject's
48 submobjects (i.e., the return values of the ``get_mobject_key`` method)
49 to some keys belonging to the target mobject's submobjects that should
50 be transformed although the keys don't match.
51 kwargs
52 All further keyword arguments are passed to the submobject transformations.
53
54
55 Note
56 ----
57 If neither ``transform_mismatches`` nor ``fade_transform_mismatches``
58 are set to ``True``, submobjects without matching keys in the starting
59 mobject are faded out in the direction of the unmatched submobjects in
60 the target mobject, and unmatched submobjects in the target mobject
61 are faded in from the direction of the unmatched submobjects in the
62 start mobject.
63
64 """
65
66 def __init__(
67 self,
68 mobject: "Mobject",
69 target_mobject: "Mobject",
70 transform_mismatches: bool = False,
71 fade_transform_mismatches: bool = False,
72 key_map: Optional[dict] = None,
73 **kwargs
74 ):
75 assert type(mobject) is type(target_mobject)
76
77 if isinstance(mobject, OpenGLVMobject):
78 group_type = OpenGLVGroup
79 elif isinstance(mobject, OpenGLMobject):
80 group_type = OpenGLGroup
81 elif isinstance(mobject, VMobject):
82 group_type = VGroup
83 else:
84 group_type = Group
85
86 source_map = self.get_shape_map(mobject)
87 target_map = self.get_shape_map(target_mobject)
88
89 if key_map is None:
90 key_map = {}
91
92 # Create two mobjects whose submobjects all match each other
93 # according to whatever keys are used for source_map and
94 # target_map
95 transform_source = group_type()
96 transform_target = group_type()
97 kwargs["final_alpha_value"] = 0
98 for key in set(source_map).intersection(target_map):
99 transform_source.add(source_map[key])
100 transform_target.add(target_map[key])
101 anims = [Transform(transform_source, transform_target, **kwargs)]
102 # User can manually specify when one part should transform
103 # into another despite not matching by using key_map
104 key_mapped_source = group_type()
105 key_mapped_target = group_type()
106 for key1, key2 in key_map.items():
107 if key1 in source_map and key2 in target_map:
108 key_mapped_source.add(source_map[key1])
109 key_mapped_target.add(target_map[key2])
110 source_map.pop(key1, None)
111 target_map.pop(key2, None)
112 if len(key_mapped_source) > 0:
113 anims.append(
114 FadeTransformPieces(key_mapped_source, key_mapped_target, **kwargs),
115 )
116
117 fade_source = group_type()
118 fade_target = group_type()
119 for key in set(source_map).difference(target_map):
120 fade_source.add(source_map[key])
121 for key in set(target_map).difference(source_map):
122 fade_target.add(target_map[key])
123
124 if transform_mismatches:
125 if "replace_mobject_with_target_in_scene" not in kwargs:
126 kwargs["replace_mobject_with_target_in_scene"] = True
127 anims.append(Transform(fade_source, fade_target, **kwargs))
128 elif fade_transform_mismatches:
129 anims.append(FadeTransformPieces(fade_source, fade_target, **kwargs))
130 else:
131 anims.append(FadeOut(fade_source, target_position=fade_target, **kwargs))
132 anims.append(
133 FadeIn(fade_target.copy(), target_position=fade_target, **kwargs),
134 )
135
136 super().__init__(*anims)
137
138 self.to_remove = mobject
139 self.to_add = target_mobject
140
141 def get_shape_map(self, mobject: "Mobject") -> dict:
142 shape_map = {}
143 for sm in self.get_mobject_parts(mobject):
144 key = self.get_mobject_key(sm)
145 if key not in shape_map:
146 if config["renderer"] == "opengl":
147 shape_map[key] = OpenGLVGroup()
148 else:
149 shape_map[key] = VGroup()
150 shape_map[key].add(sm)
151 return shape_map
152
153 def clean_up_from_scene(self, scene: "Scene") -> None:
154 for anim in self.animations:
155 anim.interpolate(0)
156 scene.remove(self.mobject)
157 scene.remove(self.to_remove)
158 scene.add(self.to_add)
159
160 @staticmethod
161 def get_mobject_parts(mobject: "Mobject"):
162 raise NotImplementedError("To be implemented in subclass.")
163
164 @staticmethod
165 def get_mobject_key(mobject: "Mobject"):
166 raise NotImplementedError("To be implemented in subclass.")
167
168
169 class TransformMatchingShapes(TransformMatchingAbstractBase):
170 """An animation trying to transform groups by matching the shape
171 of their submobjects.
172
173 Two submobjects match if the hash of their point coordinates after
174 normalization (i.e., after translation to the origin, fixing the submobject
175 height at 1 unit, and rounding the coordinates to three decimal places)
176 matches.
177
178 See also
179 --------
180 :class:`~.TransformMatchingAbstractBase`
181
182 Examples
183 --------
184
185 .. manim:: Anagram
186
187 class Anagram(Scene):
188 def construct(self):
189 src = Text("the morse code")
190 tar = Text("here come dots")
191 self.play(Write(src))
192 self.wait(0.5)
193 self.play(TransformMatchingShapes(src, tar, path_arc=PI/2))
194 self.wait(0.5)
195
196 """
197
198 def __init__(
199 self,
200 mobject: "Mobject",
201 target_mobject: "Mobject",
202 transform_mismatches: bool = False,
203 fade_transform_mismatches: bool = False,
204 key_map: Optional[dict] = None,
205 **kwargs
206 ):
207 super().__init__(
208 mobject,
209 target_mobject,
210 transform_mismatches=transform_mismatches,
211 fade_transform_mismatches=fade_transform_mismatches,
212 key_map=key_map,
213 **kwargs
214 )
215
216 @staticmethod
217 def get_mobject_parts(mobject: "Mobject") -> List["Mobject"]:
218 return mobject.family_members_with_points()
219
220 @staticmethod
221 def get_mobject_key(mobject: "Mobject") -> int:
222 mobject.save_state()
223 mobject.center()
224 mobject.set_height(1)
225 result = hash(np.round(mobject.points, 3).tobytes())
226 mobject.restore()
227 return result
228
229
230 class TransformMatchingTex(TransformMatchingAbstractBase):
231 """A transformation trying to transform rendered LaTeX strings.
232
233 Two submobjects match if their ``tex_string`` matches.
234
235 See also
236 --------
237 :class:`~.TransformMatchingAbstractBase`
238
239 Examples
240 --------
241
242 .. manim:: MatchingEquationParts
243
244 class MatchingEquationParts(Scene):
245 def construct(self):
246 eq1 = MathTex("{{a^2}} + {{b^2}} = {{c^2}}")
247 eq2 = MathTex("{{a^2}} = {{c^2}} - {{b^2}}")
248 self.add(eq1)
249 self.wait(0.5)
250 self.play(TransformMatchingTex(eq1, eq2))
251 self.wait(0.5)
252
253 """
254
255 def __init__(
256 self,
257 mobject: "Mobject",
258 target_mobject: "Mobject",
259 transform_mismatches: bool = False,
260 fade_transform_mismatches: bool = False,
261 key_map: Optional[dict] = None,
262 **kwargs
263 ):
264 assert hasattr(mobject, "tex_string")
265 assert hasattr(target_mobject, "tex_string")
266 super().__init__(
267 mobject,
268 target_mobject,
269 transform_mismatches=transform_mismatches,
270 fade_transform_mismatches=fade_transform_mismatches,
271 key_map=key_map,
272 **kwargs
273 )
274
275 @staticmethod
276 def get_mobject_parts(mobject: "Mobject") -> List["Mobject"]:
277 return mobject.submobjects
278
279 @staticmethod
280 def get_mobject_key(mobject: "Mobject") -> str:
281 return mobject.tex_string
282
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/manim/animation/transform_matching_parts.py b/manim/animation/transform_matching_parts.py
--- a/manim/animation/transform_matching_parts.py
+++ b/manim/animation/transform_matching_parts.py
@@ -72,7 +72,6 @@
key_map: Optional[dict] = None,
**kwargs
):
- assert type(mobject) is type(target_mobject)
if isinstance(mobject, OpenGLVMobject):
group_type = OpenGLVGroup
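With the assertion gone, the mixed-type call from the issue should construct without raising. A minimal scene based on the issue's own snippet, shown only to restate the expected behaviour:

```python
from manim import DOWN, Scene, Tex, TransformMatchingShapes

class MixedTypesTransform(Scene):
    def construct(self):
        text1 = Tex("What's that? ", "the ", "morse code")
        text2 = Tex("Look! ", "here come dots").next_to(text1, DOWN)
        self.add(text1, text2[0])
        # Source is a VGroup slice, target a single submobject: no more
        # assertion error once the exact-type check is removed.
        self.play(TransformMatchingShapes(text1[1:], text2[1]))
```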
|
{"golden_diff": "diff --git a/manim/animation/transform_matching_parts.py b/manim/animation/transform_matching_parts.py\n--- a/manim/animation/transform_matching_parts.py\n+++ b/manim/animation/transform_matching_parts.py\n@@ -72,7 +72,6 @@\n key_map: Optional[dict] = None,\n **kwargs\n ):\n- assert type(mobject) is type(target_mobject)\n \n if isinstance(mobject, OpenGLVMobject):\n group_type = OpenGLVGroup\n", "issue": "Improve TransformMatchingAbstractBase to support different types of objects\n## Enhancement proposal\r\n<!-- Add a clear and concise description of your enhancement proposal. In particular,\r\n if your enhancement introduces changes to the API, illustrate them with\r\n (fictional) code examples. -->\r\n\r\nThe first line of `TransformMatchingAbstractBase __init__()` method contains an `assert` keyword to make sure the `mobjects` are of the same type. This implementation makes using the class with submobjects of `Tex` or `MathTex` a bit difficult and it just doesn't look right. Here's what I mean: \r\n\r\nI've made this class:\r\n\r\n```py\r\nclass TransformTest(Scene):\r\n def construct(self):\r\n text1 = Tex(\"What's that? \", \"the \", \"morse code\")\r\n text2 = Tex(\"Look! \", \"here come dots\").next_to(text1, DOWN)\r\n\r\n self.add(text1, text2[0])\r\n self.wait(0.5)\r\n #self.play(TransformMatchingShapes(text1[1:], text2[1])) **This does not works (throws assertion error)**\r\n self.play(TransformMatchingShapes(text1[1:], text2[1:])) #**This does**\r\n```\r\nThe line that is commented throws an `assertion error` because the first object is a `VGroup` and the second one is a `SingleStringMathTex`. But the line after that does work. So basically that `assert` statement is preventing the user from making certain transforms which are (and should be IMO) possible. If some extra logic is added to `TransformMatchingAbstractBase`, this kind of issue would be solved.\r\n\r\nHere's another example:\r\n\r\n```py\r\nclass TransformTest(Scene):\r\n def construct(self):\r\n\r\n text3 = Tex(\"Result\", \"eleven\", \" plus\", \" two\", color=RED).next_to(text2, DOWN)\r\n text4 = Tex(\"twelve plus one\", color=RED).next_to(text3, DOWN)\r\n\r\n self.add(text3, text4[0])\r\n self.wait(0.5)\r\n #self.play(TransformMatchingShapes(text3[1:], text4)) **This doesn't work**\r\n self.play(TransformMatchingShapes(text3[1:], text4[:])) #**This does**\r\n self.wait(0.5)\r\n```\r\n\r\nClearly the Transfroms are possible, but I don't think the user should need to add weird syntax that isn't necessary just so the program doesn't spit out an error.\r\n\r\n## Additional comments\r\n<!-- Add further context that you think might be relevant. 
-->\r\nI'm using `TransformMatchingShapes` here but of course this also affects `TransformMatchingTex` since they share the same parent class.\r\n\n", "before_files": [{"content": "\"\"\"Animations that try to transform Mobjects while keeping track of identical parts.\"\"\"\n\n__all__ = [\"TransformMatchingShapes\", \"TransformMatchingTex\"]\n\nfrom typing import TYPE_CHECKING, List, Optional\n\nimport numpy as np\n\nfrom .._config import config\nfrom ..mobject.mobject import Group, Mobject\nfrom ..mobject.opengl_mobject import OpenGLGroup, OpenGLMobject\nfrom ..mobject.types.opengl_vectorized_mobject import OpenGLVGroup, OpenGLVMobject\nfrom ..mobject.types.vectorized_mobject import VGroup, VMobject\nfrom .composition import AnimationGroup\nfrom .fading import FadeIn, FadeOut\nfrom .transform import FadeTransformPieces, Transform\n\nif TYPE_CHECKING:\n from ..scene.scene import Scene\n\n\nclass TransformMatchingAbstractBase(AnimationGroup):\n \"\"\"Abstract base class for transformations that keep track of matching parts.\n\n Subclasses have to implement the two static methods\n :meth:`~.TransformMatchingAbstractBase.get_mobject_parts` and\n :meth:`~.TransformMatchingAbstractBase.get_mobject_key`.\n\n Basically, this transformation first maps all submobjects returned\n by the ``get_mobject_parts`` method to certain keys by applying the\n ``get_mobject_key`` method. Then, submobjects with matching keys\n are transformed into each other.\n\n Parameters\n ----------\n mobject\n The starting :class:`~.Mobject`.\n target_mobject\n The target :class:`~.Mobject`.\n transform_mismatches\n Controls whether submobjects without a matching key are transformed\n into each other by using :class:`~.Transform`. Default: ``False``.\n fade_transform_mismatches\n Controls whether submobjects without a matching key are transformed\n into each other by using :class:`~.FadeTransform`. Default: ``False``.\n key_map\n Optional. 
A dictionary mapping keys belonging to some of the starting mobject's\n submobjects (i.e., the return values of the ``get_mobject_key`` method)\n to some keys belonging to the target mobject's submobjects that should\n be transformed although the keys don't match.\n kwargs\n All further keyword arguments are passed to the submobject transformations.\n\n\n Note\n ----\n If neither ``transform_mismatches`` nor ``fade_transform_mismatches``\n are set to ``True``, submobjects without matching keys in the starting\n mobject are faded out in the direction of the unmatched submobjects in\n the target mobject, and unmatched submobjects in the target mobject\n are faded in from the direction of the unmatched submobjects in the\n start mobject.\n\n \"\"\"\n\n def __init__(\n self,\n mobject: \"Mobject\",\n target_mobject: \"Mobject\",\n transform_mismatches: bool = False,\n fade_transform_mismatches: bool = False,\n key_map: Optional[dict] = None,\n **kwargs\n ):\n assert type(mobject) is type(target_mobject)\n\n if isinstance(mobject, OpenGLVMobject):\n group_type = OpenGLVGroup\n elif isinstance(mobject, OpenGLMobject):\n group_type = OpenGLGroup\n elif isinstance(mobject, VMobject):\n group_type = VGroup\n else:\n group_type = Group\n\n source_map = self.get_shape_map(mobject)\n target_map = self.get_shape_map(target_mobject)\n\n if key_map is None:\n key_map = {}\n\n # Create two mobjects whose submobjects all match each other\n # according to whatever keys are used for source_map and\n # target_map\n transform_source = group_type()\n transform_target = group_type()\n kwargs[\"final_alpha_value\"] = 0\n for key in set(source_map).intersection(target_map):\n transform_source.add(source_map[key])\n transform_target.add(target_map[key])\n anims = [Transform(transform_source, transform_target, **kwargs)]\n # User can manually specify when one part should transform\n # into another despite not matching by using key_map\n key_mapped_source = group_type()\n key_mapped_target = group_type()\n for key1, key2 in key_map.items():\n if key1 in source_map and key2 in target_map:\n key_mapped_source.add(source_map[key1])\n key_mapped_target.add(target_map[key2])\n source_map.pop(key1, None)\n target_map.pop(key2, None)\n if len(key_mapped_source) > 0:\n anims.append(\n FadeTransformPieces(key_mapped_source, key_mapped_target, **kwargs),\n )\n\n fade_source = group_type()\n fade_target = group_type()\n for key in set(source_map).difference(target_map):\n fade_source.add(source_map[key])\n for key in set(target_map).difference(source_map):\n fade_target.add(target_map[key])\n\n if transform_mismatches:\n if \"replace_mobject_with_target_in_scene\" not in kwargs:\n kwargs[\"replace_mobject_with_target_in_scene\"] = True\n anims.append(Transform(fade_source, fade_target, **kwargs))\n elif fade_transform_mismatches:\n anims.append(FadeTransformPieces(fade_source, fade_target, **kwargs))\n else:\n anims.append(FadeOut(fade_source, target_position=fade_target, **kwargs))\n anims.append(\n FadeIn(fade_target.copy(), target_position=fade_target, **kwargs),\n )\n\n super().__init__(*anims)\n\n self.to_remove = mobject\n self.to_add = target_mobject\n\n def get_shape_map(self, mobject: \"Mobject\") -> dict:\n shape_map = {}\n for sm in self.get_mobject_parts(mobject):\n key = self.get_mobject_key(sm)\n if key not in shape_map:\n if config[\"renderer\"] == \"opengl\":\n shape_map[key] = OpenGLVGroup()\n else:\n shape_map[key] = VGroup()\n shape_map[key].add(sm)\n return shape_map\n\n def clean_up_from_scene(self, scene: 
\"Scene\") -> None:\n for anim in self.animations:\n anim.interpolate(0)\n scene.remove(self.mobject)\n scene.remove(self.to_remove)\n scene.add(self.to_add)\n\n @staticmethod\n def get_mobject_parts(mobject: \"Mobject\"):\n raise NotImplementedError(\"To be implemented in subclass.\")\n\n @staticmethod\n def get_mobject_key(mobject: \"Mobject\"):\n raise NotImplementedError(\"To be implemented in subclass.\")\n\n\nclass TransformMatchingShapes(TransformMatchingAbstractBase):\n \"\"\"An animation trying to transform groups by matching the shape\n of their submobjects.\n\n Two submobjects match if the hash of their point coordinates after\n normalization (i.e., after translation to the origin, fixing the submobject\n height at 1 unit, and rounding the coordinates to three decimal places)\n matches.\n\n See also\n --------\n :class:`~.TransformMatchingAbstractBase`\n\n Examples\n --------\n\n .. manim:: Anagram\n\n class Anagram(Scene):\n def construct(self):\n src = Text(\"the morse code\")\n tar = Text(\"here come dots\")\n self.play(Write(src))\n self.wait(0.5)\n self.play(TransformMatchingShapes(src, tar, path_arc=PI/2))\n self.wait(0.5)\n\n \"\"\"\n\n def __init__(\n self,\n mobject: \"Mobject\",\n target_mobject: \"Mobject\",\n transform_mismatches: bool = False,\n fade_transform_mismatches: bool = False,\n key_map: Optional[dict] = None,\n **kwargs\n ):\n super().__init__(\n mobject,\n target_mobject,\n transform_mismatches=transform_mismatches,\n fade_transform_mismatches=fade_transform_mismatches,\n key_map=key_map,\n **kwargs\n )\n\n @staticmethod\n def get_mobject_parts(mobject: \"Mobject\") -> List[\"Mobject\"]:\n return mobject.family_members_with_points()\n\n @staticmethod\n def get_mobject_key(mobject: \"Mobject\") -> int:\n mobject.save_state()\n mobject.center()\n mobject.set_height(1)\n result = hash(np.round(mobject.points, 3).tobytes())\n mobject.restore()\n return result\n\n\nclass TransformMatchingTex(TransformMatchingAbstractBase):\n \"\"\"A transformation trying to transform rendered LaTeX strings.\n\n Two submobjects match if their ``tex_string`` matches.\n\n See also\n --------\n :class:`~.TransformMatchingAbstractBase`\n\n Examples\n --------\n\n .. 
manim:: MatchingEquationParts\n\n class MatchingEquationParts(Scene):\n def construct(self):\n eq1 = MathTex(\"{{a^2}} + {{b^2}} = {{c^2}}\")\n eq2 = MathTex(\"{{a^2}} = {{c^2}} - {{b^2}}\")\n self.add(eq1)\n self.wait(0.5)\n self.play(TransformMatchingTex(eq1, eq2))\n self.wait(0.5)\n\n \"\"\"\n\n def __init__(\n self,\n mobject: \"Mobject\",\n target_mobject: \"Mobject\",\n transform_mismatches: bool = False,\n fade_transform_mismatches: bool = False,\n key_map: Optional[dict] = None,\n **kwargs\n ):\n assert hasattr(mobject, \"tex_string\")\n assert hasattr(target_mobject, \"tex_string\")\n super().__init__(\n mobject,\n target_mobject,\n transform_mismatches=transform_mismatches,\n fade_transform_mismatches=fade_transform_mismatches,\n key_map=key_map,\n **kwargs\n )\n\n @staticmethod\n def get_mobject_parts(mobject: \"Mobject\") -> List[\"Mobject\"]:\n return mobject.submobjects\n\n @staticmethod\n def get_mobject_key(mobject: \"Mobject\") -> str:\n return mobject.tex_string\n", "path": "manim/animation/transform_matching_parts.py"}], "after_files": [{"content": "\"\"\"Animations that try to transform Mobjects while keeping track of identical parts.\"\"\"\n\n__all__ = [\"TransformMatchingShapes\", \"TransformMatchingTex\"]\n\nfrom typing import TYPE_CHECKING, List, Optional\n\nimport numpy as np\n\nfrom .._config import config\nfrom ..mobject.mobject import Group, Mobject\nfrom ..mobject.opengl_mobject import OpenGLGroup, OpenGLMobject\nfrom ..mobject.types.opengl_vectorized_mobject import OpenGLVGroup, OpenGLVMobject\nfrom ..mobject.types.vectorized_mobject import VGroup, VMobject\nfrom .composition import AnimationGroup\nfrom .fading import FadeIn, FadeOut\nfrom .transform import FadeTransformPieces, Transform\n\nif TYPE_CHECKING:\n from ..scene.scene import Scene\n\n\nclass TransformMatchingAbstractBase(AnimationGroup):\n \"\"\"Abstract base class for transformations that keep track of matching parts.\n\n Subclasses have to implement the two static methods\n :meth:`~.TransformMatchingAbstractBase.get_mobject_parts` and\n :meth:`~.TransformMatchingAbstractBase.get_mobject_key`.\n\n Basically, this transformation first maps all submobjects returned\n by the ``get_mobject_parts`` method to certain keys by applying the\n ``get_mobject_key`` method. Then, submobjects with matching keys\n are transformed into each other.\n\n Parameters\n ----------\n mobject\n The starting :class:`~.Mobject`.\n target_mobject\n The target :class:`~.Mobject`.\n transform_mismatches\n Controls whether submobjects without a matching key are transformed\n into each other by using :class:`~.Transform`. Default: ``False``.\n fade_transform_mismatches\n Controls whether submobjects without a matching key are transformed\n into each other by using :class:`~.FadeTransform`. Default: ``False``.\n key_map\n Optional. 
A dictionary mapping keys belonging to some of the starting mobject's\n submobjects (i.e., the return values of the ``get_mobject_key`` method)\n to some keys belonging to the target mobject's submobjects that should\n be transformed although the keys don't match.\n kwargs\n All further keyword arguments are passed to the submobject transformations.\n\n\n Note\n ----\n If neither ``transform_mismatches`` nor ``fade_transform_mismatches``\n are set to ``True``, submobjects without matching keys in the starting\n mobject are faded out in the direction of the unmatched submobjects in\n the target mobject, and unmatched submobjects in the target mobject\n are faded in from the direction of the unmatched submobjects in the\n start mobject.\n\n \"\"\"\n\n def __init__(\n self,\n mobject: \"Mobject\",\n target_mobject: \"Mobject\",\n transform_mismatches: bool = False,\n fade_transform_mismatches: bool = False,\n key_map: Optional[dict] = None,\n **kwargs\n ):\n\n if isinstance(mobject, OpenGLVMobject):\n group_type = OpenGLVGroup\n elif isinstance(mobject, OpenGLMobject):\n group_type = OpenGLGroup\n elif isinstance(mobject, VMobject):\n group_type = VGroup\n else:\n group_type = Group\n\n source_map = self.get_shape_map(mobject)\n target_map = self.get_shape_map(target_mobject)\n\n if key_map is None:\n key_map = {}\n\n # Create two mobjects whose submobjects all match each other\n # according to whatever keys are used for source_map and\n # target_map\n transform_source = group_type()\n transform_target = group_type()\n kwargs[\"final_alpha_value\"] = 0\n for key in set(source_map).intersection(target_map):\n transform_source.add(source_map[key])\n transform_target.add(target_map[key])\n anims = [Transform(transform_source, transform_target, **kwargs)]\n # User can manually specify when one part should transform\n # into another despite not matching by using key_map\n key_mapped_source = group_type()\n key_mapped_target = group_type()\n for key1, key2 in key_map.items():\n if key1 in source_map and key2 in target_map:\n key_mapped_source.add(source_map[key1])\n key_mapped_target.add(target_map[key2])\n source_map.pop(key1, None)\n target_map.pop(key2, None)\n if len(key_mapped_source) > 0:\n anims.append(\n FadeTransformPieces(key_mapped_source, key_mapped_target, **kwargs),\n )\n\n fade_source = group_type()\n fade_target = group_type()\n for key in set(source_map).difference(target_map):\n fade_source.add(source_map[key])\n for key in set(target_map).difference(source_map):\n fade_target.add(target_map[key])\n\n if transform_mismatches:\n if \"replace_mobject_with_target_in_scene\" not in kwargs:\n kwargs[\"replace_mobject_with_target_in_scene\"] = True\n anims.append(Transform(fade_source, fade_target, **kwargs))\n elif fade_transform_mismatches:\n anims.append(FadeTransformPieces(fade_source, fade_target, **kwargs))\n else:\n anims.append(FadeOut(fade_source, target_position=fade_target, **kwargs))\n anims.append(\n FadeIn(fade_target.copy(), target_position=fade_target, **kwargs),\n )\n\n super().__init__(*anims)\n\n self.to_remove = mobject\n self.to_add = target_mobject\n\n def get_shape_map(self, mobject: \"Mobject\") -> dict:\n shape_map = {}\n for sm in self.get_mobject_parts(mobject):\n key = self.get_mobject_key(sm)\n if key not in shape_map:\n if config[\"renderer\"] == \"opengl\":\n shape_map[key] = OpenGLVGroup()\n else:\n shape_map[key] = VGroup()\n shape_map[key].add(sm)\n return shape_map\n\n def clean_up_from_scene(self, scene: \"Scene\") -> None:\n for anim in 
self.animations:\n anim.interpolate(0)\n scene.remove(self.mobject)\n scene.remove(self.to_remove)\n scene.add(self.to_add)\n\n @staticmethod\n def get_mobject_parts(mobject: \"Mobject\"):\n raise NotImplementedError(\"To be implemented in subclass.\")\n\n @staticmethod\n def get_mobject_key(mobject: \"Mobject\"):\n raise NotImplementedError(\"To be implemented in subclass.\")\n\n\nclass TransformMatchingShapes(TransformMatchingAbstractBase):\n \"\"\"An animation trying to transform groups by matching the shape\n of their submobjects.\n\n Two submobjects match if the hash of their point coordinates after\n normalization (i.e., after translation to the origin, fixing the submobject\n height at 1 unit, and rounding the coordinates to three decimal places)\n matches.\n\n See also\n --------\n :class:`~.TransformMatchingAbstractBase`\n\n Examples\n --------\n\n .. manim:: Anagram\n\n class Anagram(Scene):\n def construct(self):\n src = Text(\"the morse code\")\n tar = Text(\"here come dots\")\n self.play(Write(src))\n self.wait(0.5)\n self.play(TransformMatchingShapes(src, tar, path_arc=PI/2))\n self.wait(0.5)\n\n \"\"\"\n\n def __init__(\n self,\n mobject: \"Mobject\",\n target_mobject: \"Mobject\",\n transform_mismatches: bool = False,\n fade_transform_mismatches: bool = False,\n key_map: Optional[dict] = None,\n **kwargs\n ):\n super().__init__(\n mobject,\n target_mobject,\n transform_mismatches=transform_mismatches,\n fade_transform_mismatches=fade_transform_mismatches,\n key_map=key_map,\n **kwargs\n )\n\n @staticmethod\n def get_mobject_parts(mobject: \"Mobject\") -> List[\"Mobject\"]:\n return mobject.family_members_with_points()\n\n @staticmethod\n def get_mobject_key(mobject: \"Mobject\") -> int:\n mobject.save_state()\n mobject.center()\n mobject.set_height(1)\n result = hash(np.round(mobject.points, 3).tobytes())\n mobject.restore()\n return result\n\n\nclass TransformMatchingTex(TransformMatchingAbstractBase):\n \"\"\"A transformation trying to transform rendered LaTeX strings.\n\n Two submobjects match if their ``tex_string`` matches.\n\n See also\n --------\n :class:`~.TransformMatchingAbstractBase`\n\n Examples\n --------\n\n .. manim:: MatchingEquationParts\n\n class MatchingEquationParts(Scene):\n def construct(self):\n eq1 = MathTex(\"{{a^2}} + {{b^2}} = {{c^2}}\")\n eq2 = MathTex(\"{{a^2}} = {{c^2}} - {{b^2}}\")\n self.add(eq1)\n self.wait(0.5)\n self.play(TransformMatchingTex(eq1, eq2))\n self.wait(0.5)\n\n \"\"\"\n\n def __init__(\n self,\n mobject: \"Mobject\",\n target_mobject: \"Mobject\",\n transform_mismatches: bool = False,\n fade_transform_mismatches: bool = False,\n key_map: Optional[dict] = None,\n **kwargs\n ):\n assert hasattr(mobject, \"tex_string\")\n assert hasattr(target_mobject, \"tex_string\")\n super().__init__(\n mobject,\n target_mobject,\n transform_mismatches=transform_mismatches,\n fade_transform_mismatches=fade_transform_mismatches,\n key_map=key_map,\n **kwargs\n )\n\n @staticmethod\n def get_mobject_parts(mobject: \"Mobject\") -> List[\"Mobject\"]:\n return mobject.submobjects\n\n @staticmethod\n def get_mobject_key(mobject: \"Mobject\") -> str:\n return mobject.tex_string\n", "path": "manim/animation/transform_matching_parts.py"}]}
| 3,742 | 109 |
gh_patches_debug_11321
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-963
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Don't use /tmp but os.gettempdir
```
>> Issue: [B108:hardcoded_tmp_directory] Probable insecure usage of temp file/directory.
Severity: Medium Confidence: Medium
Location: kinto/core/__init__.py:66
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kinto/core/__init__.py`
Content:
```
1 """Main entry point
2 """
3 import pkg_resources
4
5 from cornice import Service as CorniceService
6 from pyramid.settings import aslist
7
8 from kinto.core import errors
9 from kinto.core import events
10 from kinto.core.initialization import ( # NOQA
11 initialize, install_middlewares,
12 load_default_settings)
13 from kinto.core.utils import (
14 follow_subrequest, current_service, current_resource_name,
15 prefixed_userid, prefixed_principals)
16 from kinto.core.logs import logger
17
18
19 # Module version, as defined in PEP-0396.
20 __version__ = pkg_resources.get_distribution('kinto').version # FIXME?
21
22
23 DEFAULT_SETTINGS = {
24 'backoff': None,
25 'batch_max_requests': 25,
26 'cache_backend': '',
27 'cache_url': '',
28 'cache_pool_size': 25,
29 'cache_prefix': '',
30 'cors_origins': '*',
31 'cors_max_age_seconds': 3600,
32 'eos': None,
33 'eos_message': None,
34 'eos_url': None,
35 'error_info_link': 'https://github.com/Kinto/kinto/issues/',
36 'http_host': None,
37 'http_scheme': None,
38 'id_generator': 'kinto.core.storage.generators.UUID4',
39 'includes': '',
40 'initialization_sequence': (
41 'kinto.core.initialization.setup_request_bound_data',
42 'kinto.core.initialization.setup_json_serializer',
43 'kinto.core.initialization.setup_logging',
44 'kinto.core.initialization.setup_storage',
45 'kinto.core.initialization.setup_permission',
46 'kinto.core.initialization.setup_cache',
47 'kinto.core.initialization.setup_requests_scheme',
48 'kinto.core.initialization.setup_version_redirection',
49 'kinto.core.initialization.setup_deprecation',
50 'kinto.core.initialization.setup_authentication',
51 'kinto.core.initialization.setup_backoff',
52 'kinto.core.initialization.setup_statsd',
53 'kinto.core.initialization.setup_listeners',
54 'kinto.core.events.setup_transaction_hook',
55 ),
56 'event_listeners': '',
57 'heartbeat_timeout_seconds': 10,
58 'logging_renderer': 'kinto.core.logs.ClassicLogRenderer',
59 'newrelic_config': None,
60 'newrelic_env': 'dev',
61 'paginate_by': None,
62 'permission_backend': '',
63 'permission_url': '',
64 'permission_pool_size': 25,
65 'profiler_dir': '/tmp',
66 'profiler_enabled': False,
67 'project_docs': '',
68 'project_name': '',
69 'project_version': '',
70 'readonly': False,
71 'retry_after_seconds': 30,
72 'statsd_backend': 'kinto.core.statsd',
73 'statsd_prefix': 'kinto.core',
74 'statsd_url': None,
75 'storage_backend': '',
76 'storage_url': '',
77 'storage_max_fetch_size': 10000,
78 'storage_pool_size': 25,
79 'tm.annotate_user': False, # Do annotate transactions with the user-id.
80 'transaction_per_request': True,
81 'userid_hmac_secret': '',
82 'version_json_path': 'version.json',
83 'version_prefix_redirect_enabled': True,
84 'trailing_slash_redirect_enabled': True,
85 'multiauth.groupfinder': 'kinto.core.authorization.groupfinder',
86 'multiauth.policies': 'basicauth',
87 'multiauth.policy.basicauth.use': ('kinto.core.authentication.'
88 'BasicAuthAuthenticationPolicy'),
89 'multiauth.authorization_policy': ('kinto.core.authorization.'
90 'AuthorizationPolicy')
91 }
92
93
94 class Service(CorniceService):
95 """Subclass of the default cornice service.
96
97 This is useful in order to attach specific behaviours without monkey
98 patching the default cornice service (which would impact other uses of it)
99 """
100 default_cors_headers = ('Backoff', 'Retry-After', 'Alert',
101 'Content-Length')
102
103 def error_handler(self, request):
104 return errors.json_error_handler(request)
105
106 @classmethod
107 def init_from_settings(cls, settings):
108 cls.cors_origins = tuple(aslist(settings['cors_origins']))
109 cors_max_age = settings['cors_max_age_seconds']
110 cls.cors_max_age = int(cors_max_age) if cors_max_age else None
111
112
113 def includeme(config):
114 settings = config.get_settings()
115
116 # Heartbeat registry.
117 config.registry.heartbeats = {}
118
119 # Public settings registry.
120 config.registry.public_settings = {'batch_max_requests', 'readonly'}
121
122 # Directive to declare arbitrary API capabilities.
123 def add_api_capability(config, identifier, description="", url="", **kw):
124 existing = config.registry.api_capabilities.get(identifier)
125 if existing:
126 error_msg = "The '%s' API capability was already registered (%s)."
127 raise ValueError(error_msg % (identifier, existing))
128
129 capability = dict(description=description, url=url, **kw)
130 config.registry.api_capabilities[identifier] = capability
131
132 config.add_directive('add_api_capability', add_api_capability)
133 config.registry.api_capabilities = {}
134
135 # Resource events helpers.
136 config.add_request_method(events.get_resource_events,
137 name='get_resource_events')
138 config.add_request_method(events.notify_resource_event,
139 name='notify_resource_event')
140
141 # Setup cornice.
142 config.include("cornice")
143
144 # Per-request transaction.
145 config.include("pyramid_tm")
146
147 # Add CORS settings to the base kinto.core Service class.
148 Service.init_from_settings(settings)
149
150 # Setup components.
151 for step in aslist(settings['initialization_sequence']):
152 step_func = config.maybe_dotted(step)
153 step_func(config)
154
155 # Custom helpers.
156 config.add_request_method(follow_subrequest)
157 config.add_request_method(prefixed_userid, property=True)
158 config.add_request_method(prefixed_principals, reify=True)
159 config.add_request_method(lambda r: {
160 'id': r.prefixed_userid,
161 'principals': r.prefixed_principals},
162 name='get_user_info')
163 config.add_request_method(current_resource_name, reify=True)
164 config.add_request_method(current_service, reify=True)
165 config.commit()
166
167 # Include plugins after init, unlike pyramid includes.
168 includes = aslist(settings['includes'])
169 for app in includes:
170 config.include(app)
171
172 # # Show settings to output.
173 # for key, value in settings.items():
174 # logger.info('Using %s = %s' % (key, value))
175
176 # Scan views.
177 config.scan("kinto.core.views")
178
179 # Give sign of life.
180 msg = "Running %(project_name)s %(project_version)s."
181 logger.info(msg % settings)
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kinto/core/__init__.py b/kinto/core/__init__.py
--- a/kinto/core/__init__.py
+++ b/kinto/core/__init__.py
@@ -1,6 +1,7 @@
"""Main entry point
"""
import pkg_resources
+import tempfile
from cornice import Service as CorniceService
from pyramid.settings import aslist
@@ -62,7 +63,7 @@
'permission_backend': '',
'permission_url': '',
'permission_pool_size': 25,
- 'profiler_dir': '/tmp',
+ 'profiler_dir': tempfile.gettempdir(),
'profiler_enabled': False,
'project_docs': '',
'project_name': '',
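Worth noting that the standard-library helper lives in `tempfile`, not `os` (there is no `os.gettempdir`), which is what the patch uses. A small usage sketch, with an arbitrary file name chosen purely for illustration:

```python
import os
import tempfile

# Platform-appropriate temp directory instead of a hard-coded "/tmp".
profiler_dir = tempfile.gettempdir()
profile_path = os.path.join(profiler_dir, "kinto-profile.prof")  # name is illustrative
print(profile_path)
```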
|
{"golden_diff": "diff --git a/kinto/core/__init__.py b/kinto/core/__init__.py\n--- a/kinto/core/__init__.py\n+++ b/kinto/core/__init__.py\n@@ -1,6 +1,7 @@\n \"\"\"Main entry point\n \"\"\"\n import pkg_resources\n+import tempfile\n \n from cornice import Service as CorniceService\n from pyramid.settings import aslist\n@@ -62,7 +63,7 @@\n 'permission_backend': '',\n 'permission_url': '',\n 'permission_pool_size': 25,\n- 'profiler_dir': '/tmp',\n+ 'profiler_dir': tempfile.gettempdir(),\n 'profiler_enabled': False,\n 'project_docs': '',\n 'project_name': '',\n", "issue": "Don't use /tmp but os.gettempdir\n```\r\n>> Issue: [B108:hardcoded_tmp_directory] Probable insecure usage of temp file/directory.\r\n Severity: Medium Confidence: Medium\r\n Location: kinto/core/__init__.py:66\r\n```\nDon't use /tmp but os.gettempdir\n```\r\n>> Issue: [B108:hardcoded_tmp_directory] Probable insecure usage of temp file/directory.\r\n Severity: Medium Confidence: Medium\r\n Location: kinto/core/__init__.py:66\r\n```\n", "before_files": [{"content": "\"\"\"Main entry point\n\"\"\"\nimport pkg_resources\n\nfrom cornice import Service as CorniceService\nfrom pyramid.settings import aslist\n\nfrom kinto.core import errors\nfrom kinto.core import events\nfrom kinto.core.initialization import ( # NOQA\n initialize, install_middlewares,\n load_default_settings)\nfrom kinto.core.utils import (\n follow_subrequest, current_service, current_resource_name,\n prefixed_userid, prefixed_principals)\nfrom kinto.core.logs import logger\n\n\n# Module version, as defined in PEP-0396.\n__version__ = pkg_resources.get_distribution('kinto').version # FIXME?\n\n\nDEFAULT_SETTINGS = {\n 'backoff': None,\n 'batch_max_requests': 25,\n 'cache_backend': '',\n 'cache_url': '',\n 'cache_pool_size': 25,\n 'cache_prefix': '',\n 'cors_origins': '*',\n 'cors_max_age_seconds': 3600,\n 'eos': None,\n 'eos_message': None,\n 'eos_url': None,\n 'error_info_link': 'https://github.com/Kinto/kinto/issues/',\n 'http_host': None,\n 'http_scheme': None,\n 'id_generator': 'kinto.core.storage.generators.UUID4',\n 'includes': '',\n 'initialization_sequence': (\n 'kinto.core.initialization.setup_request_bound_data',\n 'kinto.core.initialization.setup_json_serializer',\n 'kinto.core.initialization.setup_logging',\n 'kinto.core.initialization.setup_storage',\n 'kinto.core.initialization.setup_permission',\n 'kinto.core.initialization.setup_cache',\n 'kinto.core.initialization.setup_requests_scheme',\n 'kinto.core.initialization.setup_version_redirection',\n 'kinto.core.initialization.setup_deprecation',\n 'kinto.core.initialization.setup_authentication',\n 'kinto.core.initialization.setup_backoff',\n 'kinto.core.initialization.setup_statsd',\n 'kinto.core.initialization.setup_listeners',\n 'kinto.core.events.setup_transaction_hook',\n ),\n 'event_listeners': '',\n 'heartbeat_timeout_seconds': 10,\n 'logging_renderer': 'kinto.core.logs.ClassicLogRenderer',\n 'newrelic_config': None,\n 'newrelic_env': 'dev',\n 'paginate_by': None,\n 'permission_backend': '',\n 'permission_url': '',\n 'permission_pool_size': 25,\n 'profiler_dir': '/tmp',\n 'profiler_enabled': False,\n 'project_docs': '',\n 'project_name': '',\n 'project_version': '',\n 'readonly': False,\n 'retry_after_seconds': 30,\n 'statsd_backend': 'kinto.core.statsd',\n 'statsd_prefix': 'kinto.core',\n 'statsd_url': None,\n 'storage_backend': '',\n 'storage_url': '',\n 'storage_max_fetch_size': 10000,\n 'storage_pool_size': 25,\n 'tm.annotate_user': False, # Do annotate transactions with the user-id.\n 
'transaction_per_request': True,\n 'userid_hmac_secret': '',\n 'version_json_path': 'version.json',\n 'version_prefix_redirect_enabled': True,\n 'trailing_slash_redirect_enabled': True,\n 'multiauth.groupfinder': 'kinto.core.authorization.groupfinder',\n 'multiauth.policies': 'basicauth',\n 'multiauth.policy.basicauth.use': ('kinto.core.authentication.'\n 'BasicAuthAuthenticationPolicy'),\n 'multiauth.authorization_policy': ('kinto.core.authorization.'\n 'AuthorizationPolicy')\n}\n\n\nclass Service(CorniceService):\n \"\"\"Subclass of the default cornice service.\n\n This is useful in order to attach specific behaviours without monkey\n patching the default cornice service (which would impact other uses of it)\n \"\"\"\n default_cors_headers = ('Backoff', 'Retry-After', 'Alert',\n 'Content-Length')\n\n def error_handler(self, request):\n return errors.json_error_handler(request)\n\n @classmethod\n def init_from_settings(cls, settings):\n cls.cors_origins = tuple(aslist(settings['cors_origins']))\n cors_max_age = settings['cors_max_age_seconds']\n cls.cors_max_age = int(cors_max_age) if cors_max_age else None\n\n\ndef includeme(config):\n settings = config.get_settings()\n\n # Heartbeat registry.\n config.registry.heartbeats = {}\n\n # Public settings registry.\n config.registry.public_settings = {'batch_max_requests', 'readonly'}\n\n # Directive to declare arbitrary API capabilities.\n def add_api_capability(config, identifier, description=\"\", url=\"\", **kw):\n existing = config.registry.api_capabilities.get(identifier)\n if existing:\n error_msg = \"The '%s' API capability was already registered (%s).\"\n raise ValueError(error_msg % (identifier, existing))\n\n capability = dict(description=description, url=url, **kw)\n config.registry.api_capabilities[identifier] = capability\n\n config.add_directive('add_api_capability', add_api_capability)\n config.registry.api_capabilities = {}\n\n # Resource events helpers.\n config.add_request_method(events.get_resource_events,\n name='get_resource_events')\n config.add_request_method(events.notify_resource_event,\n name='notify_resource_event')\n\n # Setup cornice.\n config.include(\"cornice\")\n\n # Per-request transaction.\n config.include(\"pyramid_tm\")\n\n # Add CORS settings to the base kinto.core Service class.\n Service.init_from_settings(settings)\n\n # Setup components.\n for step in aslist(settings['initialization_sequence']):\n step_func = config.maybe_dotted(step)\n step_func(config)\n\n # Custom helpers.\n config.add_request_method(follow_subrequest)\n config.add_request_method(prefixed_userid, property=True)\n config.add_request_method(prefixed_principals, reify=True)\n config.add_request_method(lambda r: {\n 'id': r.prefixed_userid,\n 'principals': r.prefixed_principals},\n name='get_user_info')\n config.add_request_method(current_resource_name, reify=True)\n config.add_request_method(current_service, reify=True)\n config.commit()\n\n # Include plugins after init, unlike pyramid includes.\n includes = aslist(settings['includes'])\n for app in includes:\n config.include(app)\n\n # # Show settings to output.\n # for key, value in settings.items():\n # logger.info('Using %s = %s' % (key, value))\n\n # Scan views.\n config.scan(\"kinto.core.views\")\n\n # Give sign of life.\n msg = \"Running %(project_name)s %(project_version)s.\"\n logger.info(msg % settings)\n", "path": "kinto/core/__init__.py"}], "after_files": [{"content": "\"\"\"Main entry point\n\"\"\"\nimport pkg_resources\nimport tempfile\n\nfrom cornice import Service as 
CorniceService\nfrom pyramid.settings import aslist\n\nfrom kinto.core import errors\nfrom kinto.core import events\nfrom kinto.core.initialization import ( # NOQA\n initialize, install_middlewares,\n load_default_settings)\nfrom kinto.core.utils import (\n follow_subrequest, current_service, current_resource_name,\n prefixed_userid, prefixed_principals)\nfrom kinto.core.logs import logger\n\n\n# Module version, as defined in PEP-0396.\n__version__ = pkg_resources.get_distribution('kinto').version # FIXME?\n\n\nDEFAULT_SETTINGS = {\n 'backoff': None,\n 'batch_max_requests': 25,\n 'cache_backend': '',\n 'cache_url': '',\n 'cache_pool_size': 25,\n 'cache_prefix': '',\n 'cors_origins': '*',\n 'cors_max_age_seconds': 3600,\n 'eos': None,\n 'eos_message': None,\n 'eos_url': None,\n 'error_info_link': 'https://github.com/Kinto/kinto/issues/',\n 'http_host': None,\n 'http_scheme': None,\n 'id_generator': 'kinto.core.storage.generators.UUID4',\n 'includes': '',\n 'initialization_sequence': (\n 'kinto.core.initialization.setup_request_bound_data',\n 'kinto.core.initialization.setup_json_serializer',\n 'kinto.core.initialization.setup_logging',\n 'kinto.core.initialization.setup_storage',\n 'kinto.core.initialization.setup_permission',\n 'kinto.core.initialization.setup_cache',\n 'kinto.core.initialization.setup_requests_scheme',\n 'kinto.core.initialization.setup_version_redirection',\n 'kinto.core.initialization.setup_deprecation',\n 'kinto.core.initialization.setup_authentication',\n 'kinto.core.initialization.setup_backoff',\n 'kinto.core.initialization.setup_statsd',\n 'kinto.core.initialization.setup_listeners',\n 'kinto.core.events.setup_transaction_hook',\n ),\n 'event_listeners': '',\n 'heartbeat_timeout_seconds': 10,\n 'logging_renderer': 'kinto.core.logs.ClassicLogRenderer',\n 'newrelic_config': None,\n 'newrelic_env': 'dev',\n 'paginate_by': None,\n 'permission_backend': '',\n 'permission_url': '',\n 'permission_pool_size': 25,\n 'profiler_dir': tempfile.gettempdir(),\n 'profiler_enabled': False,\n 'project_docs': '',\n 'project_name': '',\n 'project_version': '',\n 'readonly': False,\n 'retry_after_seconds': 30,\n 'statsd_backend': 'kinto.core.statsd',\n 'statsd_prefix': 'kinto.core',\n 'statsd_url': None,\n 'storage_backend': '',\n 'storage_url': '',\n 'storage_max_fetch_size': 10000,\n 'storage_pool_size': 25,\n 'tm.annotate_user': False, # Do annotate transactions with the user-id.\n 'transaction_per_request': True,\n 'userid_hmac_secret': '',\n 'version_json_path': 'version.json',\n 'version_prefix_redirect_enabled': True,\n 'trailing_slash_redirect_enabled': True,\n 'multiauth.groupfinder': 'kinto.core.authorization.groupfinder',\n 'multiauth.policies': 'basicauth',\n 'multiauth.policy.basicauth.use': ('kinto.core.authentication.'\n 'BasicAuthAuthenticationPolicy'),\n 'multiauth.authorization_policy': ('kinto.core.authorization.'\n 'AuthorizationPolicy')\n}\n\n\nclass Service(CorniceService):\n \"\"\"Subclass of the default cornice service.\n\n This is useful in order to attach specific behaviours without monkey\n patching the default cornice service (which would impact other uses of it)\n \"\"\"\n default_cors_headers = ('Backoff', 'Retry-After', 'Alert',\n 'Content-Length')\n\n def error_handler(self, request):\n return errors.json_error_handler(request)\n\n @classmethod\n def init_from_settings(cls, settings):\n cls.cors_origins = tuple(aslist(settings['cors_origins']))\n cors_max_age = settings['cors_max_age_seconds']\n cls.cors_max_age = int(cors_max_age) if cors_max_age else 
None\n\n\ndef includeme(config):\n settings = config.get_settings()\n\n # Heartbeat registry.\n config.registry.heartbeats = {}\n\n # Public settings registry.\n config.registry.public_settings = {'batch_max_requests', 'readonly'}\n\n # Directive to declare arbitrary API capabilities.\n def add_api_capability(config, identifier, description=\"\", url=\"\", **kw):\n existing = config.registry.api_capabilities.get(identifier)\n if existing:\n error_msg = \"The '%s' API capability was already registered (%s).\"\n raise ValueError(error_msg % (identifier, existing))\n\n capability = dict(description=description, url=url, **kw)\n config.registry.api_capabilities[identifier] = capability\n\n config.add_directive('add_api_capability', add_api_capability)\n config.registry.api_capabilities = {}\n\n # Resource events helpers.\n config.add_request_method(events.get_resource_events,\n name='get_resource_events')\n config.add_request_method(events.notify_resource_event,\n name='notify_resource_event')\n\n # Setup cornice.\n config.include(\"cornice\")\n\n # Per-request transaction.\n config.include(\"pyramid_tm\")\n\n # Add CORS settings to the base kinto.core Service class.\n Service.init_from_settings(settings)\n\n # Setup components.\n for step in aslist(settings['initialization_sequence']):\n step_func = config.maybe_dotted(step)\n step_func(config)\n\n # Custom helpers.\n config.add_request_method(follow_subrequest)\n config.add_request_method(prefixed_userid, property=True)\n config.add_request_method(prefixed_principals, reify=True)\n config.add_request_method(lambda r: {\n 'id': r.prefixed_userid,\n 'principals': r.prefixed_principals},\n name='get_user_info')\n config.add_request_method(current_resource_name, reify=True)\n config.add_request_method(current_service, reify=True)\n config.commit()\n\n # Include plugins after init, unlike pyramid includes.\n includes = aslist(settings['includes'])\n for app in includes:\n config.include(app)\n\n # # Show settings to output.\n # for key, value in settings.items():\n # logger.info('Using %s = %s' % (key, value))\n\n # Scan views.\n config.scan(\"kinto.core.views\")\n\n # Give sign of life.\n msg = \"Running %(project_name)s %(project_version)s.\"\n logger.info(msg % settings)\n", "path": "kinto/core/__init__.py"}]}
num_tokens: 2,272 | num_tokens_diff: 157
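As a footnote to the `tempfile.gettempdir()` substitution captured in the verification info above, a minimal sketch of why that call is preferred over a hardcoded `/tmp` (the `profiling` subdirectory below is made up for illustration):

```python
import os
import tempfile

# Respects TMPDIR/TEMP/TMP and falls back to the platform default,
# e.g. /tmp on Linux or a per-user Temp directory on Windows.
print(tempfile.gettempdir())

# A profiler_dir built this way stays valid on systems where /tmp
# does not exist or is not writable.
profiler_dir = os.path.join(tempfile.gettempdir(), "profiling")
print(profiler_dir)
```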
problem_id: gh_patches_debug_5803 | source: rasdani/github-patches | task_type: git_diff | in_source_id: pre-commit__pre-commit-986
prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error when building RPM
When building RPM for Fedora I got this:
`*** ERROR: ambiguous python shebang in /usr/lib/python3.7/site-packages/pre_commit/resources/hook-tmpl: #!/usr/bin/env python. Change it to python3 (or python2) explicitly.`
It would be great if this were fixed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pre_commit/commands/install_uninstall.py`
Content:
```
1 from __future__ import print_function
2 from __future__ import unicode_literals
3
4 import io
5 import itertools
6 import logging
7 import os.path
8 import sys
9
10 from pre_commit import git
11 from pre_commit import output
12 from pre_commit.clientlib import load_config
13 from pre_commit.repository import all_hooks
14 from pre_commit.repository import install_hook_envs
15 from pre_commit.util import cmd_output
16 from pre_commit.util import make_executable
17 from pre_commit.util import mkdirp
18 from pre_commit.util import resource_text
19
20
21 logger = logging.getLogger(__name__)
22
23 # This is used to identify the hook file we install
24 PRIOR_HASHES = (
25 '4d9958c90bc262f47553e2c073f14cfe',
26 'd8ee923c46731b42cd95cc869add4062',
27 '49fd668cb42069aa1b6048464be5d395',
28 '79f09a650522a87b0da915d0d983b2de',
29 'e358c9dae00eac5d06b38dfdb1e33a8c',
30 )
31 CURRENT_HASH = '138fd403232d2ddd5efb44317e38bf03'
32 TEMPLATE_START = '# start templated\n'
33 TEMPLATE_END = '# end templated\n'
34
35
36 def _hook_paths(hook_type):
37 pth = os.path.join(git.get_git_dir(), 'hooks', hook_type)
38 return pth, '{}.legacy'.format(pth)
39
40
41 def is_our_script(filename):
42 if not os.path.exists(filename): # pragma: windows no cover (symlink)
43 return False
44 with io.open(filename) as f:
45 contents = f.read()
46 return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)
47
48
49 def shebang():
50 if sys.platform == 'win32':
51 py = 'python'
52 else:
53 # Homebrew/homebrew-core#35825: be more timid about appropriate `PATH`
54 path_choices = [p for p in os.defpath.split(os.pathsep) if p]
55 exe_choices = [
56 'python{}'.format('.'.join(str(v) for v in sys.version_info[:i]))
57 for i in range(3)
58 ]
59 for path, exe in itertools.product(path_choices, exe_choices):
60 if os.path.exists(os.path.join(path, exe)):
61 py = exe
62 break
63 else:
64 py = 'python'
65 return '#!/usr/bin/env {}'.format(py)
66
67
68 def install(
69 config_file, store,
70 overwrite=False, hooks=False, hook_type='pre-commit',
71 skip_on_missing_conf=False,
72 ):
73 """Install the pre-commit hooks."""
74 if cmd_output('git', 'config', 'core.hooksPath', retcode=None)[1].strip():
75 logger.error(
76 'Cowardly refusing to install hooks with `core.hooksPath` set.\n'
77 'hint: `git config --unset-all core.hooksPath`',
78 )
79 return 1
80
81 hook_path, legacy_path = _hook_paths(hook_type)
82
83 mkdirp(os.path.dirname(hook_path))
84
85 # If we have an existing hook, move it to pre-commit.legacy
86 if os.path.lexists(hook_path) and not is_our_script(hook_path):
87 os.rename(hook_path, legacy_path)
88
89 # If we specify overwrite, we simply delete the legacy file
90 if overwrite and os.path.exists(legacy_path):
91 os.remove(legacy_path)
92 elif os.path.exists(legacy_path):
93 output.write_line(
94 'Running in migration mode with existing hooks at {}\n'
95 'Use -f to use only pre-commit.'.format(legacy_path),
96 )
97
98 params = {
99 'CONFIG': config_file,
100 'HOOK_TYPE': hook_type,
101 'INSTALL_PYTHON': sys.executable,
102 'SKIP_ON_MISSING_CONFIG': skip_on_missing_conf,
103 }
104
105 with io.open(hook_path, 'w') as hook_file:
106 contents = resource_text('hook-tmpl')
107 before, rest = contents.split(TEMPLATE_START)
108 to_template, after = rest.split(TEMPLATE_END)
109
110 before = before.replace('#!/usr/bin/env python', shebang())
111
112 hook_file.write(before + TEMPLATE_START)
113 for line in to_template.splitlines():
114 var = line.split()[0]
115 hook_file.write('{} = {!r}\n'.format(var, params[var]))
116 hook_file.write(TEMPLATE_END + after)
117 make_executable(hook_path)
118
119 output.write_line('pre-commit installed at {}'.format(hook_path))
120
121 # If they requested we install all of the hooks, do so.
122 if hooks:
123 install_hooks(config_file, store)
124
125 return 0
126
127
128 def install_hooks(config_file, store):
129 install_hook_envs(all_hooks(load_config(config_file), store), store)
130
131
132 def uninstall(hook_type='pre-commit'):
133 """Uninstall the pre-commit hooks."""
134 hook_path, legacy_path = _hook_paths(hook_type)
135
136 # If our file doesn't exist or it isn't ours, gtfo.
137 if not os.path.exists(hook_path) or not is_our_script(hook_path):
138 return 0
139
140 os.remove(hook_path)
141 output.write_line('{} uninstalled'.format(hook_type))
142
143 if os.path.exists(legacy_path):
144 os.rename(legacy_path, hook_path)
145 output.write_line('Restored previous hooks to {}'.format(hook_path))
146
147 return 0
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py
--- a/pre_commit/commands/install_uninstall.py
+++ b/pre_commit/commands/install_uninstall.py
@@ -107,7 +107,7 @@
before, rest = contents.split(TEMPLATE_START)
to_template, after = rest.split(TEMPLATE_END)
- before = before.replace('#!/usr/bin/env python', shebang())
+ before = before.replace('#!/usr/bin/env python3', shebang())
hook_file.write(before + TEMPLATE_START)
for line in to_template.splitlines():
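A small sketch of what this one-line change protects against, with made-up values: the issue asks for an explicit `python3` shebang in the shipped `hook-tmpl`, and once that template presumably starts with `#!/usr/bin/env python3` (the resource file itself is not shown in this row), the old needle would still match as a prefix and mangle the installed hook's first line, while the patched needle swaps the whole shebang cleanly.

```python
# Hypothetical first lines of the shipped hook template after it gains
# an explicit python3 shebang, as the RPM check demands.
template_head = "#!/usr/bin/env python3\n# start templated\n"

# Hypothetical return value of shebang() on a POSIX host.
resolved = "#!/usr/bin/env python3.6"

# Old needle: still matches as a prefix and corrupts the line.
print(template_head.replace("#!/usr/bin/env python", resolved).splitlines()[0])
# -> #!/usr/bin/env python3.63

# Patched needle: the whole shebang is replaced cleanly.
print(template_head.replace("#!/usr/bin/env python3", resolved).splitlines()[0])
# -> #!/usr/bin/env python3.6
```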
verification_info:
{"golden_diff": "diff --git a/pre_commit/commands/install_uninstall.py b/pre_commit/commands/install_uninstall.py\n--- a/pre_commit/commands/install_uninstall.py\n+++ b/pre_commit/commands/install_uninstall.py\n@@ -107,7 +107,7 @@\n before, rest = contents.split(TEMPLATE_START)\n to_template, after = rest.split(TEMPLATE_END)\n \n- before = before.replace('#!/usr/bin/env python', shebang())\n+ before = before.replace('#!/usr/bin/env python3', shebang())\n \n hook_file.write(before + TEMPLATE_START)\n for line in to_template.splitlines():\n", "issue": "Error when building RPM\nWhen building RPM for Fedora i got this:\r\n\r\n`*** ERROR: ambiguous python shebang in /usr/lib/python3.7/site-packages/pre_commit/resources/hook-tmpl: #!/usr/bin/env python. Change it to python3 (or python2) explicitly.`\r\n\r\nWould be great if this fixed.\n", "before_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport io\nimport itertools\nimport logging\nimport os.path\nimport sys\n\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.repository import all_hooks\nfrom pre_commit.repository import install_hook_envs\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import make_executable\nfrom pre_commit.util import mkdirp\nfrom pre_commit.util import resource_text\n\n\nlogger = logging.getLogger(__name__)\n\n# This is used to identify the hook file we install\nPRIOR_HASHES = (\n '4d9958c90bc262f47553e2c073f14cfe',\n 'd8ee923c46731b42cd95cc869add4062',\n '49fd668cb42069aa1b6048464be5d395',\n '79f09a650522a87b0da915d0d983b2de',\n 'e358c9dae00eac5d06b38dfdb1e33a8c',\n)\nCURRENT_HASH = '138fd403232d2ddd5efb44317e38bf03'\nTEMPLATE_START = '# start templated\\n'\nTEMPLATE_END = '# end templated\\n'\n\n\ndef _hook_paths(hook_type):\n pth = os.path.join(git.get_git_dir(), 'hooks', hook_type)\n return pth, '{}.legacy'.format(pth)\n\n\ndef is_our_script(filename):\n if not os.path.exists(filename): # pragma: windows no cover (symlink)\n return False\n with io.open(filename) as f:\n contents = f.read()\n return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)\n\n\ndef shebang():\n if sys.platform == 'win32':\n py = 'python'\n else:\n # Homebrew/homebrew-core#35825: be more timid about appropriate `PATH`\n path_choices = [p for p in os.defpath.split(os.pathsep) if p]\n exe_choices = [\n 'python{}'.format('.'.join(str(v) for v in sys.version_info[:i]))\n for i in range(3)\n ]\n for path, exe in itertools.product(path_choices, exe_choices):\n if os.path.exists(os.path.join(path, exe)):\n py = exe\n break\n else:\n py = 'python'\n return '#!/usr/bin/env {}'.format(py)\n\n\ndef install(\n config_file, store,\n overwrite=False, hooks=False, hook_type='pre-commit',\n skip_on_missing_conf=False,\n):\n \"\"\"Install the pre-commit hooks.\"\"\"\n if cmd_output('git', 'config', 'core.hooksPath', retcode=None)[1].strip():\n logger.error(\n 'Cowardly refusing to install hooks with `core.hooksPath` set.\\n'\n 'hint: `git config --unset-all core.hooksPath`',\n )\n return 1\n\n hook_path, legacy_path = _hook_paths(hook_type)\n\n mkdirp(os.path.dirname(hook_path))\n\n # If we have an existing hook, move it to pre-commit.legacy\n if os.path.lexists(hook_path) and not is_our_script(hook_path):\n os.rename(hook_path, legacy_path)\n\n # If we specify overwrite, we simply delete the legacy file\n if overwrite and os.path.exists(legacy_path):\n os.remove(legacy_path)\n elif 
os.path.exists(legacy_path):\n output.write_line(\n 'Running in migration mode with existing hooks at {}\\n'\n 'Use -f to use only pre-commit.'.format(legacy_path),\n )\n\n params = {\n 'CONFIG': config_file,\n 'HOOK_TYPE': hook_type,\n 'INSTALL_PYTHON': sys.executable,\n 'SKIP_ON_MISSING_CONFIG': skip_on_missing_conf,\n }\n\n with io.open(hook_path, 'w') as hook_file:\n contents = resource_text('hook-tmpl')\n before, rest = contents.split(TEMPLATE_START)\n to_template, after = rest.split(TEMPLATE_END)\n\n before = before.replace('#!/usr/bin/env python', shebang())\n\n hook_file.write(before + TEMPLATE_START)\n for line in to_template.splitlines():\n var = line.split()[0]\n hook_file.write('{} = {!r}\\n'.format(var, params[var]))\n hook_file.write(TEMPLATE_END + after)\n make_executable(hook_path)\n\n output.write_line('pre-commit installed at {}'.format(hook_path))\n\n # If they requested we install all of the hooks, do so.\n if hooks:\n install_hooks(config_file, store)\n\n return 0\n\n\ndef install_hooks(config_file, store):\n install_hook_envs(all_hooks(load_config(config_file), store), store)\n\n\ndef uninstall(hook_type='pre-commit'):\n \"\"\"Uninstall the pre-commit hooks.\"\"\"\n hook_path, legacy_path = _hook_paths(hook_type)\n\n # If our file doesn't exist or it isn't ours, gtfo.\n if not os.path.exists(hook_path) or not is_our_script(hook_path):\n return 0\n\n os.remove(hook_path)\n output.write_line('{} uninstalled'.format(hook_type))\n\n if os.path.exists(legacy_path):\n os.rename(legacy_path, hook_path)\n output.write_line('Restored previous hooks to {}'.format(hook_path))\n\n return 0\n", "path": "pre_commit/commands/install_uninstall.py"}], "after_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport io\nimport itertools\nimport logging\nimport os.path\nimport sys\n\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.repository import all_hooks\nfrom pre_commit.repository import install_hook_envs\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import make_executable\nfrom pre_commit.util import mkdirp\nfrom pre_commit.util import resource_text\n\n\nlogger = logging.getLogger(__name__)\n\n# This is used to identify the hook file we install\nPRIOR_HASHES = (\n '4d9958c90bc262f47553e2c073f14cfe',\n 'd8ee923c46731b42cd95cc869add4062',\n '49fd668cb42069aa1b6048464be5d395',\n '79f09a650522a87b0da915d0d983b2de',\n 'e358c9dae00eac5d06b38dfdb1e33a8c',\n)\nCURRENT_HASH = '138fd403232d2ddd5efb44317e38bf03'\nTEMPLATE_START = '# start templated\\n'\nTEMPLATE_END = '# end templated\\n'\n\n\ndef _hook_paths(hook_type):\n pth = os.path.join(git.get_git_dir(), 'hooks', hook_type)\n return pth, '{}.legacy'.format(pth)\n\n\ndef is_our_script(filename):\n if not os.path.exists(filename): # pragma: windows no cover (symlink)\n return False\n with io.open(filename) as f:\n contents = f.read()\n return any(h in contents for h in (CURRENT_HASH,) + PRIOR_HASHES)\n\n\ndef shebang():\n if sys.platform == 'win32':\n py = 'python'\n else:\n # Homebrew/homebrew-core#35825: be more timid about appropriate `PATH`\n path_choices = [p for p in os.defpath.split(os.pathsep) if p]\n exe_choices = [\n 'python{}'.format('.'.join(str(v) for v in sys.version_info[:i]))\n for i in range(3)\n ]\n for path, exe in itertools.product(path_choices, exe_choices):\n if os.path.exists(os.path.join(path, exe)):\n py = exe\n break\n else:\n py = 'python'\n return '#!/usr/bin/env 
{}'.format(py)\n\n\ndef install(\n config_file, store,\n overwrite=False, hooks=False, hook_type='pre-commit',\n skip_on_missing_conf=False,\n):\n \"\"\"Install the pre-commit hooks.\"\"\"\n if cmd_output('git', 'config', 'core.hooksPath', retcode=None)[1].strip():\n logger.error(\n 'Cowardly refusing to install hooks with `core.hooksPath` set.\\n'\n 'hint: `git config --unset-all core.hooksPath`',\n )\n return 1\n\n hook_path, legacy_path = _hook_paths(hook_type)\n\n mkdirp(os.path.dirname(hook_path))\n\n # If we have an existing hook, move it to pre-commit.legacy\n if os.path.lexists(hook_path) and not is_our_script(hook_path):\n os.rename(hook_path, legacy_path)\n\n # If we specify overwrite, we simply delete the legacy file\n if overwrite and os.path.exists(legacy_path):\n os.remove(legacy_path)\n elif os.path.exists(legacy_path):\n output.write_line(\n 'Running in migration mode with existing hooks at {}\\n'\n 'Use -f to use only pre-commit.'.format(legacy_path),\n )\n\n params = {\n 'CONFIG': config_file,\n 'HOOK_TYPE': hook_type,\n 'INSTALL_PYTHON': sys.executable,\n 'SKIP_ON_MISSING_CONFIG': skip_on_missing_conf,\n }\n\n with io.open(hook_path, 'w') as hook_file:\n contents = resource_text('hook-tmpl')\n before, rest = contents.split(TEMPLATE_START)\n to_template, after = rest.split(TEMPLATE_END)\n\n before = before.replace('#!/usr/bin/env python3', shebang())\n\n hook_file.write(before + TEMPLATE_START)\n for line in to_template.splitlines():\n var = line.split()[0]\n hook_file.write('{} = {!r}\\n'.format(var, params[var]))\n hook_file.write(TEMPLATE_END + after)\n make_executable(hook_path)\n\n output.write_line('pre-commit installed at {}'.format(hook_path))\n\n # If they requested we install all of the hooks, do so.\n if hooks:\n install_hooks(config_file, store)\n\n return 0\n\n\ndef install_hooks(config_file, store):\n install_hook_envs(all_hooks(load_config(config_file), store), store)\n\n\ndef uninstall(hook_type='pre-commit'):\n \"\"\"Uninstall the pre-commit hooks.\"\"\"\n hook_path, legacy_path = _hook_paths(hook_type)\n\n # If our file doesn't exist or it isn't ours, gtfo.\n if not os.path.exists(hook_path) or not is_our_script(hook_path):\n return 0\n\n os.remove(hook_path)\n output.write_line('{} uninstalled'.format(hook_type))\n\n if os.path.exists(legacy_path):\n os.rename(legacy_path, hook_path)\n output.write_line('Restored previous hooks to {}'.format(hook_path))\n\n return 0\n", "path": "pre_commit/commands/install_uninstall.py"}]}
num_tokens: 1,921 | num_tokens_diff: 134
problem_id: gh_patches_debug_42533 | source: rasdani/github-patches | task_type: git_diff | in_source_id: dmlc__dgl-5914
prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
On Disk Node & Edge feature & file loader
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/dgl/graphbolt/feature_store.py`
Content:
```
1 """Feature store for GraphBolt."""
2 import torch
3
4
5 class FeatureStore:
6 r"""Base class for feature store."""
7
8 def __init__(self):
9 pass
10
11 def read(self, key: str, ids: torch.Tensor = None):
12 """Read a feature from the feature store.
13
14 Parameters
15 ----------
16 key : str
17 The key that uniquely identifies the feature in the feature store.
18 ids : torch.Tensor, optional
19 The index of the feature. If specified, only the specified indices
20 of the feature are read. If None, the entire feature is returned.
21
22 Returns
23 -------
24 torch.Tensor
25 The read feature.
26 """
27 raise NotImplementedError
28
29 def update(self, key: str, value: torch.Tensor, ids: torch.Tensor = None):
30 """Update a feature in the feature store.
31
32 This function is used to update a feature in the feature store. The
33 feature is identified by a unique key, and its value is specified using
34 a tensor.
35
36 Parameters
37 ----------
38 key : str
39 The key that uniquely identifies the feature in the feature store.
40 value : torch.Tensor
41 The updated value of the feature.
42 ids : torch.Tensor, optional
43 The indices of the feature to update. If specified, only the
44 specified indices of the feature will be updated. For the feature,
45 the `ids[i]` row is updated to `value[i]`. So the indices and value
46 must have the same length. If None, the entire feature will be
47 updated.
48 """
49 raise NotImplementedError
50
51
52 class InMemoryFeatureStore(FeatureStore):
53 r"""In-memory key-value feature store, where the key is a string and value
54 is Pytorch tensor."""
55
56 def __init__(self, feature_dict: dict):
57 """Initialize an in-memory feature store.
58
59 The feature store is initialized with a dictionary of tensors, where the
60 key is the name of a feature and the value is the tensor. The value can
61 be multi-dimensional, where the first dimension is the index of the
62 feature.
63
64 Parameters
65 ----------
66 feature_dict : dict, optional
67 A dictionary of tensors.
68
69 Examples
70 --------
71 >>> import torch
72 >>> feature_dict = {
73 ... "user": torch.arange(0, 5),
74 ... "item": torch.arange(0, 6),
75 ... "rel": torch.arange(0, 6).view(2, 3),
76 ... }
77 >>> feature_store = InMemoryFeatureStore(feature_dict)
78 >>> feature_store.read("user", torch.tensor([0, 1, 2]))
79 tensor([0, 1, 2])
80 >>> feature_store.read("item", torch.tensor([0, 1, 2]))
81 tensor([0, 1, 2])
82 >>> feature_store.read("rel", torch.tensor([0]))
83 tensor([[0, 1, 2]])
84 >>> feature_store.update("user",
85 ... torch.ones(3, dtype=torch.long), torch.tensor([0, 1, 2]))
86 >>> feature_store.read("user", torch.tensor([0, 1, 2]))
87 tensor([1, 1, 1])
88 """
89 super(InMemoryFeatureStore, self).__init__()
90 assert isinstance(feature_dict, dict), (
91 f"feature_dict in InMemoryFeatureStore must be dict, "
92 f"but got {type(feature_dict)}."
93 )
94 for k, v in feature_dict.items():
95 assert isinstance(
96 k, str
97 ), f"Key in InMemoryFeatureStore must be str, but got {k}."
98 assert isinstance(v, torch.Tensor), (
99 f"Value in InMemoryFeatureStore must be torch.Tensor,"
100 f"but got {v}."
101 )
102
103 self._feature_dict = feature_dict
104
105 def read(self, key: str, ids: torch.Tensor = None):
106 """Read a feature from the feature store by index.
107
108 Parameters
109 ----------
110 key : str
111 The key of the feature.
112 ids : torch.Tensor, optional
113 The index of the feature. If specified, only the specified indices
114 of the feature are read. If None, the entire feature is returned.
115
116 Returns
117 -------
118 torch.Tensor
119 The read feature.
120 """
121 assert (
122 key in self._feature_dict
123 ), f"key {key} not in {self._feature_dict.keys()}"
124 if ids is None:
125 return self._feature_dict[key]
126 return self._feature_dict[key][ids]
127
128 def update(self, key: str, value: torch.Tensor, ids: torch.Tensor = None):
129 """Update a feature in the feature store.
130
131 This function is used to update a feature in the feature store. The
132 feature is identified by a unique key, and its value is specified using
133 a tensor.
134
135 Parameters
136 ----------
137 key : str
138 The key that uniquely identifies the feature in the feature store.
139 value : torch.Tensor
140 The updated value of the feature.
141 ids : torch.Tensor, optional
142 The indices of the feature to update. If specified, only the
143 specified indices of the feature will be updated. For the feature,
144 the `ids[i]` row is updated to `value[i]`. So the indices and value
145 must have the same length. If None, the entire feature will be
146 updated.
147 """
148 assert (
149 key in self._feature_dict
150 ), f"key {key} not in {self._feature_dict.keys()}"
151 if ids is None:
152 self._feature_dict[key] = value
153 else:
154 assert ids.shape[0] == value.shape[0], (
155 f"ids and value must have the same length, "
156 f"but got {ids.shape[0]} and {value.shape[0]}."
157 )
158 self._feature_dict[key][ids] = value
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/python/dgl/graphbolt/feature_store.py b/python/dgl/graphbolt/feature_store.py
--- a/python/dgl/graphbolt/feature_store.py
+++ b/python/dgl/graphbolt/feature_store.py
@@ -49,18 +49,20 @@
raise NotImplementedError
-class InMemoryFeatureStore(FeatureStore):
- r"""In-memory key-value feature store, where the key is a string and value
- is Pytorch tensor."""
+class TorchBasedFeatureStore(FeatureStore):
+ r"""Torch based key-value feature store, where the key are strings and
+ values are Pytorch tensors."""
def __init__(self, feature_dict: dict):
- """Initialize an in-memory feature store.
+ """Initialize a torch based feature store.
The feature store is initialized with a dictionary of tensors, where the
key is the name of a feature and the value is the tensor. The value can
be multi-dimensional, where the first dimension is the index of the
feature.
+ Note that the values can be in memory or on disk.
+
Parameters
----------
feature_dict : dict, optional
@@ -74,7 +76,7 @@
... "item": torch.arange(0, 6),
... "rel": torch.arange(0, 6).view(2, 3),
... }
- >>> feature_store = InMemoryFeatureStore(feature_dict)
+ >>> feature_store = TorchBasedFeatureStore(feature_dict)
>>> feature_store.read("user", torch.tensor([0, 1, 2]))
tensor([0, 1, 2])
>>> feature_store.read("item", torch.tensor([0, 1, 2]))
@@ -85,18 +87,35 @@
... torch.ones(3, dtype=torch.long), torch.tensor([0, 1, 2]))
>>> feature_store.read("user", torch.tensor([0, 1, 2]))
tensor([1, 1, 1])
+
+ >>> import numpy as np
+ >>> user = np.arange(0, 5)
+ >>> item = np.arange(0, 6)
+ >>> np.save("/tmp/user.npy", user)
+ >>. np.save("/tmp/item.npy", item)
+ >>> feature_dict = {
+ ... "user": torch.as_tensor(np.load("/tmp/user.npy",
+ ... mmap_mode="r+")),
+ ... "item": torch.as_tensor(np.load("/tmp/item.npy",
+ ... mmap_mode="r+")),
+ ... }
+ >>> feature_store = TorchBasedFeatureStore(feature_dict)
+ >>> feature_store.read("user", torch.tensor([0, 1, 2]))
+ tensor([0, 1, 2])
+ >>> feature_store.read("item", torch.tensor([3, 4, 2]))
+ tensor([3, 4, 2])
"""
- super(InMemoryFeatureStore, self).__init__()
+ super(TorchBasedFeatureStore, self).__init__()
assert isinstance(feature_dict, dict), (
- f"feature_dict in InMemoryFeatureStore must be dict, "
+ f"feature_dict in TorchBasedFeatureStore must be dict, "
f"but got {type(feature_dict)}."
)
for k, v in feature_dict.items():
assert isinstance(
k, str
- ), f"Key in InMemoryFeatureStore must be str, but got {k}."
+ ), f"Key in TorchBasedFeatureStore must be str, but got {k}."
assert isinstance(v, torch.Tensor), (
- f"Value in InMemoryFeatureStore must be torch.Tensor,"
+ f"Value in TorchBasedFeatureStore must be torch.Tensor,"
f"but got {v}."
)
@@ -105,6 +124,9 @@
def read(self, key: str, ids: torch.Tensor = None):
"""Read a feature from the feature store by index.
+ The returned feature is always in memory, no matter whether the feature
+ to read is in memory or on disk.
+
Parameters
----------
key : str
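A standalone sketch (not taken from the DGL code base) of the on-disk pattern the renamed `TorchBasedFeatureStore` docstring leans on: features saved as `.npy` files, memory-mapped back with NumPy, and wrapped as torch tensors so that indexed reads only page in the requested rows. Paths and shapes are invented for illustration.

```python
import os
import tempfile

import numpy as np
import torch

# Persist a hypothetical feature matrix to disk.
path = os.path.join(tempfile.gettempdir(), "user_feat.npy")
np.save(path, np.random.rand(10_000, 16).astype(np.float32))

# Memory-map it back; no bulk read happens here.
feat = torch.as_tensor(np.load(path, mmap_mode="r+"))

ids = torch.tensor([0, 42, 9_999])
rows = feat[ids]            # only these rows are materialised in memory
print(rows.shape)           # torch.Size([3, 16])
```

This mirrors the docstring example added by the patch, only with a scratch file under `tempfile.gettempdir()` instead of literal `/tmp` paths.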
verification_info:
{"golden_diff": "diff --git a/python/dgl/graphbolt/feature_store.py b/python/dgl/graphbolt/feature_store.py\n--- a/python/dgl/graphbolt/feature_store.py\n+++ b/python/dgl/graphbolt/feature_store.py\n@@ -49,18 +49,20 @@\n raise NotImplementedError\n \n \n-class InMemoryFeatureStore(FeatureStore):\n- r\"\"\"In-memory key-value feature store, where the key is a string and value\n- is Pytorch tensor.\"\"\"\n+class TorchBasedFeatureStore(FeatureStore):\n+ r\"\"\"Torch based key-value feature store, where the key are strings and\n+ values are Pytorch tensors.\"\"\"\n \n def __init__(self, feature_dict: dict):\n- \"\"\"Initialize an in-memory feature store.\n+ \"\"\"Initialize a torch based feature store.\n \n The feature store is initialized with a dictionary of tensors, where the\n key is the name of a feature and the value is the tensor. The value can\n be multi-dimensional, where the first dimension is the index of the\n feature.\n \n+ Note that the values can be in memory or on disk.\n+\n Parameters\n ----------\n feature_dict : dict, optional\n@@ -74,7 +76,7 @@\n ... \"item\": torch.arange(0, 6),\n ... \"rel\": torch.arange(0, 6).view(2, 3),\n ... }\n- >>> feature_store = InMemoryFeatureStore(feature_dict)\n+ >>> feature_store = TorchBasedFeatureStore(feature_dict)\n >>> feature_store.read(\"user\", torch.tensor([0, 1, 2]))\n tensor([0, 1, 2])\n >>> feature_store.read(\"item\", torch.tensor([0, 1, 2]))\n@@ -85,18 +87,35 @@\n ... torch.ones(3, dtype=torch.long), torch.tensor([0, 1, 2]))\n >>> feature_store.read(\"user\", torch.tensor([0, 1, 2]))\n tensor([1, 1, 1])\n+\n+ >>> import numpy as np\n+ >>> user = np.arange(0, 5)\n+ >>> item = np.arange(0, 6)\n+ >>> np.save(\"/tmp/user.npy\", user)\n+ >>. np.save(\"/tmp/item.npy\", item)\n+ >>> feature_dict = {\n+ ... \"user\": torch.as_tensor(np.load(\"/tmp/user.npy\",\n+ ... mmap_mode=\"r+\")),\n+ ... \"item\": torch.as_tensor(np.load(\"/tmp/item.npy\",\n+ ... mmap_mode=\"r+\")),\n+ ... 
}\n+ >>> feature_store = TorchBasedFeatureStore(feature_dict)\n+ >>> feature_store.read(\"user\", torch.tensor([0, 1, 2]))\n+ tensor([0, 1, 2])\n+ >>> feature_store.read(\"item\", torch.tensor([3, 4, 2]))\n+ tensor([3, 4, 2])\n \"\"\"\n- super(InMemoryFeatureStore, self).__init__()\n+ super(TorchBasedFeatureStore, self).__init__()\n assert isinstance(feature_dict, dict), (\n- f\"feature_dict in InMemoryFeatureStore must be dict, \"\n+ f\"feature_dict in TorchBasedFeatureStore must be dict, \"\n f\"but got {type(feature_dict)}.\"\n )\n for k, v in feature_dict.items():\n assert isinstance(\n k, str\n- ), f\"Key in InMemoryFeatureStore must be str, but got {k}.\"\n+ ), f\"Key in TorchBasedFeatureStore must be str, but got {k}.\"\n assert isinstance(v, torch.Tensor), (\n- f\"Value in InMemoryFeatureStore must be torch.Tensor,\"\n+ f\"Value in TorchBasedFeatureStore must be torch.Tensor,\"\n f\"but got {v}.\"\n )\n \n@@ -105,6 +124,9 @@\n def read(self, key: str, ids: torch.Tensor = None):\n \"\"\"Read a feature from the feature store by index.\n \n+ The returned feature is always in memory, no matter whether the feature\n+ to read is in memory or on disk.\n+\n Parameters\n ----------\n key : str\n", "issue": "On Disk Node & Edge feature & file loader\n\n", "before_files": [{"content": "\"\"\"Feature store for GraphBolt.\"\"\"\nimport torch\n\n\nclass FeatureStore:\n r\"\"\"Base class for feature store.\"\"\"\n\n def __init__(self):\n pass\n\n def read(self, key: str, ids: torch.Tensor = None):\n \"\"\"Read a feature from the feature store.\n\n Parameters\n ----------\n key : str\n The key that uniquely identifies the feature in the feature store.\n ids : torch.Tensor, optional\n The index of the feature. If specified, only the specified indices\n of the feature are read. If None, the entire feature is returned.\n\n Returns\n -------\n torch.Tensor\n The read feature.\n \"\"\"\n raise NotImplementedError\n\n def update(self, key: str, value: torch.Tensor, ids: torch.Tensor = None):\n \"\"\"Update a feature in the feature store.\n\n This function is used to update a feature in the feature store. The\n feature is identified by a unique key, and its value is specified using\n a tensor.\n\n Parameters\n ----------\n key : str\n The key that uniquely identifies the feature in the feature store.\n value : torch.Tensor\n The updated value of the feature.\n ids : torch.Tensor, optional\n The indices of the feature to update. If specified, only the\n specified indices of the feature will be updated. For the feature,\n the `ids[i]` row is updated to `value[i]`. So the indices and value\n must have the same length. If None, the entire feature will be\n updated.\n \"\"\"\n raise NotImplementedError\n\n\nclass InMemoryFeatureStore(FeatureStore):\n r\"\"\"In-memory key-value feature store, where the key is a string and value\n is Pytorch tensor.\"\"\"\n\n def __init__(self, feature_dict: dict):\n \"\"\"Initialize an in-memory feature store.\n\n The feature store is initialized with a dictionary of tensors, where the\n key is the name of a feature and the value is the tensor. The value can\n be multi-dimensional, where the first dimension is the index of the\n feature.\n\n Parameters\n ----------\n feature_dict : dict, optional\n A dictionary of tensors.\n\n Examples\n --------\n >>> import torch\n >>> feature_dict = {\n ... \"user\": torch.arange(0, 5),\n ... \"item\": torch.arange(0, 6),\n ... \"rel\": torch.arange(0, 6).view(2, 3),\n ... 
}\n >>> feature_store = InMemoryFeatureStore(feature_dict)\n >>> feature_store.read(\"user\", torch.tensor([0, 1, 2]))\n tensor([0, 1, 2])\n >>> feature_store.read(\"item\", torch.tensor([0, 1, 2]))\n tensor([0, 1, 2])\n >>> feature_store.read(\"rel\", torch.tensor([0]))\n tensor([[0, 1, 2]])\n >>> feature_store.update(\"user\",\n ... torch.ones(3, dtype=torch.long), torch.tensor([0, 1, 2]))\n >>> feature_store.read(\"user\", torch.tensor([0, 1, 2]))\n tensor([1, 1, 1])\n \"\"\"\n super(InMemoryFeatureStore, self).__init__()\n assert isinstance(feature_dict, dict), (\n f\"feature_dict in InMemoryFeatureStore must be dict, \"\n f\"but got {type(feature_dict)}.\"\n )\n for k, v in feature_dict.items():\n assert isinstance(\n k, str\n ), f\"Key in InMemoryFeatureStore must be str, but got {k}.\"\n assert isinstance(v, torch.Tensor), (\n f\"Value in InMemoryFeatureStore must be torch.Tensor,\"\n f\"but got {v}.\"\n )\n\n self._feature_dict = feature_dict\n\n def read(self, key: str, ids: torch.Tensor = None):\n \"\"\"Read a feature from the feature store by index.\n\n Parameters\n ----------\n key : str\n The key of the feature.\n ids : torch.Tensor, optional\n The index of the feature. If specified, only the specified indices\n of the feature are read. If None, the entire feature is returned.\n\n Returns\n -------\n torch.Tensor\n The read feature.\n \"\"\"\n assert (\n key in self._feature_dict\n ), f\"key {key} not in {self._feature_dict.keys()}\"\n if ids is None:\n return self._feature_dict[key]\n return self._feature_dict[key][ids]\n\n def update(self, key: str, value: torch.Tensor, ids: torch.Tensor = None):\n \"\"\"Update a feature in the feature store.\n\n This function is used to update a feature in the feature store. The\n feature is identified by a unique key, and its value is specified using\n a tensor.\n\n Parameters\n ----------\n key : str\n The key that uniquely identifies the feature in the feature store.\n value : torch.Tensor\n The updated value of the feature.\n ids : torch.Tensor, optional\n The indices of the feature to update. If specified, only the\n specified indices of the feature will be updated. For the feature,\n the `ids[i]` row is updated to `value[i]`. So the indices and value\n must have the same length. If None, the entire feature will be\n updated.\n \"\"\"\n assert (\n key in self._feature_dict\n ), f\"key {key} not in {self._feature_dict.keys()}\"\n if ids is None:\n self._feature_dict[key] = value\n else:\n assert ids.shape[0] == value.shape[0], (\n f\"ids and value must have the same length, \"\n f\"but got {ids.shape[0]} and {value.shape[0]}.\"\n )\n self._feature_dict[key][ids] = value\n", "path": "python/dgl/graphbolt/feature_store.py"}], "after_files": [{"content": "\"\"\"Feature store for GraphBolt.\"\"\"\nimport torch\n\n\nclass FeatureStore:\n r\"\"\"Base class for feature store.\"\"\"\n\n def __init__(self):\n pass\n\n def read(self, key: str, ids: torch.Tensor = None):\n \"\"\"Read a feature from the feature store.\n\n Parameters\n ----------\n key : str\n The key that uniquely identifies the feature in the feature store.\n ids : torch.Tensor, optional\n The index of the feature. If specified, only the specified indices\n of the feature are read. 
If None, the entire feature is returned.\n\n Returns\n -------\n torch.Tensor\n The read feature.\n \"\"\"\n raise NotImplementedError\n\n def update(self, key: str, value: torch.Tensor, ids: torch.Tensor = None):\n \"\"\"Update a feature in the feature store.\n\n This function is used to update a feature in the feature store. The\n feature is identified by a unique key, and its value is specified using\n a tensor.\n\n Parameters\n ----------\n key : str\n The key that uniquely identifies the feature in the feature store.\n value : torch.Tensor\n The updated value of the feature.\n ids : torch.Tensor, optional\n The indices of the feature to update. If specified, only the\n specified indices of the feature will be updated. For the feature,\n the `ids[i]` row is updated to `value[i]`. So the indices and value\n must have the same length. If None, the entire feature will be\n updated.\n \"\"\"\n raise NotImplementedError\n\n\nclass TorchBasedFeatureStore(FeatureStore):\n r\"\"\"Torch based key-value feature store, where the key are strings and\n values are Pytorch tensors.\"\"\"\n\n def __init__(self, feature_dict: dict):\n \"\"\"Initialize a torch based feature store.\n\n The feature store is initialized with a dictionary of tensors, where the\n key is the name of a feature and the value is the tensor. The value can\n be multi-dimensional, where the first dimension is the index of the\n feature.\n\n Note that the values can be in memory or on disk.\n\n Parameters\n ----------\n feature_dict : dict, optional\n A dictionary of tensors.\n\n Examples\n --------\n >>> import torch\n >>> feature_dict = {\n ... \"user\": torch.arange(0, 5),\n ... \"item\": torch.arange(0, 6),\n ... \"rel\": torch.arange(0, 6).view(2, 3),\n ... }\n >>> feature_store = TorchBasedFeatureStore(feature_dict)\n >>> feature_store.read(\"user\", torch.tensor([0, 1, 2]))\n tensor([0, 1, 2])\n >>> feature_store.read(\"item\", torch.tensor([0, 1, 2]))\n tensor([0, 1, 2])\n >>> feature_store.read(\"rel\", torch.tensor([0]))\n tensor([[0, 1, 2]])\n >>> feature_store.update(\"user\",\n ... torch.ones(3, dtype=torch.long), torch.tensor([0, 1, 2]))\n >>> feature_store.read(\"user\", torch.tensor([0, 1, 2]))\n tensor([1, 1, 1])\n\n >>> import numpy as np\n >>> user = np.arange(0, 5)\n >>> item = np.arange(0, 6)\n >>> np.save(\"/tmp/user.npy\", user)\n >>. np.save(\"/tmp/item.npy\", item)\n >>> feature_dict = {\n ... \"user\": torch.as_tensor(np.load(\"/tmp/user.npy\",\n ... mmap_mode=\"r+\")),\n ... \"item\": torch.as_tensor(np.load(\"/tmp/item.npy\",\n ... mmap_mode=\"r+\")),\n ... 
}\n >>> feature_store = TorchBasedFeatureStore(feature_dict)\n >>> feature_store.read(\"user\", torch.tensor([0, 1, 2]))\n tensor([0, 1, 2])\n >>> feature_store.read(\"item\", torch.tensor([3, 4, 2]))\n tensor([3, 4, 2])\n \"\"\"\n super(TorchBasedFeatureStore, self).__init__()\n assert isinstance(feature_dict, dict), (\n f\"feature_dict in TorchBasedFeatureStore must be dict, \"\n f\"but got {type(feature_dict)}.\"\n )\n for k, v in feature_dict.items():\n assert isinstance(\n k, str\n ), f\"Key in TorchBasedFeatureStore must be str, but got {k}.\"\n assert isinstance(v, torch.Tensor), (\n f\"Value in TorchBasedFeatureStore must be torch.Tensor,\"\n f\"but got {v}.\"\n )\n\n self._feature_dict = feature_dict\n\n def read(self, key: str, ids: torch.Tensor = None):\n \"\"\"Read a feature from the feature store by index.\n\n The returned feature is always in memory, no matter whether the feature\n to read is in memory or on disk.\n\n Parameters\n ----------\n key : str\n The key of the feature.\n ids : torch.Tensor, optional\n The index of the feature. If specified, only the specified indices\n of the feature are read. If None, the entire feature is returned.\n\n Returns\n -------\n torch.Tensor\n The read feature.\n \"\"\"\n assert (\n key in self._feature_dict\n ), f\"key {key} not in {self._feature_dict.keys()}\"\n if ids is None:\n return self._feature_dict[key]\n return self._feature_dict[key][ids]\n\n def update(self, key: str, value: torch.Tensor, ids: torch.Tensor = None):\n \"\"\"Update a feature in the feature store.\n\n This function is used to update a feature in the feature store. The\n feature is identified by a unique key, and its value is specified using\n a tensor.\n\n Parameters\n ----------\n key : str\n The key that uniquely identifies the feature in the feature store.\n value : torch.Tensor\n The updated value of the feature.\n ids : torch.Tensor, optional\n The indices of the feature to update. If specified, only the\n specified indices of the feature will be updated. For the feature,\n the `ids[i]` row is updated to `value[i]`. So the indices and value\n must have the same length. If None, the entire feature will be\n updated.\n \"\"\"\n assert (\n key in self._feature_dict\n ), f\"key {key} not in {self._feature_dict.keys()}\"\n if ids is None:\n self._feature_dict[key] = value\n else:\n assert ids.shape[0] == value.shape[0], (\n f\"ids and value must have the same length, \"\n f\"but got {ids.shape[0]} and {value.shape[0]}.\"\n )\n self._feature_dict[key][ids] = value\n", "path": "python/dgl/graphbolt/feature_store.py"}]}
num_tokens: 1,922 | num_tokens_diff: 922
problem_id: gh_patches_debug_20957 | source: rasdani/github-patches | task_type: git_diff | in_source_id: obspy__obspy-1908
prompt:
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
https FDSN web service end points not working
When checking the `InsecureRequestWarning` in #1779, I noticed that those FDSNWS end points we have that go to https addresses don't seem to work:
```python
In [1]: from obspy.clients.fdsn import Client
In [2]: client = Client('GFZ')
---------------------------------------------------------------------------
FDSNException Traceback (most recent call last)
<ipython-input-2-1d7f32e7dd73> in <module>()
----> 1 client = Client('GFZ')
/home/megies/git/obspy-master/obspy/clients/fdsn/client.pyc in __init__(self, base_url, major_versions, user, password, user_agent, debug, timeout, service_mappings, force_redirect)
260 print("Request Headers: %s" % str(self.request_headers))
261
--> 262 self._discover_services()
263
264 def get_events(self, starttime=None, endtime=None, minlatitude=None,
/home/megies/git/obspy-master/obspy/clients/fdsn/client.pyc in _discover_services(self)
1479 "be due to a temporary service outage or an invalid FDSN "
1480 "service address." % self.base_url)
-> 1481 raise FDSNException(msg)
1482
1483 # Cache.
FDSNException: No FDSN services could be discovered at 'https://geofon.gfz-potsdam.de'. This could be due to a temporary service outage or an invalid FDSN service address.
In [3]: client = Client('SCEDC')
---------------------------------------------------------------------------
FDSNException Traceback (most recent call last)
<ipython-input-3-7c19ea139f48> in <module>()
----> 1 client = Client('SCEDC')
/home/megies/git/obspy-master/obspy/clients/fdsn/client.pyc in __init__(self, base_url, major_versions, user, password, user_agent, debug, timeout, service_mappings, force_redirect)
260 print("Request Headers: %s" % str(self.request_headers))
261
--> 262 self._discover_services()
263
264 def get_events(self, starttime=None, endtime=None, minlatitude=None,
/home/megies/git/obspy-master/obspy/clients/fdsn/client.pyc in _discover_services(self)
1479 "be due to a temporary service outage or an invalid FDSN "
1480 "service address." % self.base_url)
-> 1481 raise FDSNException(msg)
1482
1483 # Cache.
FDSNException: No FDSN services could be discovered at 'https://service.scedc.caltech.edu'. This could be due to a temporary service outage or an invalid FDSN service address.
In [4]: client = Client('USGS')
---------------------------------------------------------------------------
FDSNException Traceback (most recent call last)
<ipython-input-4-2d8335176e68> in <module>()
----> 1 client = Client('USGS')
/home/megies/git/obspy-master/obspy/clients/fdsn/client.pyc in __init__(self, base_url, major_versions, user, password, user_agent, debug, timeout, service_mappings, force_redirect)
260 print("Request Headers: %s" % str(self.request_headers))
261
--> 262 self._discover_services()
263
264 def get_events(self, starttime=None, endtime=None, minlatitude=None,
/home/megies/git/obspy-master/obspy/clients/fdsn/client.pyc in _discover_services(self)
1479 "be due to a temporary service outage or an invalid FDSN "
1480 "service address." % self.base_url)
-> 1481 raise FDSNException(msg)
1482
1483 # Cache.
FDSNException: No FDSN services could be discovered at 'https://earthquake.usgs.gov'. This could be due to a temporary service outage or an invalid FDSN service address.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `obspy/clients/fdsn/header.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 Header files for the FDSN webservice.
4
5 :copyright:
6 The ObsPy Development Team ([email protected])
7 :license:
8 GNU Lesser General Public License, Version 3
9 (https://www.gnu.org/copyleft/lesser.html)
10 """
11 from __future__ import (absolute_import, division, print_function,
12 unicode_literals)
13 from future.builtins import * # NOQA
14 from future.utils import PY2
15
16 import platform
17 import sys
18
19 from obspy import UTCDateTime, __version__
20
21
22 class FDSNException(Exception):
23 def __init__(self, value, server_info=None):
24 if server_info is not None:
25 value = "\n".join([value, "Detailed response of server:", "",
26 server_info])
27 super(FDSNException, self).__init__(value)
28
29
30 class FDSNRedirectException(FDSNException):
31 pass
32
33
34 class FDSNNoDataException(FDSNException):
35 pass
36
37
38 # A curated list collecting some implementations:
39 # https://www.fdsn.org/webservices/datacenters/
40 # http://www.orfeus-eu.org/eida/eida_odc.html
41 URL_MAPPINGS = {
42 "BGR": "http://eida.bgr.de",
43 "ETH": "http://eida.ethz.ch",
44 "EMSC": "http://www.seismicportal.eu",
45 "GEONET": "http://service.geonet.org.nz",
46 "GFZ": "https://geofon.gfz-potsdam.de",
47 "ICGC": "http://ws.icgc.cat",
48 "INGV": "http://webservices.rm.ingv.it",
49 "IPGP": "http://eida.ipgp.fr",
50 "IRIS": "http://service.iris.edu",
51 "ISC": "http://isc-mirror.iris.washington.edu",
52 "KOERI": "http://eida.koeri.boun.edu.tr",
53 "LMU": "http://erde.geophysik.uni-muenchen.de",
54 "NCEDC": "http://service.ncedc.org",
55 "NIEP": "http://eida-sc3.infp.ro",
56 "NOA": "http://eida.gein.noa.gr",
57 "ODC": "http://www.orfeus-eu.org",
58 "ORFEUS": "http://www.orfeus-eu.org",
59 "RESIF": "http://ws.resif.fr",
60 "SCEDC": "https://service.scedc.caltech.edu",
61 "TEXNET": "http://rtserve.beg.utexas.edu",
62 "USGS": "https://earthquake.usgs.gov",
63 "USP": "http://sismo.iag.usp.br"}
64
65 FDSNWS = ("dataselect", "event", "station")
66
67 if PY2:
68 platform_ = platform.platform().decode("ascii", "ignore")
69 else:
70 encoding = sys.getdefaultencoding() or "UTF-8"
71 platform_ = platform.platform().encode(encoding).decode("ascii", "ignore")
72 # The default User Agent that will be sent with every request.
73 DEFAULT_USER_AGENT = "ObsPy/%s (%s, Python %s)" % (
74 __version__, platform_, platform.python_version())
75
76
77 # The default parameters. Different services can choose to add more. It always
78 # contains the long name first and the short name second. If it has no short
79 # name, it is simply a tuple with only one entry.
80 DEFAULT_DATASELECT_PARAMETERS = [
81 "starttime", "endtime", "network", "station", "location", "channel"]
82
83 OPTIONAL_DATASELECT_PARAMETERS = [
84 "quality", "minimumlength", "longestonly"]
85
86 DEFAULT_STATION_PARAMETERS = [
87 "starttime", "endtime", "network", "station", "location", "channel",
88 "minlatitude", "maxlatitude", "minlongitude", "maxlongitude", "level"]
89
90 OPTIONAL_STATION_PARAMETERS = [
91 "startbefore", "startafter", "endbefore", "endafter", "latitude",
92 "longitude", "minradius", "maxradius", "includerestricted",
93 "includeavailability", "updatedafter", "matchtimeseries", "format"]
94
95 DEFAULT_EVENT_PARAMETERS = [
96 "starttime", "endtime", "minlatitude", "maxlatitude", "minlongitude",
97 "maxlongitude", "mindepth", "maxdepth", "minmagnitude", "maxmagnitude",
98 "orderby"]
99
100 OPTIONAL_EVENT_PARAMETERS = [
101 "latitude", "longitude", "minradius", "maxradius", "magnitudetype",
102 "includeallorigins", "includeallmagnitudes", "includearrivals", "eventid",
103 "limit", "offset", "catalog", "contributor", "updatedafter"]
104
105 DEFAULT_PARAMETERS = {
106 "dataselect": DEFAULT_DATASELECT_PARAMETERS,
107 "event": DEFAULT_EVENT_PARAMETERS,
108 "station": DEFAULT_STATION_PARAMETERS}
109
110 OPTIONAL_PARAMETERS = {
111 "dataselect": OPTIONAL_DATASELECT_PARAMETERS,
112 "event": OPTIONAL_EVENT_PARAMETERS,
113 "station": OPTIONAL_STATION_PARAMETERS}
114
115 PARAMETER_ALIASES = {
116 "net": "network",
117 "sta": "station",
118 "loc": "location",
119 "cha": "channel",
120 "start": "starttime",
121 "end": "endtime",
122 "minlat": "minlatitude",
123 "maxlat": "maxlatitude",
124 "minlon": "minlongitude",
125 "maxlon": "maxlongitude",
126 "lat": "latitude",
127 "lon": "longitude",
128 "minmag": "minmagnitude",
129 "maxmag": "maxmagnitude",
130 "magtype": "magnitudetype",
131 }
132
133
134 # The default types if none are given. If the parameter can not be found in
135 # here and has no specified type, the type will be assumed to be a string.
136 DEFAULT_TYPES = {
137 "starttime": UTCDateTime,
138 "endtime": UTCDateTime,
139 "network": str,
140 "station": str,
141 "location": str,
142 "channel": str,
143 "quality": str,
144 "minimumlength": float,
145 "longestonly": bool,
146 "startbefore": UTCDateTime,
147 "startafter": UTCDateTime,
148 "endbefore": UTCDateTime,
149 "endafter": UTCDateTime,
150 "maxlongitude": float,
151 "minlongitude": float,
152 "longitude": float,
153 "maxlatitude": float,
154 "minlatitude": float,
155 "latitude": float,
156 "maxdepth": float,
157 "mindepth": float,
158 "maxmagnitude": float,
159 "minmagnitude": float,
160 "magnitudetype": str,
161 "maxradius": float,
162 "minradius": float,
163 "level": str,
164 "includerestricted": bool,
165 "includeavailability": bool,
166 "includeallorigins": bool,
167 "includeallmagnitudes": bool,
168 "includearrivals": bool,
169 "matchtimeseries": bool,
170 "eventid": str,
171 "limit": int,
172 "offset": int,
173 "orderby": str,
174 "catalog": str,
175 "contributor": str,
176 "updatedafter": UTCDateTime}
177
178 # This list collects WADL parameters that will not be parsed because they are
179 # not useful for the ObsPy client.
180 # Current the nodata parameter used by IRIS is part of that list. The ObsPy
181 # client relies on the HTTP codes.
182 WADL_PARAMETERS_NOT_TO_BE_PARSED = ["nodata"]
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
golden_diff:
diff --git a/obspy/clients/fdsn/header.py b/obspy/clients/fdsn/header.py
--- a/obspy/clients/fdsn/header.py
+++ b/obspy/clients/fdsn/header.py
@@ -43,7 +43,7 @@
"ETH": "http://eida.ethz.ch",
"EMSC": "http://www.seismicportal.eu",
"GEONET": "http://service.geonet.org.nz",
- "GFZ": "https://geofon.gfz-potsdam.de",
+ "GFZ": "http://geofon.gfz-potsdam.de",
"ICGC": "http://ws.icgc.cat",
"INGV": "http://webservices.rm.ingv.it",
"IPGP": "http://eida.ipgp.fr",
@@ -57,9 +57,9 @@
"ODC": "http://www.orfeus-eu.org",
"ORFEUS": "http://www.orfeus-eu.org",
"RESIF": "http://ws.resif.fr",
- "SCEDC": "https://service.scedc.caltech.edu",
+ "SCEDC": "http://service.scedc.caltech.edu",
"TEXNET": "http://rtserve.beg.utexas.edu",
- "USGS": "https://earthquake.usgs.gov",
+ "USGS": "http://earthquake.usgs.gov",
"USP": "http://sismo.iag.usp.br"}
FDSNWS = ("dataselect", "event", "station")
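A brief sketch of one way to smoke-test remapped endpoints after a change like this, assuming network access; it relies only on the `Client`, `URL_MAPPINGS`, and `FDSNException` names shown above plus the client's `services` mapping populated during discovery.

```python
from obspy.clients.fdsn import Client
from obspy.clients.fdsn.header import URL_MAPPINGS, FDSNException

for key in ("GFZ", "SCEDC", "USGS"):
    base = URL_MAPPINGS[key]
    try:
        client = Client(key, timeout=30)
        print("%-6s %-40s OK: %s" % (key, base, sorted(client.services)))
    except FDSNException as exc:
        print("%-6s %-40s FAILED: %s" % (key, base, exc))
```

A successful run lists whichever FDSNWS documents each node actually implements; a failure reproduces the discovery error quoted in the issue.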
verification_info:
{"golden_diff": "diff --git a/obspy/clients/fdsn/header.py b/obspy/clients/fdsn/header.py\n--- a/obspy/clients/fdsn/header.py\n+++ b/obspy/clients/fdsn/header.py\n@@ -43,7 +43,7 @@\n \"ETH\": \"http://eida.ethz.ch\",\n \"EMSC\": \"http://www.seismicportal.eu\",\n \"GEONET\": \"http://service.geonet.org.nz\",\n- \"GFZ\": \"https://geofon.gfz-potsdam.de\",\n+ \"GFZ\": \"http://geofon.gfz-potsdam.de\",\n \"ICGC\": \"http://ws.icgc.cat\",\n \"INGV\": \"http://webservices.rm.ingv.it\",\n \"IPGP\": \"http://eida.ipgp.fr\",\n@@ -57,9 +57,9 @@\n \"ODC\": \"http://www.orfeus-eu.org\",\n \"ORFEUS\": \"http://www.orfeus-eu.org\",\n \"RESIF\": \"http://ws.resif.fr\",\n- \"SCEDC\": \"https://service.scedc.caltech.edu\",\n+ \"SCEDC\": \"http://service.scedc.caltech.edu\",\n \"TEXNET\": \"http://rtserve.beg.utexas.edu\",\n- \"USGS\": \"https://earthquake.usgs.gov\",\n+ \"USGS\": \"http://earthquake.usgs.gov\",\n \"USP\": \"http://sismo.iag.usp.br\"}\n \n FDSNWS = (\"dataselect\", \"event\", \"station\")\n", "issue": "https FDSN web service end points not working\nWhen checking the `InsecureRequestWarning` in #1779, I noticed that those FDSNWS end points we have that go to https addresses don't seem to work..\r\n\r\n```python\r\nIn [1]: from obspy.clients.fdsn import Client\r\n\r\nIn [2]: client = Client('GFZ')\r\n---------------------------------------------------------------------------\r\nFDSNException Traceback (most recent call last)\r\n<ipython-input-2-1d7f32e7dd73> in <module>()\r\n----> 1 client = Client('GFZ')\r\n\r\n/home/megies/git/obspy-master/obspy/clients/fdsn/client.pyc in __init__(self, base_url, major_versions, user, password, user_agent, debug, timeout, service_mappings, force_redirect)\r\n 260 print(\"Request Headers: %s\" % str(self.request_headers))\r\n 261 \r\n--> 262 self._discover_services()\r\n 263 \r\n 264 def get_events(self, starttime=None, endtime=None, minlatitude=None,\r\n\r\n/home/megies/git/obspy-master/obspy/clients/fdsn/client.pyc in _discover_services(self)\r\n 1479 \"be due to a temporary service outage or an invalid FDSN \"\r\n 1480 \"service address.\" % self.base_url)\r\n-> 1481 raise FDSNException(msg)\r\n 1482 \r\n 1483 # Cache.\r\n\r\nFDSNException: No FDSN services could be discovered at 'https://geofon.gfz-potsdam.de'. This could be due to a temporary service outage or an invalid FDSN service address.\r\n\r\nIn [3]: client = Client('SCEDC')\r\n---------------------------------------------------------------------------\r\nFDSNException Traceback (most recent call last)\r\n<ipython-input-3-7c19ea139f48> in <module>()\r\n----> 1 client = Client('SCEDC')\r\n\r\n/home/megies/git/obspy-master/obspy/clients/fdsn/client.pyc in __init__(self, base_url, major_versions, user, password, user_agent, debug, timeout, service_mappings, force_redirect)\r\n 260 print(\"Request Headers: %s\" % str(self.request_headers))\r\n 261 \r\n--> 262 self._discover_services()\r\n 263 \r\n 264 def get_events(self, starttime=None, endtime=None, minlatitude=None,\r\n\r\n/home/megies/git/obspy-master/obspy/clients/fdsn/client.pyc in _discover_services(self)\r\n 1479 \"be due to a temporary service outage or an invalid FDSN \"\r\n 1480 \"service address.\" % self.base_url)\r\n-> 1481 raise FDSNException(msg)\r\n 1482 \r\n 1483 # Cache.\r\n\r\nFDSNException: No FDSN services could be discovered at 'https://service.scedc.caltech.edu'. 
This could be due to a temporary service outage or an invalid FDSN service address.\r\n\r\nIn [4]: client = Client('USGS')\r\n---------------------------------------------------------------------------\r\nFDSNException Traceback (most recent call last)\r\n<ipython-input-4-2d8335176e68> in <module>()\r\n----> 1 client = Client('USGS')\r\n\r\n/home/megies/git/obspy-master/obspy/clients/fdsn/client.pyc in __init__(self, base_url, major_versions, user, password, user_agent, debug, timeout, service_mappings, force_redirect)\r\n 260 print(\"Request Headers: %s\" % str(self.request_headers))\r\n 261 \r\n--> 262 self._discover_services()\r\n 263 \r\n 264 def get_events(self, starttime=None, endtime=None, minlatitude=None,\r\n\r\n/home/megies/git/obspy-master/obspy/clients/fdsn/client.pyc in _discover_services(self)\r\n 1479 \"be due to a temporary service outage or an invalid FDSN \"\r\n 1480 \"service address.\" % self.base_url)\r\n-> 1481 raise FDSNException(msg)\r\n 1482 \r\n 1483 # Cache.\r\n\r\nFDSNException: No FDSN services could be discovered at 'https://earthquake.usgs.gov'. This could be due to a temporary service outage or an invalid FDSN service address.\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nHeader files for the FDSN webservice.\n\n:copyright:\n The ObsPy Development Team ([email protected])\n:license:\n GNU Lesser General Public License, Version 3\n (https://www.gnu.org/copyleft/lesser.html)\n\"\"\"\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\nfrom future.builtins import * # NOQA\nfrom future.utils import PY2\n\nimport platform\nimport sys\n\nfrom obspy import UTCDateTime, __version__\n\n\nclass FDSNException(Exception):\n def __init__(self, value, server_info=None):\n if server_info is not None:\n value = \"\\n\".join([value, \"Detailed response of server:\", \"\",\n server_info])\n super(FDSNException, self).__init__(value)\n\n\nclass FDSNRedirectException(FDSNException):\n pass\n\n\nclass FDSNNoDataException(FDSNException):\n pass\n\n\n# A curated list collecting some implementations:\n# https://www.fdsn.org/webservices/datacenters/\n# http://www.orfeus-eu.org/eida/eida_odc.html\nURL_MAPPINGS = {\n \"BGR\": \"http://eida.bgr.de\",\n \"ETH\": \"http://eida.ethz.ch\",\n \"EMSC\": \"http://www.seismicportal.eu\",\n \"GEONET\": \"http://service.geonet.org.nz\",\n \"GFZ\": \"https://geofon.gfz-potsdam.de\",\n \"ICGC\": \"http://ws.icgc.cat\",\n \"INGV\": \"http://webservices.rm.ingv.it\",\n \"IPGP\": \"http://eida.ipgp.fr\",\n \"IRIS\": \"http://service.iris.edu\",\n \"ISC\": \"http://isc-mirror.iris.washington.edu\",\n \"KOERI\": \"http://eida.koeri.boun.edu.tr\",\n \"LMU\": \"http://erde.geophysik.uni-muenchen.de\",\n \"NCEDC\": \"http://service.ncedc.org\",\n \"NIEP\": \"http://eida-sc3.infp.ro\",\n \"NOA\": \"http://eida.gein.noa.gr\",\n \"ODC\": \"http://www.orfeus-eu.org\",\n \"ORFEUS\": \"http://www.orfeus-eu.org\",\n \"RESIF\": \"http://ws.resif.fr\",\n \"SCEDC\": \"https://service.scedc.caltech.edu\",\n \"TEXNET\": \"http://rtserve.beg.utexas.edu\",\n \"USGS\": \"https://earthquake.usgs.gov\",\n \"USP\": \"http://sismo.iag.usp.br\"}\n\nFDSNWS = (\"dataselect\", \"event\", \"station\")\n\nif PY2:\n platform_ = platform.platform().decode(\"ascii\", \"ignore\")\nelse:\n encoding = sys.getdefaultencoding() or \"UTF-8\"\n platform_ = platform.platform().encode(encoding).decode(\"ascii\", \"ignore\")\n# The default User Agent that will be sent with every request.\nDEFAULT_USER_AGENT = \"ObsPy/%s (%s, 
Python %s)\" % (\n __version__, platform_, platform.python_version())\n\n\n# The default parameters. Different services can choose to add more. It always\n# contains the long name first and the short name second. If it has no short\n# name, it is simply a tuple with only one entry.\nDEFAULT_DATASELECT_PARAMETERS = [\n \"starttime\", \"endtime\", \"network\", \"station\", \"location\", \"channel\"]\n\nOPTIONAL_DATASELECT_PARAMETERS = [\n \"quality\", \"minimumlength\", \"longestonly\"]\n\nDEFAULT_STATION_PARAMETERS = [\n \"starttime\", \"endtime\", \"network\", \"station\", \"location\", \"channel\",\n \"minlatitude\", \"maxlatitude\", \"minlongitude\", \"maxlongitude\", \"level\"]\n\nOPTIONAL_STATION_PARAMETERS = [\n \"startbefore\", \"startafter\", \"endbefore\", \"endafter\", \"latitude\",\n \"longitude\", \"minradius\", \"maxradius\", \"includerestricted\",\n \"includeavailability\", \"updatedafter\", \"matchtimeseries\", \"format\"]\n\nDEFAULT_EVENT_PARAMETERS = [\n \"starttime\", \"endtime\", \"minlatitude\", \"maxlatitude\", \"minlongitude\",\n \"maxlongitude\", \"mindepth\", \"maxdepth\", \"minmagnitude\", \"maxmagnitude\",\n \"orderby\"]\n\nOPTIONAL_EVENT_PARAMETERS = [\n \"latitude\", \"longitude\", \"minradius\", \"maxradius\", \"magnitudetype\",\n \"includeallorigins\", \"includeallmagnitudes\", \"includearrivals\", \"eventid\",\n \"limit\", \"offset\", \"catalog\", \"contributor\", \"updatedafter\"]\n\nDEFAULT_PARAMETERS = {\n \"dataselect\": DEFAULT_DATASELECT_PARAMETERS,\n \"event\": DEFAULT_EVENT_PARAMETERS,\n \"station\": DEFAULT_STATION_PARAMETERS}\n\nOPTIONAL_PARAMETERS = {\n \"dataselect\": OPTIONAL_DATASELECT_PARAMETERS,\n \"event\": OPTIONAL_EVENT_PARAMETERS,\n \"station\": OPTIONAL_STATION_PARAMETERS}\n\nPARAMETER_ALIASES = {\n \"net\": \"network\",\n \"sta\": \"station\",\n \"loc\": \"location\",\n \"cha\": \"channel\",\n \"start\": \"starttime\",\n \"end\": \"endtime\",\n \"minlat\": \"minlatitude\",\n \"maxlat\": \"maxlatitude\",\n \"minlon\": \"minlongitude\",\n \"maxlon\": \"maxlongitude\",\n \"lat\": \"latitude\",\n \"lon\": \"longitude\",\n \"minmag\": \"minmagnitude\",\n \"maxmag\": \"maxmagnitude\",\n \"magtype\": \"magnitudetype\",\n}\n\n\n# The default types if none are given. 
If the parameter can not be found in\n# here and has no specified type, the type will be assumed to be a string.\nDEFAULT_TYPES = {\n \"starttime\": UTCDateTime,\n \"endtime\": UTCDateTime,\n \"network\": str,\n \"station\": str,\n \"location\": str,\n \"channel\": str,\n \"quality\": str,\n \"minimumlength\": float,\n \"longestonly\": bool,\n \"startbefore\": UTCDateTime,\n \"startafter\": UTCDateTime,\n \"endbefore\": UTCDateTime,\n \"endafter\": UTCDateTime,\n \"maxlongitude\": float,\n \"minlongitude\": float,\n \"longitude\": float,\n \"maxlatitude\": float,\n \"minlatitude\": float,\n \"latitude\": float,\n \"maxdepth\": float,\n \"mindepth\": float,\n \"maxmagnitude\": float,\n \"minmagnitude\": float,\n \"magnitudetype\": str,\n \"maxradius\": float,\n \"minradius\": float,\n \"level\": str,\n \"includerestricted\": bool,\n \"includeavailability\": bool,\n \"includeallorigins\": bool,\n \"includeallmagnitudes\": bool,\n \"includearrivals\": bool,\n \"matchtimeseries\": bool,\n \"eventid\": str,\n \"limit\": int,\n \"offset\": int,\n \"orderby\": str,\n \"catalog\": str,\n \"contributor\": str,\n \"updatedafter\": UTCDateTime}\n\n# This list collects WADL parameters that will not be parsed because they are\n# not useful for the ObsPy client.\n# Current the nodata parameter used by IRIS is part of that list. The ObsPy\n# client relies on the HTTP codes.\nWADL_PARAMETERS_NOT_TO_BE_PARSED = [\"nodata\"]\n", "path": "obspy/clients/fdsn/header.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nHeader files for the FDSN webservice.\n\n:copyright:\n The ObsPy Development Team ([email protected])\n:license:\n GNU Lesser General Public License, Version 3\n (https://www.gnu.org/copyleft/lesser.html)\n\"\"\"\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\nfrom future.builtins import * # NOQA\nfrom future.utils import PY2\n\nimport platform\nimport sys\n\nfrom obspy import UTCDateTime, __version__\n\n\nclass FDSNException(Exception):\n def __init__(self, value, server_info=None):\n if server_info is not None:\n value = \"\\n\".join([value, \"Detailed response of server:\", \"\",\n server_info])\n super(FDSNException, self).__init__(value)\n\n\nclass FDSNRedirectException(FDSNException):\n pass\n\n\nclass FDSNNoDataException(FDSNException):\n pass\n\n\n# A curated list collecting some implementations:\n# https://www.fdsn.org/webservices/datacenters/\n# http://www.orfeus-eu.org/eida/eida_odc.html\nURL_MAPPINGS = {\n \"BGR\": \"http://eida.bgr.de\",\n \"ETH\": \"http://eida.ethz.ch\",\n \"EMSC\": \"http://www.seismicportal.eu\",\n \"GEONET\": \"http://service.geonet.org.nz\",\n \"GFZ\": \"http://geofon.gfz-potsdam.de\",\n \"ICGC\": \"http://ws.icgc.cat\",\n \"INGV\": \"http://webservices.rm.ingv.it\",\n \"IPGP\": \"http://eida.ipgp.fr\",\n \"IRIS\": \"http://service.iris.edu\",\n \"ISC\": \"http://isc-mirror.iris.washington.edu\",\n \"KOERI\": \"http://eida.koeri.boun.edu.tr\",\n \"LMU\": \"http://erde.geophysik.uni-muenchen.de\",\n \"NCEDC\": \"http://service.ncedc.org\",\n \"NIEP\": \"http://eida-sc3.infp.ro\",\n \"NOA\": \"http://eida.gein.noa.gr\",\n \"ODC\": \"http://www.orfeus-eu.org\",\n \"ORFEUS\": \"http://www.orfeus-eu.org\",\n \"RESIF\": \"http://ws.resif.fr\",\n \"SCEDC\": \"http://service.scedc.caltech.edu\",\n \"TEXNET\": \"http://rtserve.beg.utexas.edu\",\n \"USGS\": \"http://earthquake.usgs.gov\",\n \"USP\": \"http://sismo.iag.usp.br\"}\n\nFDSNWS = (\"dataselect\", \"event\", \"station\")\n\nif PY2:\n platform_ = 
platform.platform().decode(\"ascii\", \"ignore\")\nelse:\n encoding = sys.getdefaultencoding() or \"UTF-8\"\n platform_ = platform.platform().encode(encoding).decode(\"ascii\", \"ignore\")\n# The default User Agent that will be sent with every request.\nDEFAULT_USER_AGENT = \"ObsPy/%s (%s, Python %s)\" % (\n __version__, platform_, platform.python_version())\n\n\n# The default parameters. Different services can choose to add more. It always\n# contains the long name first and the short name second. If it has no short\n# name, it is simply a tuple with only one entry.\nDEFAULT_DATASELECT_PARAMETERS = [\n \"starttime\", \"endtime\", \"network\", \"station\", \"location\", \"channel\"]\n\nOPTIONAL_DATASELECT_PARAMETERS = [\n \"quality\", \"minimumlength\", \"longestonly\"]\n\nDEFAULT_STATION_PARAMETERS = [\n \"starttime\", \"endtime\", \"network\", \"station\", \"location\", \"channel\",\n \"minlatitude\", \"maxlatitude\", \"minlongitude\", \"maxlongitude\", \"level\"]\n\nOPTIONAL_STATION_PARAMETERS = [\n \"startbefore\", \"startafter\", \"endbefore\", \"endafter\", \"latitude\",\n \"longitude\", \"minradius\", \"maxradius\", \"includerestricted\",\n \"includeavailability\", \"updatedafter\", \"matchtimeseries\", \"format\"]\n\nDEFAULT_EVENT_PARAMETERS = [\n \"starttime\", \"endtime\", \"minlatitude\", \"maxlatitude\", \"minlongitude\",\n \"maxlongitude\", \"mindepth\", \"maxdepth\", \"minmagnitude\", \"maxmagnitude\",\n \"orderby\"]\n\nOPTIONAL_EVENT_PARAMETERS = [\n \"latitude\", \"longitude\", \"minradius\", \"maxradius\", \"magnitudetype\",\n \"includeallorigins\", \"includeallmagnitudes\", \"includearrivals\", \"eventid\",\n \"limit\", \"offset\", \"catalog\", \"contributor\", \"updatedafter\"]\n\nDEFAULT_PARAMETERS = {\n \"dataselect\": DEFAULT_DATASELECT_PARAMETERS,\n \"event\": DEFAULT_EVENT_PARAMETERS,\n \"station\": DEFAULT_STATION_PARAMETERS}\n\nOPTIONAL_PARAMETERS = {\n \"dataselect\": OPTIONAL_DATASELECT_PARAMETERS,\n \"event\": OPTIONAL_EVENT_PARAMETERS,\n \"station\": OPTIONAL_STATION_PARAMETERS}\n\nPARAMETER_ALIASES = {\n \"net\": \"network\",\n \"sta\": \"station\",\n \"loc\": \"location\",\n \"cha\": \"channel\",\n \"start\": \"starttime\",\n \"end\": \"endtime\",\n \"minlat\": \"minlatitude\",\n \"maxlat\": \"maxlatitude\",\n \"minlon\": \"minlongitude\",\n \"maxlon\": \"maxlongitude\",\n \"lat\": \"latitude\",\n \"lon\": \"longitude\",\n \"minmag\": \"minmagnitude\",\n \"maxmag\": \"maxmagnitude\",\n \"magtype\": \"magnitudetype\",\n}\n\n\n# The default types if none are given. 
If the parameter can not be found in\n# here and has no specified type, the type will be assumed to be a string.\nDEFAULT_TYPES = {\n \"starttime\": UTCDateTime,\n \"endtime\": UTCDateTime,\n \"network\": str,\n \"station\": str,\n \"location\": str,\n \"channel\": str,\n \"quality\": str,\n \"minimumlength\": float,\n \"longestonly\": bool,\n \"startbefore\": UTCDateTime,\n \"startafter\": UTCDateTime,\n \"endbefore\": UTCDateTime,\n \"endafter\": UTCDateTime,\n \"maxlongitude\": float,\n \"minlongitude\": float,\n \"longitude\": float,\n \"maxlatitude\": float,\n \"minlatitude\": float,\n \"latitude\": float,\n \"maxdepth\": float,\n \"mindepth\": float,\n \"maxmagnitude\": float,\n \"minmagnitude\": float,\n \"magnitudetype\": str,\n \"maxradius\": float,\n \"minradius\": float,\n \"level\": str,\n \"includerestricted\": bool,\n \"includeavailability\": bool,\n \"includeallorigins\": bool,\n \"includeallmagnitudes\": bool,\n \"includearrivals\": bool,\n \"matchtimeseries\": bool,\n \"eventid\": str,\n \"limit\": int,\n \"offset\": int,\n \"orderby\": str,\n \"catalog\": str,\n \"contributor\": str,\n \"updatedafter\": UTCDateTime}\n\n# This list collects WADL parameters that will not be parsed because they are\n# not useful for the ObsPy client.\n# Current the nodata parameter used by IRIS is part of that list. The ObsPy\n# client relies on the HTTP codes.\nWADL_PARAMETERS_NOT_TO_BE_PARSED = [\"nodata\"]\n", "path": "obspy/clients/fdsn/header.py"}]}
| 3,341 | 356 |
gh_patches_debug_40293
|
rasdani/github-patches
|
git_diff
|
beetbox__beets-1295
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
info: Select specific fields
Right now beet info spews all tags, but it should have an option for pulling a single key/value, like so:
> $ beet info -k acoustid_id "05 Le Freak.aiff"
> 616e1881-cb7e-4d9f-9988-fa2b546c7085
This is useful if you want to do a multi-step operation cleanly from the command line.
Here is the bash script I am using to get around this right now:
> #!/usr/bin/env bash
> #
>
> FILE=$1
>
> if [ -z "$FILE" ]; then
> echo "need filename"
> exit 0
> fi
>
> beet info "$FILE" | grep acoustid_id | sed -e 's/^ *acoustid_id: //g'
--- END ISSUE ---
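A rough Python equivalent of the grep/sed workaround above, reading the tag directly through beets' `mediafile` module (the file path and field name are taken from the issue; this is only an illustration of the workaround, not the requested feature):

```python
# One-off lookup of a single tag; assumes the file exists in the working directory.
from beets import mediafile

mf = mediafile.MediaFile("05 Le Freak.aiff")
print(getattr(mf, "acoustid_id", None))  # prints the AcoustID, or None if the tag is absent
```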
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `beetsplug/info.py`
Content:
```
1 # This file is part of beets.
2 # Copyright 2015, Adrian Sampson.
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining
5 # a copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish,
8 # distribute, sublicense, and/or sell copies of the Software, and to
9 # permit persons to whom the Software is furnished to do so, subject to
10 # the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be
13 # included in all copies or substantial portions of the Software.
14
15 """Shows file metadata.
16 """
17
18 from __future__ import (division, absolute_import, print_function,
19 unicode_literals)
20
21 import os
22
23 from beets.plugins import BeetsPlugin
24 from beets import ui
25 from beets import mediafile
26 from beets.util import displayable_path, normpath, syspath
27
28
29 def tag_data(lib, args):
30 query = []
31 for arg in args:
32 path = normpath(arg)
33 if os.path.isfile(syspath(path)):
34 yield tag_data_emitter(path)
35 else:
36 query.append(arg)
37
38 if query:
39 for item in lib.items(query):
40 yield tag_data_emitter(item.path)
41
42
43 def tag_data_emitter(path):
44 def emitter():
45 fields = list(mediafile.MediaFile.readable_fields())
46 fields.remove('images')
47 mf = mediafile.MediaFile(syspath(path))
48 tags = {}
49 for field in fields:
50 tags[field] = getattr(mf, field)
51 tags['art'] = mf.art is not None
52 tags['path'] = displayable_path(path)
53 return tags
54 return emitter
55
56
57 def library_data(lib, args):
58 for item in lib.items(args):
59 yield library_data_emitter(item)
60
61
62 def library_data_emitter(item):
63 def emitter():
64 data = dict(item.formatted())
65 data['path'] = displayable_path(item.path)
66 return data
67 return emitter
68
69
70 def update_summary(summary, tags):
71 for key, value in tags.iteritems():
72 if key not in summary:
73 summary[key] = value
74 elif summary[key] != value:
75 summary[key] = '[various]'
76 return summary
77
78
79 def print_data(data):
80 path = data.pop('path')
81 formatted = {}
82 for key, value in data.iteritems():
83 if isinstance(value, list):
84 formatted[key] = u'; '.join(value)
85 if value is not None:
86 formatted[key] = value
87
88 maxwidth = max(len(key) for key in formatted)
89 lineformat = u'{{0:>{0}}}: {{1}}'.format(maxwidth)
90
91 if path:
92 ui.print_(displayable_path(path))
93
94 for field in sorted(formatted):
95 value = formatted[field]
96 if isinstance(value, list):
97 value = u'; '.join(value)
98 ui.print_(lineformat.format(field, value))
99
100
101 class InfoPlugin(BeetsPlugin):
102
103 def commands(self):
104 cmd = ui.Subcommand('info', help='show file metadata')
105 cmd.func = self.run
106 cmd.parser.add_option('-l', '--library', action='store_true',
107 help='show library fields instead of tags')
108 cmd.parser.add_option('-s', '--summarize', action='store_true',
109 help='summarize the tags of all files')
110 return [cmd]
111
112 def run(self, lib, opts, args):
113 """Print tag info or library data for each file referenced by args.
114
115 Main entry point for the `beet info ARGS...` command.
116
117 If an argument is a path pointing to an existing file, then the tags
118 of that file are printed. All other arguments are considered
119 queries, and for each item matching all those queries the tags from
120 the file are printed.
121
122 If `opts.summarize` is true, the function merges all tags into one
123 dictionary and only prints that. If two files have different values
124 for the same tag, the value is set to '[various]'
125 """
126 if opts.library:
127 data_collector = library_data
128 else:
129 data_collector = tag_data
130
131 first = True
132 summary = {}
133 for data_emitter in data_collector(lib, ui.decargs(args)):
134 try:
135 data = data_emitter()
136 except mediafile.UnreadableFileError as ex:
137 self._log.error(u'cannot read file: {0}', ex)
138 continue
139
140 if opts.summarize:
141 update_summary(summary, data)
142 else:
143 if not first:
144 ui.print_()
145 print_data(data)
146 first = False
147
148 if opts.summarize:
149 print_data(summary)
150
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/beetsplug/info.py b/beetsplug/info.py
--- a/beetsplug/info.py
+++ b/beetsplug/info.py
@@ -19,6 +19,7 @@
unicode_literals)
import os
+import re
from beets.plugins import BeetsPlugin
from beets import ui
@@ -77,7 +78,7 @@
def print_data(data):
- path = data.pop('path')
+ path = data.pop('path', None)
formatted = {}
for key, value in data.iteritems():
if isinstance(value, list):
@@ -85,6 +86,9 @@
if value is not None:
formatted[key] = value
+ if len(formatted) == 0:
+ return
+
maxwidth = max(len(key) for key in formatted)
lineformat = u'{{0:>{0}}}: {{1}}'.format(maxwidth)
@@ -107,6 +111,9 @@
help='show library fields instead of tags')
cmd.parser.add_option('-s', '--summarize', action='store_true',
help='summarize the tags of all files')
+ cmd.parser.add_option('-i', '--include-keys', default=[],
+ action='append', dest='included_keys',
+ help='comma separated list of keys to show')
return [cmd]
def run(self, lib, opts, args):
@@ -128,6 +135,11 @@
else:
data_collector = tag_data
+ included_keys = []
+ for keys in opts.included_keys:
+ included_keys.extend(keys.split(','))
+ key_filter = make_key_filter(included_keys)
+
first = True
summary = {}
for data_emitter in data_collector(lib, ui.decargs(args)):
@@ -137,6 +149,9 @@
self._log.error(u'cannot read file: {0}', ex)
continue
+ path = data.get('path')
+ data = key_filter(data)
+ data['path'] = path # always show path
if opts.summarize:
update_summary(summary, data)
else:
@@ -147,3 +162,33 @@
if opts.summarize:
print_data(summary)
+
+
+def make_key_filter(include):
+ """Return a function that filters a dictionary.
+
+ The returned filter takes a dictionary and returns another
+ dictionary that only includes the key-value pairs where the key
+ glob-matches one of the keys in `include`.
+ """
+ if not include:
+ return identity
+
+ matchers = []
+ for key in include:
+ key = re.escape(key)
+ key = key.replace(r'\*', '.*')
+ matchers.append(re.compile(key + '$'))
+
+ def filter(data):
+ filtered = dict()
+ for key, value in data.items():
+ if any(map(lambda m: m.match(key), matchers)):
+ filtered[key] = value
+ return filtered
+
+ return filter
+
+
+def identity(val):
+ return val
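To make the behaviour of the `make_key_filter` helper added above concrete, here is a minimal standalone sketch (the tag values are invented; only the filtering logic mirrors the patch):

```python
# Standalone sketch of the glob-style key filtering introduced by the patch.
import re

def make_key_filter(include):
    if not include:
        return lambda data: data
    matchers = [re.compile(re.escape(key).replace(r'\*', '.*') + '$') for key in include]
    return lambda data: {k: v for k, v in data.items()
                         if any(m.match(k) for m in matchers)}

tags = {'acoustid_id': '616e1881-cb7e-4d9f-9988-fa2b546c7085',
        'album': "C'est Chic", 'albumartist': 'Chic'}
print(make_key_filter(['acoustid_id'])(tags))  # -> only the acoustid_id pair
print(make_key_filter(['album*'])(tags))       # -> the album and albumartist pairs
```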
|
{"golden_diff": "diff --git a/beetsplug/info.py b/beetsplug/info.py\n--- a/beetsplug/info.py\n+++ b/beetsplug/info.py\n@@ -19,6 +19,7 @@\n unicode_literals)\n \n import os\n+import re\n \n from beets.plugins import BeetsPlugin\n from beets import ui\n@@ -77,7 +78,7 @@\n \n \n def print_data(data):\n- path = data.pop('path')\n+ path = data.pop('path', None)\n formatted = {}\n for key, value in data.iteritems():\n if isinstance(value, list):\n@@ -85,6 +86,9 @@\n if value is not None:\n formatted[key] = value\n \n+ if len(formatted) == 0:\n+ return\n+\n maxwidth = max(len(key) for key in formatted)\n lineformat = u'{{0:>{0}}}: {{1}}'.format(maxwidth)\n \n@@ -107,6 +111,9 @@\n help='show library fields instead of tags')\n cmd.parser.add_option('-s', '--summarize', action='store_true',\n help='summarize the tags of all files')\n+ cmd.parser.add_option('-i', '--include-keys', default=[],\n+ action='append', dest='included_keys',\n+ help='comma separated list of keys to show')\n return [cmd]\n \n def run(self, lib, opts, args):\n@@ -128,6 +135,11 @@\n else:\n data_collector = tag_data\n \n+ included_keys = []\n+ for keys in opts.included_keys:\n+ included_keys.extend(keys.split(','))\n+ key_filter = make_key_filter(included_keys)\n+\n first = True\n summary = {}\n for data_emitter in data_collector(lib, ui.decargs(args)):\n@@ -137,6 +149,9 @@\n self._log.error(u'cannot read file: {0}', ex)\n continue\n \n+ path = data.get('path')\n+ data = key_filter(data)\n+ data['path'] = path # always show path\n if opts.summarize:\n update_summary(summary, data)\n else:\n@@ -147,3 +162,33 @@\n \n if opts.summarize:\n print_data(summary)\n+\n+\n+def make_key_filter(include):\n+ \"\"\"Return a function that filters a dictionary.\n+\n+ The returned filter takes a dictionary and returns another\n+ dictionary that only includes the key-value pairs where the key\n+ glob-matches one of the keys in `include`.\n+ \"\"\"\n+ if not include:\n+ return identity\n+\n+ matchers = []\n+ for key in include:\n+ key = re.escape(key)\n+ key = key.replace(r'\\*', '.*')\n+ matchers.append(re.compile(key + '$'))\n+\n+ def filter(data):\n+ filtered = dict()\n+ for key, value in data.items():\n+ if any(map(lambda m: m.match(key), matchers)):\n+ filtered[key] = value\n+ return filtered\n+\n+ return filter\n+\n+\n+def identity(val):\n+ return val\n", "issue": "info: Select specific fields\nRight now beet info spews all tags, but it should have an option for pulling a single key/value, like so:\n\n> $ beet info -k acoustid_id \"05 Le Freak.aiff\"\n> 616e1881-cb7e-4d9f-9988-fa2b546c7085\n\nThis is useful if you wanted to do a multi-step operation cleanly from the command line.\n\nHere is the bash script I am using to get around this right now:\n\n> # !/usr/bin/env bash\n> # \n> \n> FILE=$1\n> \n> if [ -z \"$FILE\" ]; then\n> echo \"need filename\"\n> exit 0\n> fi\n> \n> beet info \"$FILE\" | grep acoustid_id | sed -e 's/^ *acoustid_id: //g' \n\n", "before_files": [{"content": "# This file is part of beets.\n# Copyright 2015, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this 
permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Shows file metadata.\n\"\"\"\n\nfrom __future__ import (division, absolute_import, print_function,\n unicode_literals)\n\nimport os\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import ui\nfrom beets import mediafile\nfrom beets.util import displayable_path, normpath, syspath\n\n\ndef tag_data(lib, args):\n query = []\n for arg in args:\n path = normpath(arg)\n if os.path.isfile(syspath(path)):\n yield tag_data_emitter(path)\n else:\n query.append(arg)\n\n if query:\n for item in lib.items(query):\n yield tag_data_emitter(item.path)\n\n\ndef tag_data_emitter(path):\n def emitter():\n fields = list(mediafile.MediaFile.readable_fields())\n fields.remove('images')\n mf = mediafile.MediaFile(syspath(path))\n tags = {}\n for field in fields:\n tags[field] = getattr(mf, field)\n tags['art'] = mf.art is not None\n tags['path'] = displayable_path(path)\n return tags\n return emitter\n\n\ndef library_data(lib, args):\n for item in lib.items(args):\n yield library_data_emitter(item)\n\n\ndef library_data_emitter(item):\n def emitter():\n data = dict(item.formatted())\n data['path'] = displayable_path(item.path)\n return data\n return emitter\n\n\ndef update_summary(summary, tags):\n for key, value in tags.iteritems():\n if key not in summary:\n summary[key] = value\n elif summary[key] != value:\n summary[key] = '[various]'\n return summary\n\n\ndef print_data(data):\n path = data.pop('path')\n formatted = {}\n for key, value in data.iteritems():\n if isinstance(value, list):\n formatted[key] = u'; '.join(value)\n if value is not None:\n formatted[key] = value\n\n maxwidth = max(len(key) for key in formatted)\n lineformat = u'{{0:>{0}}}: {{1}}'.format(maxwidth)\n\n if path:\n ui.print_(displayable_path(path))\n\n for field in sorted(formatted):\n value = formatted[field]\n if isinstance(value, list):\n value = u'; '.join(value)\n ui.print_(lineformat.format(field, value))\n\n\nclass InfoPlugin(BeetsPlugin):\n\n def commands(self):\n cmd = ui.Subcommand('info', help='show file metadata')\n cmd.func = self.run\n cmd.parser.add_option('-l', '--library', action='store_true',\n help='show library fields instead of tags')\n cmd.parser.add_option('-s', '--summarize', action='store_true',\n help='summarize the tags of all files')\n return [cmd]\n\n def run(self, lib, opts, args):\n \"\"\"Print tag info or library data for each file referenced by args.\n\n Main entry point for the `beet info ARGS...` command.\n\n If an argument is a path pointing to an existing file, then the tags\n of that file are printed. All other arguments are considered\n queries, and for each item matching all those queries the tags from\n the file are printed.\n\n If `opts.summarize` is true, the function merges all tags into one\n dictionary and only prints that. 
If two files have different values\n for the same tag, the value is set to '[various]'\n \"\"\"\n if opts.library:\n data_collector = library_data\n else:\n data_collector = tag_data\n\n first = True\n summary = {}\n for data_emitter in data_collector(lib, ui.decargs(args)):\n try:\n data = data_emitter()\n except mediafile.UnreadableFileError as ex:\n self._log.error(u'cannot read file: {0}', ex)\n continue\n\n if opts.summarize:\n update_summary(summary, data)\n else:\n if not first:\n ui.print_()\n print_data(data)\n first = False\n\n if opts.summarize:\n print_data(summary)\n", "path": "beetsplug/info.py"}], "after_files": [{"content": "# This file is part of beets.\n# Copyright 2015, Adrian Sampson.\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Shows file metadata.\n\"\"\"\n\nfrom __future__ import (division, absolute_import, print_function,\n unicode_literals)\n\nimport os\nimport re\n\nfrom beets.plugins import BeetsPlugin\nfrom beets import ui\nfrom beets import mediafile\nfrom beets.util import displayable_path, normpath, syspath\n\n\ndef tag_data(lib, args):\n query = []\n for arg in args:\n path = normpath(arg)\n if os.path.isfile(syspath(path)):\n yield tag_data_emitter(path)\n else:\n query.append(arg)\n\n if query:\n for item in lib.items(query):\n yield tag_data_emitter(item.path)\n\n\ndef tag_data_emitter(path):\n def emitter():\n fields = list(mediafile.MediaFile.readable_fields())\n fields.remove('images')\n mf = mediafile.MediaFile(syspath(path))\n tags = {}\n for field in fields:\n tags[field] = getattr(mf, field)\n tags['art'] = mf.art is not None\n tags['path'] = displayable_path(path)\n return tags\n return emitter\n\n\ndef library_data(lib, args):\n for item in lib.items(args):\n yield library_data_emitter(item)\n\n\ndef library_data_emitter(item):\n def emitter():\n data = dict(item.formatted())\n data['path'] = displayable_path(item.path)\n return data\n return emitter\n\n\ndef update_summary(summary, tags):\n for key, value in tags.iteritems():\n if key not in summary:\n summary[key] = value\n elif summary[key] != value:\n summary[key] = '[various]'\n return summary\n\n\ndef print_data(data):\n path = data.pop('path', None)\n formatted = {}\n for key, value in data.iteritems():\n if isinstance(value, list):\n formatted[key] = u'; '.join(value)\n if value is not None:\n formatted[key] = value\n\n if len(formatted) == 0:\n return\n\n maxwidth = max(len(key) for key in formatted)\n lineformat = u'{{0:>{0}}}: {{1}}'.format(maxwidth)\n\n if path:\n ui.print_(displayable_path(path))\n\n for field in sorted(formatted):\n value = formatted[field]\n if isinstance(value, list):\n value = u'; '.join(value)\n ui.print_(lineformat.format(field, value))\n\n\nclass InfoPlugin(BeetsPlugin):\n\n def commands(self):\n cmd = ui.Subcommand('info', help='show file metadata')\n cmd.func = self.run\n cmd.parser.add_option('-l', '--library', action='store_true',\n help='show library fields instead of tags')\n cmd.parser.add_option('-s', 
'--summarize', action='store_true',\n help='summarize the tags of all files')\n cmd.parser.add_option('-i', '--include-keys', default=[],\n action='append', dest='included_keys',\n help='comma separated list of keys to show')\n return [cmd]\n\n def run(self, lib, opts, args):\n \"\"\"Print tag info or library data for each file referenced by args.\n\n Main entry point for the `beet info ARGS...` command.\n\n If an argument is a path pointing to an existing file, then the tags\n of that file are printed. All other arguments are considered\n queries, and for each item matching all those queries the tags from\n the file are printed.\n\n If `opts.summarize` is true, the function merges all tags into one\n dictionary and only prints that. If two files have different values\n for the same tag, the value is set to '[various]'\n \"\"\"\n if opts.library:\n data_collector = library_data\n else:\n data_collector = tag_data\n\n included_keys = []\n for keys in opts.included_keys:\n included_keys.extend(keys.split(','))\n key_filter = make_key_filter(included_keys)\n\n first = True\n summary = {}\n for data_emitter in data_collector(lib, ui.decargs(args)):\n try:\n data = data_emitter()\n except mediafile.UnreadableFileError as ex:\n self._log.error(u'cannot read file: {0}', ex)\n continue\n\n path = data.get('path')\n data = key_filter(data)\n data['path'] = path # always show path\n if opts.summarize:\n update_summary(summary, data)\n else:\n if not first:\n ui.print_()\n print_data(data)\n first = False\n\n if opts.summarize:\n print_data(summary)\n\n\ndef make_key_filter(include):\n \"\"\"Return a function that filters a dictionary.\n\n The returned filter takes a dictionary and returns another\n dictionary that only includes the key-value pairs where the key\n glob-matches one of the keys in `include`.\n \"\"\"\n if not include:\n return identity\n\n matchers = []\n for key in include:\n key = re.escape(key)\n key = key.replace(r'\\*', '.*')\n matchers.append(re.compile(key + '$'))\n\n def filter(data):\n filtered = dict()\n for key, value in data.items():\n if any(map(lambda m: m.match(key), matchers)):\n filtered[key] = value\n return filtered\n\n return filter\n\n\ndef identity(val):\n return val\n", "path": "beetsplug/info.py"}]}
| 1,831 | 702 |
gh_patches_debug_32468
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-1102
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pizza Ranch
Spider #29 is not collecting all locations; it only captures the first page of each state list.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/pizzaranch.py`
Content:
```
1 import scrapy
2 import re
3 from locations.items import GeojsonPointItem
4 import json
5 class CVSSpider(scrapy.Spider):
6
7 name = "pizzaranch"
8 allowed_domains = ["pizzaranch.com"]
9 download_delay = 0.5
10 start_urls = (
11 'https://pizzaranch.com/locations',
12 )
13 def parse_times(self, times):
14 if times.strip() == 'Open 24 hours':
15 return '24/7'
16 hours_to = [x.strip() for x in times.split('-')]
17 cleaned_times = []
18
19 for hour in hours_to:
20 if re.search('PM$', hour):
21 hour = re.sub('PM', '', hour).strip()
22 hour_min = hour.split(":")
23 if int(hour_min[0]) < 12:
24 hour_min[0] = str(12 + int(hour_min[0]))
25 cleaned_times.append(":".join(hour_min))
26
27 if re.search('AM$', hour):
28 hour = re.sub('AM', '', hour).strip()
29 hour_min = hour.split(":")
30 if len(hour_min[0]) <2:
31 hour_min[0] = hour_min[0].zfill(2)
32 else:
33 hour_min[0] = str(int(hour_min[0]))
34
35 cleaned_times.append(":".join(hour_min))
36 return "-".join(cleaned_times)
37
38 def parse_hours(self, lis):
39 hours = []
40 for li in lis:
41 day = li.xpath('normalize-space(.//td[@class="c-location-hours-details-row-day"]/text())').extract_first()[:2]
42 times = li.xpath('.//td[@class="c-location-hours-details-row-intervals"]/span/span/text()').extract()
43 times = "".join(x for x in times)
44 if times and day:
45 parsed_time = self.parse_times(times)
46 hours.append(day + ' ' + parsed_time)
47
48 return "; ".join(hours)
49
50 def parse_stores(self, response):
51 map_data = response.xpath('normalize-space(//script[@id="js-map-config-dir-map-nap-map"]/text())').extract_first()
52 map_json = json.loads(map_data)
53 properties = {
54 'addr_full': response.xpath('normalize-space(//span[@itemprop="streetAddress"]/span/text())').extract_first(),
55 'phone': response.xpath('normalize-space(//div[@class="c-phone-number c-phone-main-number"]/span[@class="c-phone-number-span c-phone-main-number-span"]/text())').extract_first(),
56 'city': response.xpath('normalize-space(//span[@itemprop="addressLocality"]/text())').extract_first(),
57 'state': response.xpath('normalize-space(//abbr[@itemprop="addressRegion"]/text())').extract_first(),
58 'postcode': response.xpath('normalize-space(//span[@itemprop="postalCode"]/text())').extract_first(),
59 'ref': map_json['locs'][0]['id'],
60 'website': response.url,
61 'lat': float( map_json['locs'][0]['latitude']),
62 'lon': float( map_json['locs'][0]['longitude']),
63 }
64 hours = self.parse_hours(response.xpath('//div[@class="hours-primary hidden-xs"]/div[@class="c-location-hours"]/div[@class="c-location-hours-details-wrapper js-location-hours"]/table/tbody/tr'))
65
66 if hours:
67 properties['opening_hours'] = hours
68
69 yield GeojsonPointItem(**properties)
70
71 def parse_state_stores(self, response):
72 stores = response.xpath('//h3[@class="title"]/a/@href').extract()
73 for store in stores:
74 yield scrapy.Request(response.urljoin(store), callback=self.parse_stores)
75
76 def parse(self, response):
77 urls = response.xpath('//ol[@class="state-list"]/li/a/@href').extract()
78 for path in urls:
79 yield scrapy.Request(response.urljoin(path), callback=self.parse_state_stores)
80
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/pizzaranch.py b/locations/spiders/pizzaranch.py
--- a/locations/spiders/pizzaranch.py
+++ b/locations/spiders/pizzaranch.py
@@ -51,6 +51,7 @@
map_data = response.xpath('normalize-space(//script[@id="js-map-config-dir-map-nap-map"]/text())').extract_first()
map_json = json.loads(map_data)
properties = {
+ 'name': response.xpath('//span[@class="location-name-geo"]/text()').extract_first(),
'addr_full': response.xpath('normalize-space(//span[@itemprop="streetAddress"]/span/text())').extract_first(),
'phone': response.xpath('normalize-space(//div[@class="c-phone-number c-phone-main-number"]/span[@class="c-phone-number-span c-phone-main-number-span"]/text())').extract_first(),
'city': response.xpath('normalize-space(//span[@itemprop="addressLocality"]/text())').extract_first(),
@@ -68,11 +69,17 @@
yield GeojsonPointItem(**properties)
+
def parse_state_stores(self, response):
stores = response.xpath('//h3[@class="title"]/a/@href').extract()
for store in stores:
yield scrapy.Request(response.urljoin(store), callback=self.parse_stores)
+ next_page_url = response.xpath('//div[@class="pagination"]//li[@class="next"]/a/@href').extract_first()
+
+ if next_page_url:
+ yield scrapy.Request(next_page_url, callback=self.parse_state_stores)
+
def parse(self, response):
urls = response.xpath('//ol[@class="state-list"]/li/a/@href').extract()
for path in urls:
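The essential change is following the "next" pagination link back into the same callback. A self-contained sketch of that pattern (placeholder URL and selectors, not tied to the real pizzaranch.com markup):

```python
# Generic pagination-following sketch; the URL and selectors are placeholders.
import scrapy

class PaginatedSpider(scrapy.Spider):
    name = "paginated_example"
    start_urls = ["https://example.com/locations?page=1"]  # hypothetical listing page

    def parse(self, response):
        for href in response.xpath('//h3[@class="title"]/a/@href').extract():
            yield scrapy.Request(response.urljoin(href), callback=self.parse_store)

        next_page = response.xpath('//li[@class="next"]/a/@href').extract_first()
        if next_page:
            # Keep re-entering this callback until no "next" link remains.
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)

    def parse_store(self, response):
        yield {"url": response.url}
```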
|
{"golden_diff": "diff --git a/locations/spiders/pizzaranch.py b/locations/spiders/pizzaranch.py\n--- a/locations/spiders/pizzaranch.py\n+++ b/locations/spiders/pizzaranch.py\n@@ -51,6 +51,7 @@\n map_data = response.xpath('normalize-space(//script[@id=\"js-map-config-dir-map-nap-map\"]/text())').extract_first()\n map_json = json.loads(map_data)\n properties = {\n+ 'name': response.xpath('//span[@class=\"location-name-geo\"]/text()').extract_first(),\n 'addr_full': response.xpath('normalize-space(//span[@itemprop=\"streetAddress\"]/span/text())').extract_first(),\n 'phone': response.xpath('normalize-space(//div[@class=\"c-phone-number c-phone-main-number\"]/span[@class=\"c-phone-number-span c-phone-main-number-span\"]/text())').extract_first(),\n 'city': response.xpath('normalize-space(//span[@itemprop=\"addressLocality\"]/text())').extract_first(),\n@@ -68,11 +69,17 @@\n \n yield GeojsonPointItem(**properties)\n \n+\n def parse_state_stores(self, response):\n stores = response.xpath('//h3[@class=\"title\"]/a/@href').extract()\n for store in stores:\n yield scrapy.Request(response.urljoin(store), callback=self.parse_stores)\n \n+ next_page_url = response.xpath('//div[@class=\"pagination\"]//li[@class=\"next\"]/a/@href').extract_first()\n+\n+ if next_page_url:\n+ yield scrapy.Request(next_page_url, callback=self.parse_state_stores)\n+\n def parse(self, response):\n urls = response.xpath('//ol[@class=\"state-list\"]/li/a/@href').extract()\n for path in urls:\n", "issue": "Pizza Ranch \nSpider #29 is not collecting all locations, it's only capturing the 1st page on each state list.\r\n\n", "before_files": [{"content": "import scrapy\nimport re\nfrom locations.items import GeojsonPointItem\nimport json\nclass CVSSpider(scrapy.Spider):\n\n name = \"pizzaranch\"\n allowed_domains = [\"pizzaranch.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://pizzaranch.com/locations',\n )\n def parse_times(self, times):\n if times.strip() == 'Open 24 hours':\n return '24/7'\n hours_to = [x.strip() for x in times.split('-')]\n cleaned_times = []\n\n for hour in hours_to:\n if re.search('PM$', hour):\n hour = re.sub('PM', '', hour).strip()\n hour_min = hour.split(\":\")\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n cleaned_times.append(\":\".join(hour_min))\n\n if re.search('AM$', hour):\n hour = re.sub('AM', '', hour).strip()\n hour_min = hour.split(\":\")\n if len(hour_min[0]) <2:\n hour_min[0] = hour_min[0].zfill(2)\n else:\n hour_min[0] = str(int(hour_min[0]))\n\n cleaned_times.append(\":\".join(hour_min))\n return \"-\".join(cleaned_times)\n\n def parse_hours(self, lis):\n hours = []\n for li in lis:\n day = li.xpath('normalize-space(.//td[@class=\"c-location-hours-details-row-day\"]/text())').extract_first()[:2]\n times = li.xpath('.//td[@class=\"c-location-hours-details-row-intervals\"]/span/span/text()').extract()\n times = \"\".join(x for x in times)\n if times and day:\n parsed_time = self.parse_times(times)\n hours.append(day + ' ' + parsed_time)\n\n return \"; \".join(hours)\n\n def parse_stores(self, response):\n map_data = response.xpath('normalize-space(//script[@id=\"js-map-config-dir-map-nap-map\"]/text())').extract_first()\n map_json = json.loads(map_data)\n properties = {\n 'addr_full': response.xpath('normalize-space(//span[@itemprop=\"streetAddress\"]/span/text())').extract_first(),\n 'phone': response.xpath('normalize-space(//div[@class=\"c-phone-number c-phone-main-number\"]/span[@class=\"c-phone-number-span 
c-phone-main-number-span\"]/text())').extract_first(),\n 'city': response.xpath('normalize-space(//span[@itemprop=\"addressLocality\"]/text())').extract_first(),\n 'state': response.xpath('normalize-space(//abbr[@itemprop=\"addressRegion\"]/text())').extract_first(),\n 'postcode': response.xpath('normalize-space(//span[@itemprop=\"postalCode\"]/text())').extract_first(),\n 'ref': map_json['locs'][0]['id'],\n 'website': response.url,\n 'lat': float( map_json['locs'][0]['latitude']),\n 'lon': float( map_json['locs'][0]['longitude']),\n }\n hours = self.parse_hours(response.xpath('//div[@class=\"hours-primary hidden-xs\"]/div[@class=\"c-location-hours\"]/div[@class=\"c-location-hours-details-wrapper js-location-hours\"]/table/tbody/tr'))\n\n if hours:\n properties['opening_hours'] = hours\n\n yield GeojsonPointItem(**properties)\n\n def parse_state_stores(self, response):\n stores = response.xpath('//h3[@class=\"title\"]/a/@href').extract()\n for store in stores:\n yield scrapy.Request(response.urljoin(store), callback=self.parse_stores)\n\n def parse(self, response):\n urls = response.xpath('//ol[@class=\"state-list\"]/li/a/@href').extract()\n for path in urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_state_stores)\n", "path": "locations/spiders/pizzaranch.py"}], "after_files": [{"content": "import scrapy\nimport re\nfrom locations.items import GeojsonPointItem\nimport json\nclass CVSSpider(scrapy.Spider):\n\n name = \"pizzaranch\"\n allowed_domains = [\"pizzaranch.com\"]\n download_delay = 0.5\n start_urls = (\n 'https://pizzaranch.com/locations',\n )\n def parse_times(self, times):\n if times.strip() == 'Open 24 hours':\n return '24/7'\n hours_to = [x.strip() for x in times.split('-')]\n cleaned_times = []\n\n for hour in hours_to:\n if re.search('PM$', hour):\n hour = re.sub('PM', '', hour).strip()\n hour_min = hour.split(\":\")\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n cleaned_times.append(\":\".join(hour_min))\n\n if re.search('AM$', hour):\n hour = re.sub('AM', '', hour).strip()\n hour_min = hour.split(\":\")\n if len(hour_min[0]) <2:\n hour_min[0] = hour_min[0].zfill(2)\n else:\n hour_min[0] = str(int(hour_min[0]))\n\n cleaned_times.append(\":\".join(hour_min))\n return \"-\".join(cleaned_times)\n\n def parse_hours(self, lis):\n hours = []\n for li in lis:\n day = li.xpath('normalize-space(.//td[@class=\"c-location-hours-details-row-day\"]/text())').extract_first()[:2]\n times = li.xpath('.//td[@class=\"c-location-hours-details-row-intervals\"]/span/span/text()').extract()\n times = \"\".join(x for x in times)\n if times and day:\n parsed_time = self.parse_times(times)\n hours.append(day + ' ' + parsed_time)\n\n return \"; \".join(hours)\n\n def parse_stores(self, response):\n map_data = response.xpath('normalize-space(//script[@id=\"js-map-config-dir-map-nap-map\"]/text())').extract_first()\n map_json = json.loads(map_data)\n properties = {\n 'name': response.xpath('//span[@class=\"location-name-geo\"]/text()').extract_first(),\n 'addr_full': response.xpath('normalize-space(//span[@itemprop=\"streetAddress\"]/span/text())').extract_first(),\n 'phone': response.xpath('normalize-space(//div[@class=\"c-phone-number c-phone-main-number\"]/span[@class=\"c-phone-number-span c-phone-main-number-span\"]/text())').extract_first(),\n 'city': response.xpath('normalize-space(//span[@itemprop=\"addressLocality\"]/text())').extract_first(),\n 'state': 
response.xpath('normalize-space(//abbr[@itemprop=\"addressRegion\"]/text())').extract_first(),\n 'postcode': response.xpath('normalize-space(//span[@itemprop=\"postalCode\"]/text())').extract_first(),\n 'ref': map_json['locs'][0]['id'],\n 'website': response.url,\n 'lat': float( map_json['locs'][0]['latitude']),\n 'lon': float( map_json['locs'][0]['longitude']),\n }\n hours = self.parse_hours(response.xpath('//div[@class=\"hours-primary hidden-xs\"]/div[@class=\"c-location-hours\"]/div[@class=\"c-location-hours-details-wrapper js-location-hours\"]/table/tbody/tr'))\n\n if hours:\n properties['opening_hours'] = hours\n\n yield GeojsonPointItem(**properties)\n\n\n def parse_state_stores(self, response):\n stores = response.xpath('//h3[@class=\"title\"]/a/@href').extract()\n for store in stores:\n yield scrapy.Request(response.urljoin(store), callback=self.parse_stores)\n\n next_page_url = response.xpath('//div[@class=\"pagination\"]//li[@class=\"next\"]/a/@href').extract_first()\n\n if next_page_url:\n yield scrapy.Request(next_page_url, callback=self.parse_state_stores)\n\n def parse(self, response):\n urls = response.xpath('//ol[@class=\"state-list\"]/li/a/@href').extract()\n for path in urls:\n yield scrapy.Request(response.urljoin(path), callback=self.parse_state_stores)\n", "path": "locations/spiders/pizzaranch.py"}]}
| 1,283 | 386 |
gh_patches_debug_4925
|
rasdani/github-patches
|
git_diff
|
scalableminds__webknossos-libs-324
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BoundingBox and BoundingBoxNamedTuple should have bidirectional conversion
A lot of voxelytics code does nothing but convert between these two types, which could be made DRYer by moving the conversion into the API
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wkcuber/api/bounding_box.py`
Content:
```
1 # mypy: allow-untyped-defs
2 import json
3 import re
4 from typing import (
5 Dict,
6 Generator,
7 Iterable,
8 List,
9 Optional,
10 Tuple,
11 Union,
12 NamedTuple,
13 cast,
14 )
15
16 import numpy as np
17
18 from wkcuber.mag import Mag
19
20 Shape3D = Union[List[int], Tuple[int, int, int], np.ndarray]
21
22
23 class BoundingBoxNamedTuple(NamedTuple):
24 topleft: Tuple[int, int, int]
25 size: Tuple[int, int, int]
26
27
28 class BoundingBox:
29 def __init__(self, topleft: Shape3D, size: Shape3D):
30
31 self.topleft = np.array(topleft, dtype=np.int)
32 self.size = np.array(size, dtype=np.int)
33
34 @property
35 def bottomright(self) -> np.ndarray:
36
37 return self.topleft + self.size
38
39 @staticmethod
40 def from_wkw(bbox: Dict) -> "BoundingBox":
41 return BoundingBox(
42 bbox["topLeft"], [bbox["width"], bbox["height"], bbox["depth"]]
43 )
44
45 @staticmethod
46 def from_config(bbox: Dict) -> "BoundingBox":
47 return BoundingBox(bbox["topleft"], bbox["size"])
48
49 @staticmethod
50 def from_tuple6(tuple6: Tuple[int, int, int, int, int, int]) -> "BoundingBox":
51 return BoundingBox(tuple6[0:3], tuple6[3:6])
52
53 @staticmethod
54 def from_tuple2(tuple2: Tuple[Shape3D, Shape3D]) -> "BoundingBox":
55 return BoundingBox(tuple2[0], tuple2[1])
56
57 @staticmethod
58 def from_points(points: Iterable[Shape3D]) -> "BoundingBox":
59
60 all_points = np.array(points)
61 topleft = all_points.min(axis=0)
62 bottomright = all_points.max(axis=0)
63
64 # bottomright is exclusive
65 bottomright += 1
66
67 return BoundingBox(topleft, bottomright - topleft)
68
69 @staticmethod
70 def from_named_tuple(bb_named_tuple: BoundingBoxNamedTuple):
71
72 return BoundingBox(bb_named_tuple.topleft, bb_named_tuple.size)
73
74 @staticmethod
75 def from_checkpoint_name(checkpoint_name: str) -> "BoundingBox":
76 """This function extracts a bounding box in the format x_y_z_sx_sy_xz which is contained in a string."""
77 regex = r"(([0-9]+_){5}([0-9]+))"
78 match = re.search(regex, checkpoint_name)
79 assert (
80 match is not None
81 ), f"Could not extract bounding box from {checkpoint_name}"
82 bbox_tuple = tuple(int(value) for value in match.group().split("_"))
83 topleft = cast(Tuple[int, int, int], bbox_tuple[:3])
84 size = cast(Tuple[int, int, int], bbox_tuple[3:6])
85 return BoundingBox.from_tuple2((topleft, size))
86
87 @staticmethod
88 def from_csv(csv_bbox: str) -> "BoundingBox":
89 bbox_tuple = tuple(int(x) for x in csv_bbox.split(","))
90 return BoundingBox.from_tuple6(
91 cast(Tuple[int, int, int, int, int, int], bbox_tuple)
92 )
93
94 @staticmethod
95 def from_auto(obj) -> "BoundingBox":
96 if isinstance(obj, BoundingBox):
97 return obj
98 elif isinstance(obj, str):
99 if ":" in obj:
100 return BoundingBox.from_auto(json.loads(obj))
101 else:
102 return BoundingBox.from_csv(obj)
103 elif isinstance(obj, dict):
104 return BoundingBox.from_wkw(obj)
105 elif isinstance(obj, BoundingBoxNamedTuple):
106 return BoundingBox.from_named_tuple(obj)
107 elif isinstance(obj, list) or isinstance(obj, tuple):
108 if len(obj) == 2:
109 return BoundingBox.from_tuple2(obj) # type: ignore
110 elif len(obj) == 6:
111 return BoundingBox.from_tuple6(obj) # type: ignore
112
113 raise Exception("Unknown bounding box format.")
114
115 def as_wkw(self) -> dict:
116
117 width, height, depth = self.size.tolist()
118
119 return {
120 "topLeft": self.topleft.tolist(),
121 "width": width,
122 "height": height,
123 "depth": depth,
124 }
125
126 def as_config(self) -> dict:
127
128 return {"topleft": self.topleft.tolist(), "size": self.size.tolist()}
129
130 def as_checkpoint_name(self) -> str:
131
132 x, y, z = self.topleft
133 width, height, depth = self.size
134 return "{x}_{y}_{z}_{width}_{height}_{depth}".format(
135 x=x, y=y, z=z, width=width, height=height, depth=depth
136 )
137
138 def as_tuple6(self) -> Tuple[int, int, int, int, int, int]:
139
140 return tuple(self.topleft.tolist() + self.size.tolist()) # type: ignore
141
142 def as_csv(self) -> str:
143
144 return ",".join(map(str, self.as_tuple6()))
145
146 def __repr__(self) -> str:
147
148 return "BoundingBox(topleft={}, size={})".format(
149 str(tuple(self.topleft)), str(tuple(self.size))
150 )
151
152 def __str__(self) -> str:
153
154 return self.__repr__()
155
156 def __eq__(self, other) -> bool:
157
158 return np.array_equal(self.topleft, other.topleft) and np.array_equal(
159 self.size, other.size
160 )
161
162 def padded_with_margins(
163 self, margins_left: Shape3D, margins_right: Optional[Shape3D] = None
164 ) -> "BoundingBox":
165
166 if margins_right is None:
167 margins_right = margins_left
168
169 margins_left = np.array(margins_left)
170 margins_right = np.array(margins_right)
171
172 return BoundingBox(
173 topleft=self.topleft - margins_left,
174 size=self.size + (margins_left + margins_right),
175 )
176
177 def intersected_with(
178 self, other: "BoundingBox", dont_assert=False
179 ) -> "BoundingBox":
180         """ If dont_assert is set to True, this method may return empty bounding boxes (size == (0, 0, 0)) """
181
182 topleft = np.maximum(self.topleft, other.topleft)
183 bottomright = np.minimum(self.bottomright, other.bottomright)
184 size = np.maximum(bottomright - topleft, (0, 0, 0))
185
186 intersection = BoundingBox(topleft, size)
187
188 if not dont_assert:
189 assert (
190 not intersection.is_empty()
191 ), f"No intersection between bounding boxes {self} and {other}."
192
193 return intersection
194
195 def extended_by(self, other: "BoundingBox") -> "BoundingBox":
196
197 topleft = np.minimum(self.topleft, other.topleft)
198 bottomright = np.maximum(self.bottomright, other.bottomright)
199 size = bottomright - topleft
200
201 return BoundingBox(topleft, size)
202
203 def is_empty(self) -> bool:
204
205 return not all(self.size > 0)
206
207 def in_mag(self, mag: Mag) -> "BoundingBox":
208
209 np_mag = np.array(mag.to_array())
210
211 assert (
212 np.count_nonzero(self.topleft % np_mag) == 0
213 ), f"topleft {self.topleft} is not aligned with the mag {mag}. Use BoundingBox.align_with_mag()."
214 assert (
215 np.count_nonzero(self.bottomright % np_mag) == 0
216 ), f"bottomright {self.bottomright} is not aligned with the mag {mag}. Use BoundingBox.align_with_mag()."
217
218 return BoundingBox(
219 topleft=(self.topleft // np_mag).astype(np.int),
220 size=(self.size // np_mag).astype(np.int),
221 )
222
223 def align_with_mag(self, mag: Mag, ceil=False):
224 """Rounds the bounding box, so that both topleft and bottomright are divisible by mag.
225
226         :argument ceil: If true, the bounding box is enlarged when necessary. If false, it's shrunk when necessary.
227 """
228
229 np_mag = np.array(mag.to_array())
230
231 align = lambda point, round_fn: round_fn(point / np_mag).astype(np.int) * np_mag
232
233 if ceil:
234 topleft = align(self.topleft, np.floor)
235 bottomright = align(self.bottomright, np.ceil)
236 else:
237 topleft = align(self.topleft, np.ceil)
238 bottomright = align(self.bottomright, np.floor)
239 return BoundingBox(topleft, bottomright - topleft)
240
241 def contains(self, coord: Shape3D) -> bool:
242
243 coord = np.array(coord)
244
245 return np.all(coord >= self.topleft) and np.all(
246 coord < self.topleft + self.size
247 )
248
249 def contains_bbox(self, inner_bbox: "BoundingBox") -> bool:
250 return inner_bbox.intersected_with(self, dont_assert=True) == inner_bbox
251
252 def chunk(
253 self, chunk_size: Shape3D, chunk_border_alignments: Optional[List[int]] = None
254 ) -> Generator["BoundingBox", None, None]:
255 """Decompose the bounding box into smaller chunks of size `chunk_size`.
256
257 Chunks at the border of the bounding box might be smaller than chunk_size.
258 If `chunk_border_alignment` is set, all border coordinates
259 *between two chunks* will be divisible by that value.
260 """
261
262 start = self.topleft.copy()
263 chunk_size = np.array(chunk_size)
264
265 start_adjust = np.array([0, 0, 0])
266 if chunk_border_alignments is not None:
267
268 chunk_border_alignments = np.array(chunk_border_alignments)
269 assert np.all(
270 chunk_size % chunk_border_alignments == 0
271 ), f"{chunk_size} not divisible by {chunk_border_alignments}"
272
273 # Move the start to be aligned correctly. This doesn't actually change
274 # the start of the first chunk, because we'll intersect with `self`,
275 # but it'll lead to all chunk borders being aligned correctly.
276 start_adjust = start % chunk_border_alignments
277
278 for x in range(
279 start[0] - start_adjust[0], start[0] + self.size[0], chunk_size[0]
280 ):
281 for y in range(
282 start[1] - start_adjust[1], start[1] + self.size[1], chunk_size[1]
283 ):
284 for z in range(
285 start[2] - start_adjust[2], start[2] + self.size[2], chunk_size[2]
286 ):
287
288 yield BoundingBox([x, y, z], chunk_size).intersected_with(self)
289
290 def volume(self) -> int:
291
292 return self.size.prod()
293
294 def slice_array(self, array: np.ndarray) -> np.ndarray:
295
296 return array[
297 self.topleft[0] : self.bottomright[0],
298 self.topleft[1] : self.bottomright[1],
299 self.topleft[2] : self.bottomright[2],
300 ]
301
302 def as_slices(self) -> Tuple[slice, slice, slice]:
303 return np.index_exp[
304 self.topleft[0] : self.bottomright[0],
305 self.topleft[1] : self.bottomright[1],
306 self.topleft[2] : self.bottomright[2],
307 ]
308
309 def copy(self) -> "BoundingBox":
310
311 return BoundingBox(self.topleft.copy(), self.bottomright.copy())
312
313 def offset(self, vector: Tuple[int, int, int]) -> "BoundingBox":
314
315 return BoundingBox(self.topleft + np.array(vector), self.size.copy())
316
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wkcuber/api/bounding_box.py b/wkcuber/api/bounding_box.py
--- a/wkcuber/api/bounding_box.py
+++ b/wkcuber/api/bounding_box.py
@@ -143,6 +143,12 @@
return ",".join(map(str, self.as_tuple6()))
+ def as_named_tuple(self) -> BoundingBoxNamedTuple:
+ return BoundingBoxNamedTuple(
+ topleft=cast(Tuple[int, int, int], tuple(self.topleft)),
+ size=cast(Tuple[int, int, int], tuple(self.size)),
+ )
+
def __repr__(self) -> str:
return "BoundingBox(topleft={}, size={})".format(
|
{"golden_diff": "diff --git a/wkcuber/api/bounding_box.py b/wkcuber/api/bounding_box.py\n--- a/wkcuber/api/bounding_box.py\n+++ b/wkcuber/api/bounding_box.py\n@@ -143,6 +143,12 @@\n \n return \",\".join(map(str, self.as_tuple6()))\n \n+ def as_named_tuple(self) -> BoundingBoxNamedTuple:\n+ return BoundingBoxNamedTuple(\n+ topleft=cast(Tuple[int, int, int], tuple(self.topleft)),\n+ size=cast(Tuple[int, int, int], tuple(self.size)),\n+ )\n+\n def __repr__(self) -> str:\n \n return \"BoundingBox(topleft={}, size={})\".format(\n", "issue": "BoundingBox and BoundingBoxNamedTuple should have bidirectional conversion\nA lot of voxelytics code does nothing but converting these, which could be DRYer by moving the conversion into the API\n", "before_files": [{"content": "# mypy: allow-untyped-defs\nimport json\nimport re\nfrom typing import (\n Dict,\n Generator,\n Iterable,\n List,\n Optional,\n Tuple,\n Union,\n NamedTuple,\n cast,\n)\n\nimport numpy as np\n\nfrom wkcuber.mag import Mag\n\nShape3D = Union[List[int], Tuple[int, int, int], np.ndarray]\n\n\nclass BoundingBoxNamedTuple(NamedTuple):\n topleft: Tuple[int, int, int]\n size: Tuple[int, int, int]\n\n\nclass BoundingBox:\n def __init__(self, topleft: Shape3D, size: Shape3D):\n\n self.topleft = np.array(topleft, dtype=np.int)\n self.size = np.array(size, dtype=np.int)\n\n @property\n def bottomright(self) -> np.ndarray:\n\n return self.topleft + self.size\n\n @staticmethod\n def from_wkw(bbox: Dict) -> \"BoundingBox\":\n return BoundingBox(\n bbox[\"topLeft\"], [bbox[\"width\"], bbox[\"height\"], bbox[\"depth\"]]\n )\n\n @staticmethod\n def from_config(bbox: Dict) -> \"BoundingBox\":\n return BoundingBox(bbox[\"topleft\"], bbox[\"size\"])\n\n @staticmethod\n def from_tuple6(tuple6: Tuple[int, int, int, int, int, int]) -> \"BoundingBox\":\n return BoundingBox(tuple6[0:3], tuple6[3:6])\n\n @staticmethod\n def from_tuple2(tuple2: Tuple[Shape3D, Shape3D]) -> \"BoundingBox\":\n return BoundingBox(tuple2[0], tuple2[1])\n\n @staticmethod\n def from_points(points: Iterable[Shape3D]) -> \"BoundingBox\":\n\n all_points = np.array(points)\n topleft = all_points.min(axis=0)\n bottomright = all_points.max(axis=0)\n\n # bottomright is exclusive\n bottomright += 1\n\n return BoundingBox(topleft, bottomright - topleft)\n\n @staticmethod\n def from_named_tuple(bb_named_tuple: BoundingBoxNamedTuple):\n\n return BoundingBox(bb_named_tuple.topleft, bb_named_tuple.size)\n\n @staticmethod\n def from_checkpoint_name(checkpoint_name: str) -> \"BoundingBox\":\n \"\"\"This function extracts a bounding box in the format x_y_z_sx_sy_xz which is contained in a string.\"\"\"\n regex = r\"(([0-9]+_){5}([0-9]+))\"\n match = re.search(regex, checkpoint_name)\n assert (\n match is not None\n ), f\"Could not extract bounding box from {checkpoint_name}\"\n bbox_tuple = tuple(int(value) for value in match.group().split(\"_\"))\n topleft = cast(Tuple[int, int, int], bbox_tuple[:3])\n size = cast(Tuple[int, int, int], bbox_tuple[3:6])\n return BoundingBox.from_tuple2((topleft, size))\n\n @staticmethod\n def from_csv(csv_bbox: str) -> \"BoundingBox\":\n bbox_tuple = tuple(int(x) for x in csv_bbox.split(\",\"))\n return BoundingBox.from_tuple6(\n cast(Tuple[int, int, int, int, int, int], bbox_tuple)\n )\n\n @staticmethod\n def from_auto(obj) -> \"BoundingBox\":\n if isinstance(obj, BoundingBox):\n return obj\n elif isinstance(obj, str):\n if \":\" in obj:\n return BoundingBox.from_auto(json.loads(obj))\n else:\n return BoundingBox.from_csv(obj)\n elif isinstance(obj, dict):\n 
return BoundingBox.from_wkw(obj)\n elif isinstance(obj, BoundingBoxNamedTuple):\n return BoundingBox.from_named_tuple(obj)\n elif isinstance(obj, list) or isinstance(obj, tuple):\n if len(obj) == 2:\n return BoundingBox.from_tuple2(obj) # type: ignore\n elif len(obj) == 6:\n return BoundingBox.from_tuple6(obj) # type: ignore\n\n raise Exception(\"Unknown bounding box format.\")\n\n def as_wkw(self) -> dict:\n\n width, height, depth = self.size.tolist()\n\n return {\n \"topLeft\": self.topleft.tolist(),\n \"width\": width,\n \"height\": height,\n \"depth\": depth,\n }\n\n def as_config(self) -> dict:\n\n return {\"topleft\": self.topleft.tolist(), \"size\": self.size.tolist()}\n\n def as_checkpoint_name(self) -> str:\n\n x, y, z = self.topleft\n width, height, depth = self.size\n return \"{x}_{y}_{z}_{width}_{height}_{depth}\".format(\n x=x, y=y, z=z, width=width, height=height, depth=depth\n )\n\n def as_tuple6(self) -> Tuple[int, int, int, int, int, int]:\n\n return tuple(self.topleft.tolist() + self.size.tolist()) # type: ignore\n\n def as_csv(self) -> str:\n\n return \",\".join(map(str, self.as_tuple6()))\n\n def __repr__(self) -> str:\n\n return \"BoundingBox(topleft={}, size={})\".format(\n str(tuple(self.topleft)), str(tuple(self.size))\n )\n\n def __str__(self) -> str:\n\n return self.__repr__()\n\n def __eq__(self, other) -> bool:\n\n return np.array_equal(self.topleft, other.topleft) and np.array_equal(\n self.size, other.size\n )\n\n def padded_with_margins(\n self, margins_left: Shape3D, margins_right: Optional[Shape3D] = None\n ) -> \"BoundingBox\":\n\n if margins_right is None:\n margins_right = margins_left\n\n margins_left = np.array(margins_left)\n margins_right = np.array(margins_right)\n\n return BoundingBox(\n topleft=self.topleft - margins_left,\n size=self.size + (margins_left + margins_right),\n )\n\n def intersected_with(\n self, other: \"BoundingBox\", dont_assert=False\n ) -> \"BoundingBox\":\n \"\"\" If dont_assert is set to False, this method may return empty bounding boxes (size == (0, 0, 0)) \"\"\"\n\n topleft = np.maximum(self.topleft, other.topleft)\n bottomright = np.minimum(self.bottomright, other.bottomright)\n size = np.maximum(bottomright - topleft, (0, 0, 0))\n\n intersection = BoundingBox(topleft, size)\n\n if not dont_assert:\n assert (\n not intersection.is_empty()\n ), f\"No intersection between bounding boxes {self} and {other}.\"\n\n return intersection\n\n def extended_by(self, other: \"BoundingBox\") -> \"BoundingBox\":\n\n topleft = np.minimum(self.topleft, other.topleft)\n bottomright = np.maximum(self.bottomright, other.bottomright)\n size = bottomright - topleft\n\n return BoundingBox(topleft, size)\n\n def is_empty(self) -> bool:\n\n return not all(self.size > 0)\n\n def in_mag(self, mag: Mag) -> \"BoundingBox\":\n\n np_mag = np.array(mag.to_array())\n\n assert (\n np.count_nonzero(self.topleft % np_mag) == 0\n ), f\"topleft {self.topleft} is not aligned with the mag {mag}. Use BoundingBox.align_with_mag().\"\n assert (\n np.count_nonzero(self.bottomright % np_mag) == 0\n ), f\"bottomright {self.bottomright} is not aligned with the mag {mag}. Use BoundingBox.align_with_mag().\"\n\n return BoundingBox(\n topleft=(self.topleft // np_mag).astype(np.int),\n size=(self.size // np_mag).astype(np.int),\n )\n\n def align_with_mag(self, mag: Mag, ceil=False):\n \"\"\"Rounds the bounding box, so that both topleft and bottomright are divisible by mag.\n\n :argument ceil: If true, the bounding box is enlarged when necessary. 
If false, it's shrinked when necessary.\n \"\"\"\n\n np_mag = np.array(mag.to_array())\n\n align = lambda point, round_fn: round_fn(point / np_mag).astype(np.int) * np_mag\n\n if ceil:\n topleft = align(self.topleft, np.floor)\n bottomright = align(self.bottomright, np.ceil)\n else:\n topleft = align(self.topleft, np.ceil)\n bottomright = align(self.bottomright, np.floor)\n return BoundingBox(topleft, bottomright - topleft)\n\n def contains(self, coord: Shape3D) -> bool:\n\n coord = np.array(coord)\n\n return np.all(coord >= self.topleft) and np.all(\n coord < self.topleft + self.size\n )\n\n def contains_bbox(self, inner_bbox: \"BoundingBox\") -> bool:\n return inner_bbox.intersected_with(self, dont_assert=True) == inner_bbox\n\n def chunk(\n self, chunk_size: Shape3D, chunk_border_alignments: Optional[List[int]] = None\n ) -> Generator[\"BoundingBox\", None, None]:\n \"\"\"Decompose the bounding box into smaller chunks of size `chunk_size`.\n\n Chunks at the border of the bounding box might be smaller than chunk_size.\n If `chunk_border_alignment` is set, all border coordinates\n *between two chunks* will be divisible by that value.\n \"\"\"\n\n start = self.topleft.copy()\n chunk_size = np.array(chunk_size)\n\n start_adjust = np.array([0, 0, 0])\n if chunk_border_alignments is not None:\n\n chunk_border_alignments = np.array(chunk_border_alignments)\n assert np.all(\n chunk_size % chunk_border_alignments == 0\n ), f\"{chunk_size} not divisible by {chunk_border_alignments}\"\n\n # Move the start to be aligned correctly. This doesn't actually change\n # the start of the first chunk, because we'll intersect with `self`,\n # but it'll lead to all chunk borders being aligned correctly.\n start_adjust = start % chunk_border_alignments\n\n for x in range(\n start[0] - start_adjust[0], start[0] + self.size[0], chunk_size[0]\n ):\n for y in range(\n start[1] - start_adjust[1], start[1] + self.size[1], chunk_size[1]\n ):\n for z in range(\n start[2] - start_adjust[2], start[2] + self.size[2], chunk_size[2]\n ):\n\n yield BoundingBox([x, y, z], chunk_size).intersected_with(self)\n\n def volume(self) -> int:\n\n return self.size.prod()\n\n def slice_array(self, array: np.ndarray) -> np.ndarray:\n\n return array[\n self.topleft[0] : self.bottomright[0],\n self.topleft[1] : self.bottomright[1],\n self.topleft[2] : self.bottomright[2],\n ]\n\n def as_slices(self) -> Tuple[slice, slice, slice]:\n return np.index_exp[\n self.topleft[0] : self.bottomright[0],\n self.topleft[1] : self.bottomright[1],\n self.topleft[2] : self.bottomright[2],\n ]\n\n def copy(self) -> \"BoundingBox\":\n\n return BoundingBox(self.topleft.copy(), self.bottomright.copy())\n\n def offset(self, vector: Tuple[int, int, int]) -> \"BoundingBox\":\n\n return BoundingBox(self.topleft + np.array(vector), self.size.copy())\n", "path": "wkcuber/api/bounding_box.py"}], "after_files": [{"content": "# mypy: allow-untyped-defs\nimport json\nimport re\nfrom typing import (\n Dict,\n Generator,\n Iterable,\n List,\n Optional,\n Tuple,\n Union,\n NamedTuple,\n cast,\n)\n\nimport numpy as np\n\nfrom wkcuber.mag import Mag\n\nShape3D = Union[List[int], Tuple[int, int, int], np.ndarray]\n\n\nclass BoundingBoxNamedTuple(NamedTuple):\n topleft: Tuple[int, int, int]\n size: Tuple[int, int, int]\n\n\nclass BoundingBox:\n def __init__(self, topleft: Shape3D, size: Shape3D):\n\n self.topleft = np.array(topleft, dtype=np.int)\n self.size = np.array(size, dtype=np.int)\n\n @property\n def bottomright(self) -> np.ndarray:\n\n return self.topleft + 
self.size\n\n @staticmethod\n def from_wkw(bbox: Dict) -> \"BoundingBox\":\n return BoundingBox(\n bbox[\"topLeft\"], [bbox[\"width\"], bbox[\"height\"], bbox[\"depth\"]]\n )\n\n @staticmethod\n def from_config(bbox: Dict) -> \"BoundingBox\":\n return BoundingBox(bbox[\"topleft\"], bbox[\"size\"])\n\n @staticmethod\n def from_tuple6(tuple6: Tuple[int, int, int, int, int, int]) -> \"BoundingBox\":\n return BoundingBox(tuple6[0:3], tuple6[3:6])\n\n @staticmethod\n def from_tuple2(tuple2: Tuple[Shape3D, Shape3D]) -> \"BoundingBox\":\n return BoundingBox(tuple2[0], tuple2[1])\n\n @staticmethod\n def from_points(points: Iterable[Shape3D]) -> \"BoundingBox\":\n\n all_points = np.array(points)\n topleft = all_points.min(axis=0)\n bottomright = all_points.max(axis=0)\n\n # bottomright is exclusive\n bottomright += 1\n\n return BoundingBox(topleft, bottomright - topleft)\n\n @staticmethod\n def from_named_tuple(bb_named_tuple: BoundingBoxNamedTuple):\n\n return BoundingBox(bb_named_tuple.topleft, bb_named_tuple.size)\n\n @staticmethod\n def from_checkpoint_name(checkpoint_name: str) -> \"BoundingBox\":\n \"\"\"This function extracts a bounding box in the format x_y_z_sx_sy_xz which is contained in a string.\"\"\"\n regex = r\"(([0-9]+_){5}([0-9]+))\"\n match = re.search(regex, checkpoint_name)\n assert (\n match is not None\n ), f\"Could not extract bounding box from {checkpoint_name}\"\n bbox_tuple = tuple(int(value) for value in match.group().split(\"_\"))\n topleft = cast(Tuple[int, int, int], bbox_tuple[:3])\n size = cast(Tuple[int, int, int], bbox_tuple[3:6])\n return BoundingBox.from_tuple2((topleft, size))\n\n @staticmethod\n def from_csv(csv_bbox: str) -> \"BoundingBox\":\n bbox_tuple = tuple(int(x) for x in csv_bbox.split(\",\"))\n return BoundingBox.from_tuple6(\n cast(Tuple[int, int, int, int, int, int], bbox_tuple)\n )\n\n @staticmethod\n def from_auto(obj) -> \"BoundingBox\":\n if isinstance(obj, BoundingBox):\n return obj\n elif isinstance(obj, str):\n if \":\" in obj:\n return BoundingBox.from_auto(json.loads(obj))\n else:\n return BoundingBox.from_csv(obj)\n elif isinstance(obj, dict):\n return BoundingBox.from_wkw(obj)\n elif isinstance(obj, BoundingBoxNamedTuple):\n return BoundingBox.from_named_tuple(obj)\n elif isinstance(obj, list) or isinstance(obj, tuple):\n if len(obj) == 2:\n return BoundingBox.from_tuple2(obj) # type: ignore\n elif len(obj) == 6:\n return BoundingBox.from_tuple6(obj) # type: ignore\n\n raise Exception(\"Unknown bounding box format.\")\n\n def as_wkw(self) -> dict:\n\n width, height, depth = self.size.tolist()\n\n return {\n \"topLeft\": self.topleft.tolist(),\n \"width\": width,\n \"height\": height,\n \"depth\": depth,\n }\n\n def as_config(self) -> dict:\n\n return {\"topleft\": self.topleft.tolist(), \"size\": self.size.tolist()}\n\n def as_checkpoint_name(self) -> str:\n\n x, y, z = self.topleft\n width, height, depth = self.size\n return \"{x}_{y}_{z}_{width}_{height}_{depth}\".format(\n x=x, y=y, z=z, width=width, height=height, depth=depth\n )\n\n def as_tuple6(self) -> Tuple[int, int, int, int, int, int]:\n\n return tuple(self.topleft.tolist() + self.size.tolist()) # type: ignore\n\n def as_csv(self) -> str:\n\n return \",\".join(map(str, self.as_tuple6()))\n\n def as_named_tuple(self) -> BoundingBoxNamedTuple:\n return BoundingBoxNamedTuple(\n topleft=cast(Tuple[int, int, int], tuple(self.topleft)),\n size=cast(Tuple[int, int, int], tuple(self.size)),\n )\n\n def __repr__(self) -> str:\n\n return \"BoundingBox(topleft={}, size={})\".format(\n 
str(tuple(self.topleft)), str(tuple(self.size))\n )\n\n def __str__(self) -> str:\n\n return self.__repr__()\n\n def __eq__(self, other) -> bool:\n\n return np.array_equal(self.topleft, other.topleft) and np.array_equal(\n self.size, other.size\n )\n\n def padded_with_margins(\n self, margins_left: Shape3D, margins_right: Optional[Shape3D] = None\n ) -> \"BoundingBox\":\n\n if margins_right is None:\n margins_right = margins_left\n\n margins_left = np.array(margins_left)\n margins_right = np.array(margins_right)\n\n return BoundingBox(\n topleft=self.topleft - margins_left,\n size=self.size + (margins_left + margins_right),\n )\n\n def intersected_with(\n self, other: \"BoundingBox\", dont_assert=False\n ) -> \"BoundingBox\":\n \"\"\" If dont_assert is set to False, this method may return empty bounding boxes (size == (0, 0, 0)) \"\"\"\n\n topleft = np.maximum(self.topleft, other.topleft)\n bottomright = np.minimum(self.bottomright, other.bottomright)\n size = np.maximum(bottomright - topleft, (0, 0, 0))\n\n intersection = BoundingBox(topleft, size)\n\n if not dont_assert:\n assert (\n not intersection.is_empty()\n ), f\"No intersection between bounding boxes {self} and {other}.\"\n\n return intersection\n\n def extended_by(self, other: \"BoundingBox\") -> \"BoundingBox\":\n\n topleft = np.minimum(self.topleft, other.topleft)\n bottomright = np.maximum(self.bottomright, other.bottomright)\n size = bottomright - topleft\n\n return BoundingBox(topleft, size)\n\n def is_empty(self) -> bool:\n\n return not all(self.size > 0)\n\n def in_mag(self, mag: Mag) -> \"BoundingBox\":\n\n np_mag = np.array(mag.to_array())\n\n assert (\n np.count_nonzero(self.topleft % np_mag) == 0\n ), f\"topleft {self.topleft} is not aligned with the mag {mag}. Use BoundingBox.align_with_mag().\"\n assert (\n np.count_nonzero(self.bottomright % np_mag) == 0\n ), f\"bottomright {self.bottomright} is not aligned with the mag {mag}. Use BoundingBox.align_with_mag().\"\n\n return BoundingBox(\n topleft=(self.topleft // np_mag).astype(np.int),\n size=(self.size // np_mag).astype(np.int),\n )\n\n def align_with_mag(self, mag: Mag, ceil=False):\n \"\"\"Rounds the bounding box, so that both topleft and bottomright are divisible by mag.\n\n :argument ceil: If true, the bounding box is enlarged when necessary. 
If false, it's shrinked when necessary.\n \"\"\"\n\n np_mag = np.array(mag.to_array())\n\n align = lambda point, round_fn: round_fn(point / np_mag).astype(np.int) * np_mag\n\n if ceil:\n topleft = align(self.topleft, np.floor)\n bottomright = align(self.bottomright, np.ceil)\n else:\n topleft = align(self.topleft, np.ceil)\n bottomright = align(self.bottomright, np.floor)\n return BoundingBox(topleft, bottomright - topleft)\n\n def contains(self, coord: Shape3D) -> bool:\n\n coord = np.array(coord)\n\n return np.all(coord >= self.topleft) and np.all(\n coord < self.topleft + self.size\n )\n\n def contains_bbox(self, inner_bbox: \"BoundingBox\") -> bool:\n return inner_bbox.intersected_with(self, dont_assert=True) == inner_bbox\n\n def chunk(\n self, chunk_size: Shape3D, chunk_border_alignments: Optional[List[int]] = None\n ) -> Generator[\"BoundingBox\", None, None]:\n \"\"\"Decompose the bounding box into smaller chunks of size `chunk_size`.\n\n Chunks at the border of the bounding box might be smaller than chunk_size.\n If `chunk_border_alignment` is set, all border coordinates\n *between two chunks* will be divisible by that value.\n \"\"\"\n\n start = self.topleft.copy()\n chunk_size = np.array(chunk_size)\n\n start_adjust = np.array([0, 0, 0])\n if chunk_border_alignments is not None:\n\n chunk_border_alignments = np.array(chunk_border_alignments)\n assert np.all(\n chunk_size % chunk_border_alignments == 0\n ), f\"{chunk_size} not divisible by {chunk_border_alignments}\"\n\n # Move the start to be aligned correctly. This doesn't actually change\n # the start of the first chunk, because we'll intersect with `self`,\n # but it'll lead to all chunk borders being aligned correctly.\n start_adjust = start % chunk_border_alignments\n\n for x in range(\n start[0] - start_adjust[0], start[0] + self.size[0], chunk_size[0]\n ):\n for y in range(\n start[1] - start_adjust[1], start[1] + self.size[1], chunk_size[1]\n ):\n for z in range(\n start[2] - start_adjust[2], start[2] + self.size[2], chunk_size[2]\n ):\n\n yield BoundingBox([x, y, z], chunk_size).intersected_with(self)\n\n def volume(self) -> int:\n\n return self.size.prod()\n\n def slice_array(self, array: np.ndarray) -> np.ndarray:\n\n return array[\n self.topleft[0] : self.bottomright[0],\n self.topleft[1] : self.bottomright[1],\n self.topleft[2] : self.bottomright[2],\n ]\n\n def as_slices(self) -> Tuple[slice, slice, slice]:\n return np.index_exp[\n self.topleft[0] : self.bottomright[0],\n self.topleft[1] : self.bottomright[1],\n self.topleft[2] : self.bottomright[2],\n ]\n\n def copy(self) -> \"BoundingBox\":\n\n return BoundingBox(self.topleft.copy(), self.bottomright.copy())\n\n def offset(self, vector: Tuple[int, int, int]) -> \"BoundingBox\":\n\n return BoundingBox(self.topleft + np.array(vector), self.size.copy())\n", "path": "wkcuber/api/bounding_box.py"}]}
| 3,684 | 161 |
gh_patches_debug_11513
|
rasdani/github-patches
|
git_diff
|
microsoft__nni-5135
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error: Dispatcher stream error, tuner may have crashed. When using PolicyBasedRL to do nas and set concurrency>1
**Hello guys, I am using the PolicyBasedRL strategy to do multi-trial NAS experiments. When I set concurrency=2, the tuner crashed, while concurrency=1 works just fine. Here is the error message**:
```
[2022-09-08 02:09:28] ERROR: 1
Traceback (most recent call last):
File "/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni/runtime/msg_dispatcher_base.py", line 97, in command_queue_worker
self.process_command(command, data)
File "/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni/runtime/msg_dispatcher_base.py", line 143, in process_command
command_handlers[command](data)
File "/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni/nas/execution/common/integration.py", line 223, in handle_report_metric_data
self.invoke_callback('intermediate_metric', data['parameter_id'], self._process_value(data['value']))
File "/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni/nas/execution/common/integration.py", line 119, in invoke_callback
self.process_queued_callbacks()
File "/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni/nas/execution/common/integration.py", line 104, in process_queued_callbacks
self.intermediate_metric_callback(*call_args) # pylint: disable=not-callable
File "/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni/nas/execution/pytorch/graph.py", line 130, in _intermediate_metric_callback
model = self._running_models[trial_id]
KeyError: 1
[2022-09-08 02:09:29] Dispatcher exiting...
[2022-09-08 02:09:30] Dispatcher terminiated
Error: Dispatcher stream error, tuner may have crashed.
at EventEmitter.<anonymous> (/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni_node/core/nnimanager.js:647:32)
at EventEmitter.emit (node:events:526:28)
at WebSocketChannelImpl.handleError (/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:107:22)
at WebSocket.<anonymous> (/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:41:37)
at WebSocket.emit (node:events:538:35)
at WebSocket.emitClose (/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:246:10)
at Socket.socketOnClose (/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:1127:15)
at Socket.emit (node:events:526:28)
at TCP.<anonymous> (node:net:687:12)
```
**Environment**:
- NNI version:2.9/2.8
- Training service (local):
- Client OS:
- Server OS (for remote mode only):
- Python version: 3.8
- PyTorch/TensorFlow version: pytorch 1.21.1
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log: [nnimanager.log](https://github.com/microsoft/nni/files/9522943/nnimanager.log)
- dispatcher.log: [dispatcher.log](https://github.com/microsoft/nni/files/9522946/dispatcher.log)
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nni/nas/strategy/_rl_impl.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 # Licensed under the MIT license.
3
4 # This file might cause import error for those who didn't install RL-related dependencies
5
6 import logging
7 import threading
8 from multiprocessing.pool import ThreadPool
9 from typing import Tuple
10
11 import gym
12 import numpy as np
13 import tianshou
14 import torch
15 import torch.nn as nn
16 import torch.nn.functional as F
17
18 from gym import spaces
19 from tianshou.data import to_torch
20 from tianshou.env.worker import EnvWorker
21
22 from typing_extensions import TypedDict
23
24 from nni.nas.execution import submit_models, wait_models
25 from nni.nas.execution.common import ModelStatus
26 from .utils import get_targeted_model
27
28
29 _logger = logging.getLogger(__name__)
30 _thread_lock = threading.Lock()
31
32
33 class MultiThreadEnvWorker(EnvWorker):
34 def __init__(self, env_fn):
35 self.env = env_fn()
36 self.pool = ThreadPool(processes=1)
37 super().__init__(env_fn)
38
39 def get_env_attr(self, key):
40 return getattr(self.env, key)
41
42 def set_env_attr(self, key, value):
43 return setattr(self.env, key, value)
44
45 def __getattr__(self, key):
46 if tianshou.__version__ >= '0.4.5': # not a strict check here
47 return super().__getattr__(key) # https://github.com/thu-ml/tianshou/pull/478
48 return getattr(self.env, key)
49
50 def reset(self):
51 return self.env.reset()
52
53 def send(self, action):
54 # for tianshou >= 0.4.6
55 if action is None:
56 self.result = self.pool.apply_async(self.env.reset)
57 else:
58 self.send_action(action)
59
60 def recv(self):
61 # for tianshou >= 0.4.6
62 return self.result.get()
63
64 @staticmethod
65 def wait(*args, **kwargs):
66 raise NotImplementedError('Async collect is not supported yet.')
67
68 def send_action(self, action) -> None:
69 # self.result is actually a handle
70 self.result = self.pool.apply_async(self.env.step, (action,))
71
72 def get_result(self):
73 return self.result.get()
74
75 def seed(self, seed):
76 super().seed(seed)
77 return self.env.seed(seed)
78
79 def render(self, **kwargs):
80 return self.env.render(**kwargs)
81
82 def close_env(self) -> None:
83 self.pool.terminate()
84 return self.env.close()
85
86 class ObservationType(TypedDict):
87 action_history: np.ndarray
88 cur_step: int
89 action_dim: int
90
91
92 class ModelEvaluationEnv(gym.Env[ObservationType, int]):
93 def __init__(self, base_model, mutators, search_space):
94 self.base_model = base_model
95 self.mutators = mutators
96 self.search_space = search_space
97 self.ss_keys = list(self.search_space.keys())
98 self.action_dim = max(map(lambda v: len(v), self.search_space.values()))
99 self.num_steps = len(self.search_space)
100
101 @property
102 def observation_space(self):
103 return spaces.Dict({
104 'action_history': spaces.MultiDiscrete([self.action_dim] * self.num_steps),
105 'cur_step': spaces.Discrete(self.num_steps + 1),
106 'action_dim': spaces.Discrete(self.action_dim + 1)
107 })
108
109 @property
110 def action_space(self):
111 return spaces.Discrete(self.action_dim)
112
113 def reset(self) -> ObservationType:
114 self.action_history = np.zeros(self.num_steps, dtype=np.int32)
115 self.cur_step = 0
116 self.sample = {}
117 return {
118 'action_history': self.action_history,
119 'cur_step': self.cur_step,
120 'action_dim': len(self.search_space[self.ss_keys[self.cur_step]])
121 }
122
123 def step(self, action: int) -> Tuple[ObservationType, float, bool, dict]:
124 cur_key = self.ss_keys[self.cur_step]
125 assert action < len(self.search_space[cur_key]), \
126 f'Current action {action} out of range {self.search_space[cur_key]}.'
127 self.action_history[self.cur_step] = action
128 self.sample[cur_key] = self.search_space[cur_key][action]
129 self.cur_step += 1
130 obs: ObservationType = {
131 'action_history': self.action_history,
132 'cur_step': self.cur_step,
133 'action_dim': len(self.search_space[self.ss_keys[self.cur_step]]) \
134 if self.cur_step < self.num_steps else self.action_dim
135 }
136 if self.cur_step == self.num_steps:
137 with _thread_lock:
138 model = get_targeted_model(self.base_model, self.mutators, self.sample)
139 _logger.info(f'New model created: {self.sample}')
140 submit_models(model)
141 wait_models(model)
142 if model.status == ModelStatus.Failed:
143 return self.reset(), 0., False, {}
144 rew = float(model.metric) # type: ignore
145 _logger.info(f'Model metric received as reward: {rew}')
146 return obs, rew, True, {}
147 else:
148
149 return obs, 0., False, {}
150
151
152 class Preprocessor(nn.Module):
153 def __init__(self, obs_space, hidden_dim=64, num_layers=1):
154 super().__init__()
155 self.action_dim = obs_space['action_history'].nvec[0]
156 self.hidden_dim = hidden_dim
157 # first token is [SOS]
158 self.embedding = nn.Embedding(self.action_dim + 1, hidden_dim)
159 self.rnn = nn.LSTM(hidden_dim, hidden_dim, num_layers, batch_first=True)
160
161 def forward(self, obs):
162 seq = F.pad(obs['action_history'] + 1, (1, 1)) # pad the start token and end token
163 # end token is used to avoid out-of-range of v_s_. Will not actually affect BP.
164 seq = self.embedding(seq.long())
165 feature, _ = self.rnn(seq)
166 return feature[torch.arange(len(feature), device=feature.device), obs['cur_step'].long() + 1]
167
168
169 class Actor(nn.Module):
170 def __init__(self, action_space, preprocess):
171 super().__init__()
172 self.preprocess = preprocess
173 self.action_dim = action_space.n
174 self.linear = nn.Linear(self.preprocess.hidden_dim, self.action_dim)
175
176 def forward(self, obs, **kwargs):
177 obs = to_torch(obs, device=self.linear.weight.device)
178 out = self.linear(self.preprocess(obs))
179 # to take care of choices with different number of options
180 mask = torch.arange(self.action_dim).expand(len(out), self.action_dim) >= obs['action_dim'].unsqueeze(1)
181 out[mask.to(out.device)] = float('-inf')
182 return F.softmax(out, dim=-1), kwargs.get('state', None)
183
184
185 class Critic(nn.Module):
186 def __init__(self, preprocess):
187 super().__init__()
188 self.preprocess = preprocess
189 self.linear = nn.Linear(self.preprocess.hidden_dim, 1)
190
191 def forward(self, obs, **kwargs):
192 obs = to_torch(obs, device=self.linear.weight.device)
193 return self.linear(self.preprocess(obs)).squeeze(-1)
194
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nni/nas/strategy/_rl_impl.py b/nni/nas/strategy/_rl_impl.py
--- a/nni/nas/strategy/_rl_impl.py
+++ b/nni/nas/strategy/_rl_impl.py
@@ -136,8 +136,8 @@
if self.cur_step == self.num_steps:
with _thread_lock:
model = get_targeted_model(self.base_model, self.mutators, self.sample)
- _logger.info(f'New model created: {self.sample}')
- submit_models(model)
+ _logger.info(f'New model created: {self.sample}')
+ submit_models(model)
wait_models(model)
if model.status == ModelStatus.Failed:
return self.reset(), 0., False, {}
|
{"golden_diff": "diff --git a/nni/nas/strategy/_rl_impl.py b/nni/nas/strategy/_rl_impl.py\n--- a/nni/nas/strategy/_rl_impl.py\n+++ b/nni/nas/strategy/_rl_impl.py\n@@ -136,8 +136,8 @@\n if self.cur_step == self.num_steps:\n with _thread_lock:\n model = get_targeted_model(self.base_model, self.mutators, self.sample)\n- _logger.info(f'New model created: {self.sample}')\n- submit_models(model)\n+ _logger.info(f'New model created: {self.sample}')\n+ submit_models(model)\n wait_models(model)\n if model.status == ModelStatus.Failed:\n return self.reset(), 0., False, {}\n", "issue": "Error: Dispatcher stream error, tuner may have crashed. When using PolicyBasedRL to do nas and set concurrency>1\n**Hello guys, i am using PolicyBasedRL strategy to do multi trial NAS experiments, when I set concurrency=2, the tuner crased, and concurrency=1 works just fine. Here is the error message**:\r\n```\r\n[2022-09-08 02:09:28] ERROR: 1\r\nTraceback (most recent call last):\r\n File \"/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni/runtime/msg_dispatcher_base.py\", line 97, in command_queue_worker\r\n self.process_command(command, data)\r\n File \"/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni/runtime/msg_dispatcher_base.py\", line 143, in process_command\r\n command_handlers[command](data)\r\n File \"/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni/nas/execution/common/integration.py\", line 223, in handle_report_metric_data\r\n self.invoke_callback('intermediate_metric', data['parameter_id'], self._process_value(data['value']))\r\n File \"/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni/nas/execution/common/integration.py\", line 119, in invoke_callback\r\n self.process_queued_callbacks()\r\n File \"/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni/nas/execution/common/integration.py\", line 104, in process_queued_callbacks\r\n self.intermediate_metric_callback(*call_args) # pylint: disable=not-callable\r\n File \"/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni/nas/execution/pytorch/graph.py\", line 130, in _intermediate_metric_callback\r\n model = self._running_models[trial_id]\r\nKeyError: 1\r\n[2022-09-08 02:09:29] Dispatcher exiting...\r\n[2022-09-08 02:09:30] Dispatcher terminiated\r\nError: Dispatcher stream error, tuner may have crashed.\r\n at EventEmitter.<anonymous> (/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni_node/core/nnimanager.js:647:32)\r\n at EventEmitter.emit (node:events:526:28)\r\n at WebSocketChannelImpl.handleError (/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:107:22)\r\n at WebSocket.<anonymous> (/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni_node/core/tuner_command_channel/websocket_channel.js:41:37)\r\n at WebSocket.emit (node:events:538:35)\r\n at WebSocket.emitClose (/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:246:10)\r\n at Socket.socketOnClose (/home/liangkai/miniconda3/envs/camp/lib/python3.8/site-packages/nni_node/node_modules/express-ws/node_modules/ws/lib/websocket.js:1127:15)\r\n at Socket.emit (node:events:526:28)\r\n at TCP.<anonymous> (node:net:687:12)\r\n```\r\n\r\n\r\n**Environment**:\r\n- NNI version:2.9/2.8\r\n- Training service (local):\r\n- Client OS: \r\n- Server OS (for remote mode [only):](url)\r\n- Python version: 3.8\r\n- PyTorch/TensorFlow 
version: pytorch 1.21.1\r\n- Is conda/virtualenv/venv used?: conda\r\n- Is running in Docker?: no\r\n\r\n\r\n**Configuration**:\r\n - Experiment config (remember to remove secrets!):\r\n - Search space:\r\n\r\n\r\n**Log message**:\r\n\r\n - nnimanager.log: [nnimanager.log](https://github.com/microsoft/nni/files/9522943/nnimanager.log)\r\n\r\n - dispatcher.log: [dispatcher.log](https://github.com/microsoft/nni/files/9522946/dispatcher.log)\r\n - nnictl stdout and stderr:\r\n \r\n\r\n\r\n<!--\r\nWhere can you find the log files:\r\nLOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director\r\nSTDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout\r\n-->\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\n# This file might cause import error for those who didn't install RL-related dependencies\n\nimport logging\nimport threading\nfrom multiprocessing.pool import ThreadPool\nfrom typing import Tuple\n\nimport gym\nimport numpy as np\nimport tianshou\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom gym import spaces\nfrom tianshou.data import to_torch\nfrom tianshou.env.worker import EnvWorker\n\nfrom typing_extensions import TypedDict\n\nfrom nni.nas.execution import submit_models, wait_models\nfrom nni.nas.execution.common import ModelStatus\nfrom .utils import get_targeted_model\n\n\n_logger = logging.getLogger(__name__)\n_thread_lock = threading.Lock()\n\n\nclass MultiThreadEnvWorker(EnvWorker):\n def __init__(self, env_fn):\n self.env = env_fn()\n self.pool = ThreadPool(processes=1)\n super().__init__(env_fn)\n\n def get_env_attr(self, key):\n return getattr(self.env, key)\n\n def set_env_attr(self, key, value):\n return setattr(self.env, key, value)\n\n def __getattr__(self, key):\n if tianshou.__version__ >= '0.4.5': # not a strict check here\n return super().__getattr__(key) # https://github.com/thu-ml/tianshou/pull/478\n return getattr(self.env, key)\n\n def reset(self):\n return self.env.reset()\n\n def send(self, action):\n # for tianshou >= 0.4.6\n if action is None:\n self.result = self.pool.apply_async(self.env.reset)\n else:\n self.send_action(action)\n\n def recv(self):\n # for tianshou >= 0.4.6\n return self.result.get()\n\n @staticmethod\n def wait(*args, **kwargs):\n raise NotImplementedError('Async collect is not supported yet.')\n\n def send_action(self, action) -> None:\n # self.result is actually a handle\n self.result = self.pool.apply_async(self.env.step, (action,))\n\n def get_result(self):\n return self.result.get()\n\n def seed(self, seed):\n super().seed(seed)\n return self.env.seed(seed)\n\n def render(self, **kwargs):\n return self.env.render(**kwargs)\n\n def close_env(self) -> None:\n self.pool.terminate()\n return self.env.close()\n\nclass ObservationType(TypedDict):\n action_history: np.ndarray\n cur_step: int\n action_dim: int\n\n\nclass ModelEvaluationEnv(gym.Env[ObservationType, int]):\n def __init__(self, base_model, mutators, search_space):\n self.base_model = base_model\n self.mutators = mutators\n self.search_space = search_space\n self.ss_keys = list(self.search_space.keys())\n self.action_dim = max(map(lambda v: len(v), self.search_space.values()))\n self.num_steps = len(self.search_space)\n\n @property\n def observation_space(self):\n return spaces.Dict({\n 'action_history': spaces.MultiDiscrete([self.action_dim] * self.num_steps),\n 'cur_step': 
spaces.Discrete(self.num_steps + 1),\n 'action_dim': spaces.Discrete(self.action_dim + 1)\n })\n\n @property\n def action_space(self):\n return spaces.Discrete(self.action_dim)\n\n def reset(self) -> ObservationType:\n self.action_history = np.zeros(self.num_steps, dtype=np.int32)\n self.cur_step = 0\n self.sample = {}\n return {\n 'action_history': self.action_history,\n 'cur_step': self.cur_step,\n 'action_dim': len(self.search_space[self.ss_keys[self.cur_step]])\n }\n\n def step(self, action: int) -> Tuple[ObservationType, float, bool, dict]:\n cur_key = self.ss_keys[self.cur_step]\n assert action < len(self.search_space[cur_key]), \\\n f'Current action {action} out of range {self.search_space[cur_key]}.'\n self.action_history[self.cur_step] = action\n self.sample[cur_key] = self.search_space[cur_key][action]\n self.cur_step += 1\n obs: ObservationType = {\n 'action_history': self.action_history,\n 'cur_step': self.cur_step,\n 'action_dim': len(self.search_space[self.ss_keys[self.cur_step]]) \\\n if self.cur_step < self.num_steps else self.action_dim\n }\n if self.cur_step == self.num_steps:\n with _thread_lock:\n model = get_targeted_model(self.base_model, self.mutators, self.sample)\n _logger.info(f'New model created: {self.sample}')\n submit_models(model)\n wait_models(model)\n if model.status == ModelStatus.Failed:\n return self.reset(), 0., False, {}\n rew = float(model.metric) # type: ignore\n _logger.info(f'Model metric received as reward: {rew}')\n return obs, rew, True, {}\n else:\n\n return obs, 0., False, {}\n\n\nclass Preprocessor(nn.Module):\n def __init__(self, obs_space, hidden_dim=64, num_layers=1):\n super().__init__()\n self.action_dim = obs_space['action_history'].nvec[0]\n self.hidden_dim = hidden_dim\n # first token is [SOS]\n self.embedding = nn.Embedding(self.action_dim + 1, hidden_dim)\n self.rnn = nn.LSTM(hidden_dim, hidden_dim, num_layers, batch_first=True)\n\n def forward(self, obs):\n seq = F.pad(obs['action_history'] + 1, (1, 1)) # pad the start token and end token\n # end token is used to avoid out-of-range of v_s_. 
Will not actually affect BP.\n seq = self.embedding(seq.long())\n feature, _ = self.rnn(seq)\n return feature[torch.arange(len(feature), device=feature.device), obs['cur_step'].long() + 1]\n\n\nclass Actor(nn.Module):\n def __init__(self, action_space, preprocess):\n super().__init__()\n self.preprocess = preprocess\n self.action_dim = action_space.n\n self.linear = nn.Linear(self.preprocess.hidden_dim, self.action_dim)\n\n def forward(self, obs, **kwargs):\n obs = to_torch(obs, device=self.linear.weight.device)\n out = self.linear(self.preprocess(obs))\n # to take care of choices with different number of options\n mask = torch.arange(self.action_dim).expand(len(out), self.action_dim) >= obs['action_dim'].unsqueeze(1)\n out[mask.to(out.device)] = float('-inf')\n return F.softmax(out, dim=-1), kwargs.get('state', None)\n\n\nclass Critic(nn.Module):\n def __init__(self, preprocess):\n super().__init__()\n self.preprocess = preprocess\n self.linear = nn.Linear(self.preprocess.hidden_dim, 1)\n\n def forward(self, obs, **kwargs):\n obs = to_torch(obs, device=self.linear.weight.device)\n return self.linear(self.preprocess(obs)).squeeze(-1)\n", "path": "nni/nas/strategy/_rl_impl.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# Licensed under the MIT license.\n\n# This file might cause import error for those who didn't install RL-related dependencies\n\nimport logging\nimport threading\nfrom multiprocessing.pool import ThreadPool\nfrom typing import Tuple\n\nimport gym\nimport numpy as np\nimport tianshou\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom gym import spaces\nfrom tianshou.data import to_torch\nfrom tianshou.env.worker import EnvWorker\n\nfrom typing_extensions import TypedDict\n\nfrom nni.nas.execution import submit_models, wait_models\nfrom nni.nas.execution.common import ModelStatus\nfrom .utils import get_targeted_model\n\n\n_logger = logging.getLogger(__name__)\n_thread_lock = threading.Lock()\n\n\nclass MultiThreadEnvWorker(EnvWorker):\n def __init__(self, env_fn):\n self.env = env_fn()\n self.pool = ThreadPool(processes=1)\n super().__init__(env_fn)\n\n def get_env_attr(self, key):\n return getattr(self.env, key)\n\n def set_env_attr(self, key, value):\n return setattr(self.env, key, value)\n\n def __getattr__(self, key):\n if tianshou.__version__ >= '0.4.5': # not a strict check here\n return super().__getattr__(key) # https://github.com/thu-ml/tianshou/pull/478\n return getattr(self.env, key)\n\n def reset(self):\n return self.env.reset()\n\n def send(self, action):\n # for tianshou >= 0.4.6\n if action is None:\n self.result = self.pool.apply_async(self.env.reset)\n else:\n self.send_action(action)\n\n def recv(self):\n # for tianshou >= 0.4.6\n return self.result.get()\n\n @staticmethod\n def wait(*args, **kwargs):\n raise NotImplementedError('Async collect is not supported yet.')\n\n def send_action(self, action) -> None:\n # self.result is actually a handle\n self.result = self.pool.apply_async(self.env.step, (action,))\n\n def get_result(self):\n return self.result.get()\n\n def seed(self, seed):\n super().seed(seed)\n return self.env.seed(seed)\n\n def render(self, **kwargs):\n return self.env.render(**kwargs)\n\n def close_env(self) -> None:\n self.pool.terminate()\n return self.env.close()\n\nclass ObservationType(TypedDict):\n action_history: np.ndarray\n cur_step: int\n action_dim: int\n\n\nclass ModelEvaluationEnv(gym.Env[ObservationType, int]):\n def __init__(self, base_model, mutators, search_space):\n 
self.base_model = base_model\n self.mutators = mutators\n self.search_space = search_space\n self.ss_keys = list(self.search_space.keys())\n self.action_dim = max(map(lambda v: len(v), self.search_space.values()))\n self.num_steps = len(self.search_space)\n\n @property\n def observation_space(self):\n return spaces.Dict({\n 'action_history': spaces.MultiDiscrete([self.action_dim] * self.num_steps),\n 'cur_step': spaces.Discrete(self.num_steps + 1),\n 'action_dim': spaces.Discrete(self.action_dim + 1)\n })\n\n @property\n def action_space(self):\n return spaces.Discrete(self.action_dim)\n\n def reset(self) -> ObservationType:\n self.action_history = np.zeros(self.num_steps, dtype=np.int32)\n self.cur_step = 0\n self.sample = {}\n return {\n 'action_history': self.action_history,\n 'cur_step': self.cur_step,\n 'action_dim': len(self.search_space[self.ss_keys[self.cur_step]])\n }\n\n def step(self, action: int) -> Tuple[ObservationType, float, bool, dict]:\n cur_key = self.ss_keys[self.cur_step]\n assert action < len(self.search_space[cur_key]), \\\n f'Current action {action} out of range {self.search_space[cur_key]}.'\n self.action_history[self.cur_step] = action\n self.sample[cur_key] = self.search_space[cur_key][action]\n self.cur_step += 1\n obs: ObservationType = {\n 'action_history': self.action_history,\n 'cur_step': self.cur_step,\n 'action_dim': len(self.search_space[self.ss_keys[self.cur_step]]) \\\n if self.cur_step < self.num_steps else self.action_dim\n }\n if self.cur_step == self.num_steps:\n with _thread_lock:\n model = get_targeted_model(self.base_model, self.mutators, self.sample)\n _logger.info(f'New model created: {self.sample}')\n submit_models(model)\n wait_models(model)\n if model.status == ModelStatus.Failed:\n return self.reset(), 0., False, {}\n rew = float(model.metric) # type: ignore\n _logger.info(f'Model metric received as reward: {rew}')\n return obs, rew, True, {}\n else:\n\n return obs, 0., False, {}\n\n\nclass Preprocessor(nn.Module):\n def __init__(self, obs_space, hidden_dim=64, num_layers=1):\n super().__init__()\n self.action_dim = obs_space['action_history'].nvec[0]\n self.hidden_dim = hidden_dim\n # first token is [SOS]\n self.embedding = nn.Embedding(self.action_dim + 1, hidden_dim)\n self.rnn = nn.LSTM(hidden_dim, hidden_dim, num_layers, batch_first=True)\n\n def forward(self, obs):\n seq = F.pad(obs['action_history'] + 1, (1, 1)) # pad the start token and end token\n # end token is used to avoid out-of-range of v_s_. 
Will not actually affect BP.\n seq = self.embedding(seq.long())\n feature, _ = self.rnn(seq)\n return feature[torch.arange(len(feature), device=feature.device), obs['cur_step'].long() + 1]\n\n\nclass Actor(nn.Module):\n def __init__(self, action_space, preprocess):\n super().__init__()\n self.preprocess = preprocess\n self.action_dim = action_space.n\n self.linear = nn.Linear(self.preprocess.hidden_dim, self.action_dim)\n\n def forward(self, obs, **kwargs):\n obs = to_torch(obs, device=self.linear.weight.device)\n out = self.linear(self.preprocess(obs))\n # to take care of choices with different number of options\n mask = torch.arange(self.action_dim).expand(len(out), self.action_dim) >= obs['action_dim'].unsqueeze(1)\n out[mask.to(out.device)] = float('-inf')\n return F.softmax(out, dim=-1), kwargs.get('state', None)\n\n\nclass Critic(nn.Module):\n def __init__(self, preprocess):\n super().__init__()\n self.preprocess = preprocess\n self.linear = nn.Linear(self.preprocess.hidden_dim, 1)\n\n def forward(self, obs, **kwargs):\n obs = to_torch(obs, device=self.linear.weight.device)\n return self.linear(self.preprocess(obs)).squeeze(-1)\n", "path": "nni/nas/strategy/_rl_impl.py"}]}
| 3,401 | 173 |
gh_patches_debug_19813
|
rasdani/github-patches
|
git_diff
|
pyro-ppl__pyro-1855
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
LDA tutorial: incorrect tensor shape when document size is larger than the vocabulary size
https://github.com/pyro-ppl/pyro/blob/56c06176763f1676bd586c89b2dc503c64315f0a/examples/lda.py#L101
The filler array of ones should have expanded into the size of the sliced data. This is problematic when the number of words per document is larger than the vocabulary size
Try
`counts.scatter_add_(0, data[:, ind], torch.tensor(1.).expand(data[:,ind].shape))`
instead
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/lda.py`
Content:
```
1 """
2 This example demonstrates how to marginalize out discrete assignment variables
3 in a Pyro model.
4
5 Our example model is Latent Dirichlet Allocation. While the model in this
6 example does work, it is not the recommended way of coding up LDA in Pyro.
7 Whereas the model in this example treats documents as vectors of categorical
8 variables (vectors of word ids), it is usually more efficient to treat
9 documents as bags of words (histograms of word counts).
10 """
11 from __future__ import absolute_import, division, print_function
12
13 import argparse
14 import functools
15 import logging
16
17 import torch
18 from torch import nn
19 from torch.distributions import constraints
20
21 import pyro
22 import pyro.distributions as dist
23 from pyro.infer import SVI, JitTraceEnum_ELBO, TraceEnum_ELBO
24 from pyro.optim import Adam
25
26 logging.basicConfig(format='%(relativeCreated) 9d %(message)s', level=logging.INFO)
27
28
29 # This is a fully generative model of a batch of documents.
30 # data is a [num_words_per_doc, num_documents] shaped array of word ids
31 # (specifically it is not a histogram). We assume in this simple example
32 # that all documents have the same number of words.
33 def model(data=None, args=None, batch_size=None):
34 # Globals.
35 with pyro.plate("topics", args.num_topics):
36 topic_weights = pyro.sample("topic_weights", dist.Gamma(1. / args.num_topics, 1.))
37 topic_words = pyro.sample("topic_words",
38 dist.Dirichlet(torch.ones(args.num_words) / args.num_words))
39
40 # Locals.
41 with pyro.plate("documents", args.num_docs) as ind:
42 if data is not None:
43 with pyro.util.ignore_jit_warnings():
44 assert data.shape == (args.num_words_per_doc, args.num_docs)
45 data = data[:, ind]
46 doc_topics = pyro.sample("doc_topics", dist.Dirichlet(topic_weights))
47 with pyro.plate("words", args.num_words_per_doc):
48 # The word_topics variable is marginalized out during inference,
49 # achieved by specifying infer={"enumerate": "parallel"} and using
50 # TraceEnum_ELBO for inference. Thus we can ignore this variable in
51 # the guide.
52 word_topics = pyro.sample("word_topics", dist.Categorical(doc_topics),
53 infer={"enumerate": "parallel"})
54 data = pyro.sample("doc_words", dist.Categorical(topic_words[word_topics]),
55 obs=data)
56
57 return topic_weights, topic_words, data
58
59
60 # We will use amortized inference of the local topic variables, achieved by a
61 # multi-layer perceptron. We'll wrap the guide in an nn.Module.
62 def make_predictor(args):
63 layer_sizes = ([args.num_words] +
64 [int(s) for s in args.layer_sizes.split('-')] +
65 [args.num_topics])
66 logging.info('Creating MLP with sizes {}'.format(layer_sizes))
67 layers = []
68 for in_size, out_size in zip(layer_sizes, layer_sizes[1:]):
69 layer = nn.Linear(in_size, out_size)
70 layer.weight.data.normal_(0, 0.001)
71 layer.bias.data.normal_(0, 0.001)
72 layers.append(layer)
73 layers.append(nn.Sigmoid())
74 layers.append(nn.Softmax(dim=-1))
75 return nn.Sequential(*layers)
76
77
78 def parametrized_guide(predictor, data, args, batch_size=None):
79 # Use a conjugate guide for global variables.
80 topic_weights_posterior = pyro.param(
81 "topic_weights_posterior",
82 lambda: torch.ones(args.num_topics),
83 constraint=constraints.positive)
84 topic_words_posterior = pyro.param(
85 "topic_words_posterior",
86 lambda: torch.ones(args.num_topics, args.num_words),
87 constraint=constraints.greater_than(0.5))
88 with pyro.plate("topics", args.num_topics):
89 pyro.sample("topic_weights", dist.Gamma(topic_weights_posterior, 1.))
90 pyro.sample("topic_words", dist.Dirichlet(topic_words_posterior))
91
92 # Use an amortized guide for local variables.
93 pyro.module("predictor", predictor)
94 with pyro.plate("documents", args.num_docs, batch_size) as ind:
95 # The neural network will operate on histograms rather than word
96 # index vectors, so we'll convert the raw data to a histogram.
97 if torch._C._get_tracing_state():
98 counts = torch.eye(1024)[data[:, ind]].sum(0).t()
99 else:
100 counts = torch.zeros(args.num_words, ind.size(0))
101 counts.scatter_add_(0, data[:, ind], torch.tensor(1.).expand(counts.shape))
102 doc_topics = predictor(counts.transpose(0, 1))
103 pyro.sample("doc_topics", dist.Delta(doc_topics, event_dim=1))
104
105
106 def main(args):
107 logging.info('Generating data')
108 pyro.set_rng_seed(0)
109 pyro.clear_param_store()
110 pyro.enable_validation(True)
111
112 # We can generate synthetic data directly by calling the model.
113 true_topic_weights, true_topic_words, data = model(args=args)
114
115 # We'll train using SVI.
116 logging.info('-' * 40)
117 logging.info('Training on {} documents'.format(args.num_docs))
118 predictor = make_predictor(args)
119 guide = functools.partial(parametrized_guide, predictor)
120 Elbo = JitTraceEnum_ELBO if args.jit else TraceEnum_ELBO
121 elbo = Elbo(max_plate_nesting=2)
122 optim = Adam({'lr': args.learning_rate})
123 svi = SVI(model, guide, optim, elbo)
124 logging.info('Step\tLoss')
125 for step in range(args.num_steps):
126 loss = svi.step(data, args=args, batch_size=args.batch_size)
127 if step % 10 == 0:
128 logging.info('{: >5d}\t{}'.format(step, loss))
129 loss = elbo.loss(model, guide, data, args=args)
130 logging.info('final loss = {}'.format(loss))
131
132
133 if __name__ == '__main__':
134 assert pyro.__version__.startswith('0.3.3')
135 parser = argparse.ArgumentParser(description="Amortized Latent Dirichlet Allocation")
136 parser.add_argument("-t", "--num-topics", default=8, type=int)
137 parser.add_argument("-w", "--num-words", default=1024, type=int)
138 parser.add_argument("-d", "--num-docs", default=1000, type=int)
139 parser.add_argument("-wd", "--num-words-per-doc", default=64, type=int)
140 parser.add_argument("-n", "--num-steps", default=1000, type=int)
141 parser.add_argument("-l", "--layer-sizes", default="100-100")
142 parser.add_argument("-lr", "--learning-rate", default=0.001, type=float)
143 parser.add_argument("-b", "--batch-size", default=32, type=int)
144 parser.add_argument('--jit', action='store_true')
145 args = parser.parse_args()
146 main(args)
147
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/lda.py b/examples/lda.py
--- a/examples/lda.py
+++ b/examples/lda.py
@@ -92,13 +92,11 @@
# Use an amortized guide for local variables.
pyro.module("predictor", predictor)
with pyro.plate("documents", args.num_docs, batch_size) as ind:
+ data = data[:, ind]
# The neural network will operate on histograms rather than word
# index vectors, so we'll convert the raw data to a histogram.
- if torch._C._get_tracing_state():
- counts = torch.eye(1024)[data[:, ind]].sum(0).t()
- else:
- counts = torch.zeros(args.num_words, ind.size(0))
- counts.scatter_add_(0, data[:, ind], torch.tensor(1.).expand(counts.shape))
+ counts = (torch.zeros(args.num_words, ind.size(0))
+ .scatter_add(0, data, torch.ones(data.shape)))
doc_topics = predictor(counts.transpose(0, 1))
pyro.sample("doc_topics", dist.Delta(doc_topics, event_dim=1))
|
{"golden_diff": "diff --git a/examples/lda.py b/examples/lda.py\n--- a/examples/lda.py\n+++ b/examples/lda.py\n@@ -92,13 +92,11 @@\n # Use an amortized guide for local variables.\n pyro.module(\"predictor\", predictor)\n with pyro.plate(\"documents\", args.num_docs, batch_size) as ind:\n+ data = data[:, ind]\n # The neural network will operate on histograms rather than word\n # index vectors, so we'll convert the raw data to a histogram.\n- if torch._C._get_tracing_state():\n- counts = torch.eye(1024)[data[:, ind]].sum(0).t()\n- else:\n- counts = torch.zeros(args.num_words, ind.size(0))\n- counts.scatter_add_(0, data[:, ind], torch.tensor(1.).expand(counts.shape))\n+ counts = (torch.zeros(args.num_words, ind.size(0))\n+ .scatter_add(0, data, torch.ones(data.shape)))\n doc_topics = predictor(counts.transpose(0, 1))\n pyro.sample(\"doc_topics\", dist.Delta(doc_topics, event_dim=1))\n", "issue": "LDA tutorial: incorrect tensor shape when document size is larger than the vocabulary size\nhttps://github.com/pyro-ppl/pyro/blob/56c06176763f1676bd586c89b2dc503c64315f0a/examples/lda.py#L101\r\n\r\nThe filler array of ones should have expanded into the size of the sliced data. This is problematic when the number of words per document is larger than the vocabulary size\r\nTry \r\n`counts.scatter_add_(0, data[:, ind], torch.tensor(1.).expand(data[:,ind].shape))`\r\ninstead\n", "before_files": [{"content": "\"\"\"\nThis example demonstrates how to marginalize out discrete assignment variables\nin a Pyro model.\n\nOur example model is Latent Dirichlet Allocation. While the model in this\nexample does work, it is not the recommended way of coding up LDA in Pyro.\nWhereas the model in this example treats documents as vectors of categorical\nvariables (vectors of word ids), it is usually more efficient to treat\ndocuments as bags of words (histograms of word counts).\n\"\"\"\nfrom __future__ import absolute_import, division, print_function\n\nimport argparse\nimport functools\nimport logging\n\nimport torch\nfrom torch import nn\nfrom torch.distributions import constraints\n\nimport pyro\nimport pyro.distributions as dist\nfrom pyro.infer import SVI, JitTraceEnum_ELBO, TraceEnum_ELBO\nfrom pyro.optim import Adam\n\nlogging.basicConfig(format='%(relativeCreated) 9d %(message)s', level=logging.INFO)\n\n\n# This is a fully generative model of a batch of documents.\n# data is a [num_words_per_doc, num_documents] shaped array of word ids\n# (specifically it is not a histogram). We assume in this simple example\n# that all documents have the same number of words.\ndef model(data=None, args=None, batch_size=None):\n # Globals.\n with pyro.plate(\"topics\", args.num_topics):\n topic_weights = pyro.sample(\"topic_weights\", dist.Gamma(1. / args.num_topics, 1.))\n topic_words = pyro.sample(\"topic_words\",\n dist.Dirichlet(torch.ones(args.num_words) / args.num_words))\n\n # Locals.\n with pyro.plate(\"documents\", args.num_docs) as ind:\n if data is not None:\n with pyro.util.ignore_jit_warnings():\n assert data.shape == (args.num_words_per_doc, args.num_docs)\n data = data[:, ind]\n doc_topics = pyro.sample(\"doc_topics\", dist.Dirichlet(topic_weights))\n with pyro.plate(\"words\", args.num_words_per_doc):\n # The word_topics variable is marginalized out during inference,\n # achieved by specifying infer={\"enumerate\": \"parallel\"} and using\n # TraceEnum_ELBO for inference. 
Thus we can ignore this variable in\n # the guide.\n word_topics = pyro.sample(\"word_topics\", dist.Categorical(doc_topics),\n infer={\"enumerate\": \"parallel\"})\n data = pyro.sample(\"doc_words\", dist.Categorical(topic_words[word_topics]),\n obs=data)\n\n return topic_weights, topic_words, data\n\n\n# We will use amortized inference of the local topic variables, achieved by a\n# multi-layer perceptron. We'll wrap the guide in an nn.Module.\ndef make_predictor(args):\n layer_sizes = ([args.num_words] +\n [int(s) for s in args.layer_sizes.split('-')] +\n [args.num_topics])\n logging.info('Creating MLP with sizes {}'.format(layer_sizes))\n layers = []\n for in_size, out_size in zip(layer_sizes, layer_sizes[1:]):\n layer = nn.Linear(in_size, out_size)\n layer.weight.data.normal_(0, 0.001)\n layer.bias.data.normal_(0, 0.001)\n layers.append(layer)\n layers.append(nn.Sigmoid())\n layers.append(nn.Softmax(dim=-1))\n return nn.Sequential(*layers)\n\n\ndef parametrized_guide(predictor, data, args, batch_size=None):\n # Use a conjugate guide for global variables.\n topic_weights_posterior = pyro.param(\n \"topic_weights_posterior\",\n lambda: torch.ones(args.num_topics),\n constraint=constraints.positive)\n topic_words_posterior = pyro.param(\n \"topic_words_posterior\",\n lambda: torch.ones(args.num_topics, args.num_words),\n constraint=constraints.greater_than(0.5))\n with pyro.plate(\"topics\", args.num_topics):\n pyro.sample(\"topic_weights\", dist.Gamma(topic_weights_posterior, 1.))\n pyro.sample(\"topic_words\", dist.Dirichlet(topic_words_posterior))\n\n # Use an amortized guide for local variables.\n pyro.module(\"predictor\", predictor)\n with pyro.plate(\"documents\", args.num_docs, batch_size) as ind:\n # The neural network will operate on histograms rather than word\n # index vectors, so we'll convert the raw data to a histogram.\n if torch._C._get_tracing_state():\n counts = torch.eye(1024)[data[:, ind]].sum(0).t()\n else:\n counts = torch.zeros(args.num_words, ind.size(0))\n counts.scatter_add_(0, data[:, ind], torch.tensor(1.).expand(counts.shape))\n doc_topics = predictor(counts.transpose(0, 1))\n pyro.sample(\"doc_topics\", dist.Delta(doc_topics, event_dim=1))\n\n\ndef main(args):\n logging.info('Generating data')\n pyro.set_rng_seed(0)\n pyro.clear_param_store()\n pyro.enable_validation(True)\n\n # We can generate synthetic data directly by calling the model.\n true_topic_weights, true_topic_words, data = model(args=args)\n\n # We'll train using SVI.\n logging.info('-' * 40)\n logging.info('Training on {} documents'.format(args.num_docs))\n predictor = make_predictor(args)\n guide = functools.partial(parametrized_guide, predictor)\n Elbo = JitTraceEnum_ELBO if args.jit else TraceEnum_ELBO\n elbo = Elbo(max_plate_nesting=2)\n optim = Adam({'lr': args.learning_rate})\n svi = SVI(model, guide, optim, elbo)\n logging.info('Step\\tLoss')\n for step in range(args.num_steps):\n loss = svi.step(data, args=args, batch_size=args.batch_size)\n if step % 10 == 0:\n logging.info('{: >5d}\\t{}'.format(step, loss))\n loss = elbo.loss(model, guide, data, args=args)\n logging.info('final loss = {}'.format(loss))\n\n\nif __name__ == '__main__':\n assert pyro.__version__.startswith('0.3.3')\n parser = argparse.ArgumentParser(description=\"Amortized Latent Dirichlet Allocation\")\n parser.add_argument(\"-t\", \"--num-topics\", default=8, type=int)\n parser.add_argument(\"-w\", \"--num-words\", default=1024, type=int)\n parser.add_argument(\"-d\", \"--num-docs\", default=1000, type=int)\n 
parser.add_argument(\"-wd\", \"--num-words-per-doc\", default=64, type=int)\n parser.add_argument(\"-n\", \"--num-steps\", default=1000, type=int)\n parser.add_argument(\"-l\", \"--layer-sizes\", default=\"100-100\")\n parser.add_argument(\"-lr\", \"--learning-rate\", default=0.001, type=float)\n parser.add_argument(\"-b\", \"--batch-size\", default=32, type=int)\n parser.add_argument('--jit', action='store_true')\n args = parser.parse_args()\n main(args)\n", "path": "examples/lda.py"}], "after_files": [{"content": "\"\"\"\nThis example demonstrates how to marginalize out discrete assignment variables\nin a Pyro model.\n\nOur example model is Latent Dirichlet Allocation. While the model in this\nexample does work, it is not the recommended way of coding up LDA in Pyro.\nWhereas the model in this example treats documents as vectors of categorical\nvariables (vectors of word ids), it is usually more efficient to treat\ndocuments as bags of words (histograms of word counts).\n\"\"\"\nfrom __future__ import absolute_import, division, print_function\n\nimport argparse\nimport functools\nimport logging\n\nimport torch\nfrom torch import nn\nfrom torch.distributions import constraints\n\nimport pyro\nimport pyro.distributions as dist\nfrom pyro.infer import SVI, JitTraceEnum_ELBO, TraceEnum_ELBO\nfrom pyro.optim import Adam\n\nlogging.basicConfig(format='%(relativeCreated) 9d %(message)s', level=logging.INFO)\n\n\n# This is a fully generative model of a batch of documents.\n# data is a [num_words_per_doc, num_documents] shaped array of word ids\n# (specifically it is not a histogram). We assume in this simple example\n# that all documents have the same number of words.\ndef model(data=None, args=None, batch_size=None):\n # Globals.\n with pyro.plate(\"topics\", args.num_topics):\n topic_weights = pyro.sample(\"topic_weights\", dist.Gamma(1. / args.num_topics, 1.))\n topic_words = pyro.sample(\"topic_words\",\n dist.Dirichlet(torch.ones(args.num_words) / args.num_words))\n\n # Locals.\n with pyro.plate(\"documents\", args.num_docs) as ind:\n if data is not None:\n with pyro.util.ignore_jit_warnings():\n assert data.shape == (args.num_words_per_doc, args.num_docs)\n data = data[:, ind]\n doc_topics = pyro.sample(\"doc_topics\", dist.Dirichlet(topic_weights))\n with pyro.plate(\"words\", args.num_words_per_doc):\n # The word_topics variable is marginalized out during inference,\n # achieved by specifying infer={\"enumerate\": \"parallel\"} and using\n # TraceEnum_ELBO for inference. Thus we can ignore this variable in\n # the guide.\n word_topics = pyro.sample(\"word_topics\", dist.Categorical(doc_topics),\n infer={\"enumerate\": \"parallel\"})\n data = pyro.sample(\"doc_words\", dist.Categorical(topic_words[word_topics]),\n obs=data)\n\n return topic_weights, topic_words, data\n\n\n# We will use amortized inference of the local topic variables, achieved by a\n# multi-layer perceptron. 
We'll wrap the guide in an nn.Module.\ndef make_predictor(args):\n layer_sizes = ([args.num_words] +\n [int(s) for s in args.layer_sizes.split('-')] +\n [args.num_topics])\n logging.info('Creating MLP with sizes {}'.format(layer_sizes))\n layers = []\n for in_size, out_size in zip(layer_sizes, layer_sizes[1:]):\n layer = nn.Linear(in_size, out_size)\n layer.weight.data.normal_(0, 0.001)\n layer.bias.data.normal_(0, 0.001)\n layers.append(layer)\n layers.append(nn.Sigmoid())\n layers.append(nn.Softmax(dim=-1))\n return nn.Sequential(*layers)\n\n\ndef parametrized_guide(predictor, data, args, batch_size=None):\n # Use a conjugate guide for global variables.\n topic_weights_posterior = pyro.param(\n \"topic_weights_posterior\",\n lambda: torch.ones(args.num_topics),\n constraint=constraints.positive)\n topic_words_posterior = pyro.param(\n \"topic_words_posterior\",\n lambda: torch.ones(args.num_topics, args.num_words),\n constraint=constraints.greater_than(0.5))\n with pyro.plate(\"topics\", args.num_topics):\n pyro.sample(\"topic_weights\", dist.Gamma(topic_weights_posterior, 1.))\n pyro.sample(\"topic_words\", dist.Dirichlet(topic_words_posterior))\n\n # Use an amortized guide for local variables.\n pyro.module(\"predictor\", predictor)\n with pyro.plate(\"documents\", args.num_docs, batch_size) as ind:\n data = data[:, ind]\n # The neural network will operate on histograms rather than word\n # index vectors, so we'll convert the raw data to a histogram.\n counts = (torch.zeros(args.num_words, ind.size(0))\n .scatter_add(0, data, torch.ones(data.shape)))\n doc_topics = predictor(counts.transpose(0, 1))\n pyro.sample(\"doc_topics\", dist.Delta(doc_topics, event_dim=1))\n\n\ndef main(args):\n logging.info('Generating data')\n pyro.set_rng_seed(0)\n pyro.clear_param_store()\n pyro.enable_validation(True)\n\n # We can generate synthetic data directly by calling the model.\n true_topic_weights, true_topic_words, data = model(args=args)\n\n # We'll train using SVI.\n logging.info('-' * 40)\n logging.info('Training on {} documents'.format(args.num_docs))\n predictor = make_predictor(args)\n guide = functools.partial(parametrized_guide, predictor)\n Elbo = JitTraceEnum_ELBO if args.jit else TraceEnum_ELBO\n elbo = Elbo(max_plate_nesting=2)\n optim = Adam({'lr': args.learning_rate})\n svi = SVI(model, guide, optim, elbo)\n logging.info('Step\\tLoss')\n for step in range(args.num_steps):\n loss = svi.step(data, args=args, batch_size=args.batch_size)\n if step % 10 == 0:\n logging.info('{: >5d}\\t{}'.format(step, loss))\n loss = elbo.loss(model, guide, data, args=args)\n logging.info('final loss = {}'.format(loss))\n\n\nif __name__ == '__main__':\n assert pyro.__version__.startswith('0.3.3')\n parser = argparse.ArgumentParser(description=\"Amortized Latent Dirichlet Allocation\")\n parser.add_argument(\"-t\", \"--num-topics\", default=8, type=int)\n parser.add_argument(\"-w\", \"--num-words\", default=1024, type=int)\n parser.add_argument(\"-d\", \"--num-docs\", default=1000, type=int)\n parser.add_argument(\"-wd\", \"--num-words-per-doc\", default=64, type=int)\n parser.add_argument(\"-n\", \"--num-steps\", default=1000, type=int)\n parser.add_argument(\"-l\", \"--layer-sizes\", default=\"100-100\")\n parser.add_argument(\"-lr\", \"--learning-rate\", default=0.001, type=float)\n parser.add_argument(\"-b\", \"--batch-size\", default=32, type=int)\n parser.add_argument('--jit', action='store_true')\n args = parser.parse_args()\n main(args)\n", "path": "examples/lda.py"}]}
| 2,279 | 258 |
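A quick way to see why the golden diff above swaps the `expand(counts.shape)` construction for `torch.ones(data.shape)`: `scatter_add` only requires the source tensor to match the shape of the *index* tensor, so the histogram stays valid even when a document contains more word tokens than the vocabulary has entries, which is exactly the case reported in the issue. The sketch below is illustrative only; PyTorch is assumed to be installed, and the tiny sizes are invented for the example rather than taken from the original script.

```python
# Illustrative sketch of the fixed histogram construction from parametrized_guide.
# The sizes are deliberately tiny, with more words per document than vocabulary
# entries, which is the case that broke the old expand(counts.shape) variant.
import torch

num_words, num_docs, words_per_doc = 8, 3, 64   # words_per_doc > num_words on purpose
data = torch.randint(0, num_words, (words_per_doc, num_docs))  # word ids per document

# The source tensor (torch.ones) matches the index shape, not the histogram shape,
# so the scatter_add is well-formed no matter how the two sizes compare.
counts = (torch.zeros(num_words, num_docs)
          .scatter_add(0, data, torch.ones(data.shape)))

assert counts.shape == (num_words, num_docs)
assert int(counts.sum()) == words_per_doc * num_docs  # every token counted exactly once
```

The patch also drops the hard-coded `torch.eye(1024)` tracing branch in favour of this single expression, so the guide no longer special-cases `torch._C._get_tracing_state()`.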
gh_patches_debug_34766
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-9461
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
bokeh.palettes refguide page header mis-renders
Add lots of space, changes font size, for some reason (just this page)
https://docs.bokeh.org/en/latest/docs/reference/palettes.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/sphinxext/bokeh_palette_group.py`
Content:
```
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.
3 # All rights reserved.
4 #
5 # The full license is in the file LICENSE.txt, distributed with this software.
6 #-----------------------------------------------------------------------------
7 ''' Generate visual representations of palettes in Bokeh palette groups.
8
9 The ``bokeh.palettes`` modules expose attributes such as ``mpl``, ``brewer``,
10 and ``d3`` that provide groups of palettes. The ``bokeh-palette-group``
11 directive accepts the name of one of these groups, and generates a visual
12 matrix of colors for every palette in the group.
13
14 As an example, the following usage of the the directive:
15
16 .. code-block:: rest
17
18 .. bokeh-palette-group:: mpl
19
20 Generates the output:
21
22 .. bokeh-palette-group:: mpl
23
24 '''
25
26 #-----------------------------------------------------------------------------
27 # Boilerplate
28 #-----------------------------------------------------------------------------
29 import logging # isort:skip
30 log = logging.getLogger(__name__)
31
32 #-----------------------------------------------------------------------------
33 # Imports
34 #-----------------------------------------------------------------------------
35
36 # External imports
37 from docutils import nodes
38 from docutils.parsers.rst import Directive
39 from sphinx.errors import SphinxError
40
41 # Bokeh imports
42 from .. import palettes as bp
43 from .templates import PALETTE_GROUP_DETAIL
44
45 #-----------------------------------------------------------------------------
46 # Globals and constants
47 #-----------------------------------------------------------------------------
48
49 __all__ = (
50 'bokeh_palette_group',
51 'BokehPaletteGroupDirective',
52 'html_visit_bokeh_palette_group',
53 'setup',
54 )
55
56 #-----------------------------------------------------------------------------
57 # General API
58 #-----------------------------------------------------------------------------
59
60 #-----------------------------------------------------------------------------
61 # Dev API
62 #-----------------------------------------------------------------------------
63
64 class bokeh_palette_group(nodes.General, nodes.Element):
65 pass
66
67
68 class BokehPaletteGroupDirective(Directive):
69
70 has_content = False
71 required_arguments = 1
72
73 def run(self):
74 node = bokeh_palette_group()
75 node['group'] = self.arguments[0]
76 return [node]
77
78 def html_visit_bokeh_palette_group(self, node):
79 self.body.append(_BOOTSTRAP_CSS)
80 self.body.append('<div class="container-fluid"><div class="row">"')
81 group = getattr(bp, node['group'], None)
82 if not isinstance(group, dict):
83 raise SphinxError("invalid palette group name %r" % node['group'])
84 names = sorted(group)
85 for name in names:
86 palettes = group[name]
87 # arbitrary cuttoff here, idea is to not show large (e.g 256 length) palettes
88 numbers = [x for x in sorted(palettes) if x < 30]
89 html = PALETTE_GROUP_DETAIL.render(name=name, numbers=numbers, palettes=palettes)
90 self.body.append(html)
91 self.body.append('</div></div>')
92 self.body.append(_BOOTSTRAP_JS)
93 raise nodes.SkipNode
94
95 def setup(app):
96 ''' Required Sphinx extension setup function. '''
97 app.add_node(bokeh_palette_group, html=(html_visit_bokeh_palette_group, None))
98 app.add_directive('bokeh-palette-group', BokehPaletteGroupDirective)
99
100 #-----------------------------------------------------------------------------
101 # Private API
102 #-----------------------------------------------------------------------------
103
104 _BOOTSTRAP_CSS = """
105 <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">
106 """
107
108 _BOOTSTRAP_JS = """
109 <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
110 <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
111 """
112
113 #-----------------------------------------------------------------------------
114 # Code
115 #-----------------------------------------------------------------------------
116
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bokeh/sphinxext/bokeh_palette_group.py b/bokeh/sphinxext/bokeh_palette_group.py
--- a/bokeh/sphinxext/bokeh_palette_group.py
+++ b/bokeh/sphinxext/bokeh_palette_group.py
@@ -21,6 +21,11 @@
.. bokeh-palette-group:: mpl
+.. note::
+ This extension assumes both Bootstrap and JQuery are present (which is the
+ case for the Bokeh documentation theme). If using this theme outside the
+ Bokeh documentation, be sure to include those resources by hand.
+
'''
#-----------------------------------------------------------------------------
@@ -75,8 +80,9 @@
node['group'] = self.arguments[0]
return [node]
+# NOTE: This extension now *assumes* both Bootstrap and JQuery are present
+# (which is now the case for the Bokeh docs theme).
def html_visit_bokeh_palette_group(self, node):
- self.body.append(_BOOTSTRAP_CSS)
self.body.append('<div class="container-fluid"><div class="row">"')
group = getattr(bp, node['group'], None)
if not isinstance(group, dict):
@@ -89,7 +95,6 @@
html = PALETTE_GROUP_DETAIL.render(name=name, numbers=numbers, palettes=palettes)
self.body.append(html)
self.body.append('</div></div>')
- self.body.append(_BOOTSTRAP_JS)
raise nodes.SkipNode
def setup(app):
@@ -101,15 +106,6 @@
# Private API
#-----------------------------------------------------------------------------
-_BOOTSTRAP_CSS = """
-<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">
-"""
-
-_BOOTSTRAP_JS = """
-<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
-<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
-"""
-
#-----------------------------------------------------------------------------
# Code
#-----------------------------------------------------------------------------
|
{"golden_diff": "diff --git a/bokeh/sphinxext/bokeh_palette_group.py b/bokeh/sphinxext/bokeh_palette_group.py\n--- a/bokeh/sphinxext/bokeh_palette_group.py\n+++ b/bokeh/sphinxext/bokeh_palette_group.py\n@@ -21,6 +21,11 @@\n \n .. bokeh-palette-group:: mpl\n \n+.. note::\n+ This extension assumes both Bootstrap and JQuery are present (which is the\n+ case for the Bokeh documentation theme). If using this theme outside the\n+ Bokeh documentation, be sure to include those resources by hand.\n+\n '''\n \n #-----------------------------------------------------------------------------\n@@ -75,8 +80,9 @@\n node['group'] = self.arguments[0]\n return [node]\n \n+# NOTE: This extension now *assumes* both Bootstrap and JQuery are present\n+# (which is now the case for the Bokeh docs theme).\n def html_visit_bokeh_palette_group(self, node):\n- self.body.append(_BOOTSTRAP_CSS)\n self.body.append('<div class=\"container-fluid\"><div class=\"row\">\"')\n group = getattr(bp, node['group'], None)\n if not isinstance(group, dict):\n@@ -89,7 +95,6 @@\n html = PALETTE_GROUP_DETAIL.render(name=name, numbers=numbers, palettes=palettes)\n self.body.append(html)\n self.body.append('</div></div>')\n- self.body.append(_BOOTSTRAP_JS)\n raise nodes.SkipNode\n \n def setup(app):\n@@ -101,15 +106,6 @@\n # Private API\n #-----------------------------------------------------------------------------\n \n-_BOOTSTRAP_CSS = \"\"\"\n-<link rel=\"stylesheet\" href=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css\">\n-\"\"\"\n-\n-_BOOTSTRAP_JS = \"\"\"\n-<script src=\"https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js\"></script>\n-<script src=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js\"></script>\n-\"\"\"\n-\n #-----------------------------------------------------------------------------\n # Code\n #-----------------------------------------------------------------------------\n", "issue": "bokeh.palettes refguide page header mis-renders\nAdd lots of space, changes font size, for some reason (just this page)\r\n\r\nhttps://docs.bokeh.org/en/latest/docs/reference/palettes.html\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Generate visual representations of palettes in Bokeh palette groups.\n\nThe ``bokeh.palettes`` modules expose attributes such as ``mpl``, ``brewer``,\nand ``d3`` that provide groups of palettes. The ``bokeh-palette-group``\ndirective accepts the name of one of these groups, and generates a visual\nmatrix of colors for every palette in the group.\n\nAs an example, the following usage of the the directive:\n\n.. code-block:: rest\n\n .. bokeh-palette-group:: mpl\n\nGenerates the output:\n\n .. 
bokeh-palette-group:: mpl\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# External imports\nfrom docutils import nodes\nfrom docutils.parsers.rst import Directive\nfrom sphinx.errors import SphinxError\n\n# Bokeh imports\nfrom .. import palettes as bp\nfrom .templates import PALETTE_GROUP_DETAIL\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'bokeh_palette_group',\n 'BokehPaletteGroupDirective',\n 'html_visit_bokeh_palette_group',\n 'setup',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\nclass bokeh_palette_group(nodes.General, nodes.Element):\n pass\n\n\nclass BokehPaletteGroupDirective(Directive):\n\n has_content = False\n required_arguments = 1\n\n def run(self):\n node = bokeh_palette_group()\n node['group'] = self.arguments[0]\n return [node]\n\ndef html_visit_bokeh_palette_group(self, node):\n self.body.append(_BOOTSTRAP_CSS)\n self.body.append('<div class=\"container-fluid\"><div class=\"row\">\"')\n group = getattr(bp, node['group'], None)\n if not isinstance(group, dict):\n raise SphinxError(\"invalid palette group name %r\" % node['group'])\n names = sorted(group)\n for name in names:\n palettes = group[name]\n # arbitrary cuttoff here, idea is to not show large (e.g 256 length) palettes\n numbers = [x for x in sorted(palettes) if x < 30]\n html = PALETTE_GROUP_DETAIL.render(name=name, numbers=numbers, palettes=palettes)\n self.body.append(html)\n self.body.append('</div></div>')\n self.body.append(_BOOTSTRAP_JS)\n raise nodes.SkipNode\n\ndef setup(app):\n ''' Required Sphinx extension setup function. 
'''\n app.add_node(bokeh_palette_group, html=(html_visit_bokeh_palette_group, None))\n app.add_directive('bokeh-palette-group', BokehPaletteGroupDirective)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n_BOOTSTRAP_CSS = \"\"\"\n<link rel=\"stylesheet\" href=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css\">\n\"\"\"\n\n_BOOTSTRAP_JS = \"\"\"\n<script src=\"https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js\"></script>\n<script src=\"https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js\"></script>\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/sphinxext/bokeh_palette_group.py"}], "after_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2012 - 2019, Anaconda, Inc., and Bokeh Contributors.\n# All rights reserved.\n#\n# The full license is in the file LICENSE.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n''' Generate visual representations of palettes in Bokeh palette groups.\n\nThe ``bokeh.palettes`` modules expose attributes such as ``mpl``, ``brewer``,\nand ``d3`` that provide groups of palettes. The ``bokeh-palette-group``\ndirective accepts the name of one of these groups, and generates a visual\nmatrix of colors for every palette in the group.\n\nAs an example, the following usage of the the directive:\n\n.. code-block:: rest\n\n .. bokeh-palette-group:: mpl\n\nGenerates the output:\n\n .. bokeh-palette-group:: mpl\n\n.. note::\n This extension assumes both Bootstrap and JQuery are present (which is the\n case for the Bokeh documentation theme). If using this theme outside the\n Bokeh documentation, be sure to include those resources by hand.\n\n'''\n\n#-----------------------------------------------------------------------------\n# Boilerplate\n#-----------------------------------------------------------------------------\nimport logging # isort:skip\nlog = logging.getLogger(__name__)\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# External imports\nfrom docutils import nodes\nfrom docutils.parsers.rst import Directive\nfrom sphinx.errors import SphinxError\n\n# Bokeh imports\nfrom .. 
import palettes as bp\nfrom .templates import PALETTE_GROUP_DETAIL\n\n#-----------------------------------------------------------------------------\n# Globals and constants\n#-----------------------------------------------------------------------------\n\n__all__ = (\n 'bokeh_palette_group',\n 'BokehPaletteGroupDirective',\n 'html_visit_bokeh_palette_group',\n 'setup',\n)\n\n#-----------------------------------------------------------------------------\n# General API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Dev API\n#-----------------------------------------------------------------------------\n\nclass bokeh_palette_group(nodes.General, nodes.Element):\n pass\n\n\nclass BokehPaletteGroupDirective(Directive):\n\n has_content = False\n required_arguments = 1\n\n def run(self):\n node = bokeh_palette_group()\n node['group'] = self.arguments[0]\n return [node]\n\n# NOTE: This extension now *assumes* both Bootstrap and JQuery are present\n# (which is now the case for the Bokeh docs theme).\ndef html_visit_bokeh_palette_group(self, node):\n self.body.append('<div class=\"container-fluid\"><div class=\"row\">\"')\n group = getattr(bp, node['group'], None)\n if not isinstance(group, dict):\n raise SphinxError(\"invalid palette group name %r\" % node['group'])\n names = sorted(group)\n for name in names:\n palettes = group[name]\n # arbitrary cuttoff here, idea is to not show large (e.g 256 length) palettes\n numbers = [x for x in sorted(palettes) if x < 30]\n html = PALETTE_GROUP_DETAIL.render(name=name, numbers=numbers, palettes=palettes)\n self.body.append(html)\n self.body.append('</div></div>')\n raise nodes.SkipNode\n\ndef setup(app):\n ''' Required Sphinx extension setup function. '''\n app.add_node(bokeh_palette_group, html=(html_visit_bokeh_palette_group, None))\n app.add_directive('bokeh-palette-group', BokehPaletteGroupDirective)\n\n#-----------------------------------------------------------------------------\n# Private API\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Code\n#-----------------------------------------------------------------------------\n", "path": "bokeh/sphinxext/bokeh_palette_group.py"}]}
| 1,319 | 459 |
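Because the golden diff above removes the `_BOOTSTRAP_CSS`/`_BOOTSTRAP_JS` injection, a project that uses `bokeh-palette-group` outside the Bokeh documentation theme now has to supply Bootstrap and JQuery itself, which is what the note added to the module docstring warns about. A minimal, hypothetical `conf.py` fragment for that situation could look like the sketch below; the CDN URLs are simply the ones the removed strings hard-coded, and any equivalent local or newer assets would do.

```python
# Hypothetical Sphinx conf.py fragment: re-adding the assets the extension no longer injects.
# Sphinx's add_css_file/add_js_file accept absolute URLs, so the old CDN links can be reused.
def setup(app):
    app.add_css_file("https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css")
    app.add_js_file("https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js")
    app.add_js_file("https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js")
```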
gh_patches_debug_1831
|
rasdani/github-patches
|
git_diff
|
huggingface__accelerate-811
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error when pickling accelerated optimizers with PyTorch 1.13
### System Info
```Shell
$ pip list
Package Version
------------------------ ----------
accelerate 0.13.2
numpy 1.23.4
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
packaging 21.3
pip 22.3
psutil 5.9.3
pyparsing 3.0.9
PyYAML 6.0
setuptools 65.5.0
torch 1.13.0
typing_extensions 4.4.0
wheel 0.37.1
```
Same issue with torch-cpu.
OS: Ubuntu 20.04
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import pickle
import accelerate
import torch
model = torch.nn.Linear(10, 10)
optimizer = torch.optim.SGD(model.parameters(), 0.1)
pickle.loads(pickle.dumps(optimizer)) # works
accelerator = accelerate.Accelerator(mixed_precision='fp16')
optimizer2 = accelerator.prepare(optimizer)
pickle.loads(pickle.dumps(optimizer2)) # fails
```
### Expected behavior
I ran into a problem with accelerate and PyTorch 1.13. It appears that optimizers cannot be pickled anymore after being accelerated. When running the attached script, I get:
```
Traceback (most recent call last):
File ".../skorch/foo.py", line 12, in <module>
pickle.loads(pickle.dumps(optimizer2))
File ".../torch/optim/optimizer.py", line 84, in __setstate__
self.defaults.setdefault('differentiable', False)
File ".../accelerate/optimizer.py", line 90, in defaults
return self.optimizer.defaults
AttributeError: 'AcceleratedOptimizer' object has no attribute 'optimizer'
```
The offending addition on PyTorch seems to be this line:
https://github.com/pytorch/pytorch/blob/23fe6c8ca15ec2cf6ea74f93aa91cae343ea534f/torch/optim/optimizer.py#L84
which was not present in PyTorch 1.12. At object creation time, PyTorch now tries to access the `defaults` attribute, which in turn calls the `defaults` property in accelerate, which requires the `optimizer` attribute, which doesn't exist and thus errors. At first glance, it looks like `AcceleratedOptimizer` might need its own `__getstate__` and `__setstate__` to solve this but I'm not sure.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/accelerate/optimizer.py`
Content:
```
1 # Copyright 2021 The HuggingFace Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import inspect
16 import warnings
17
18 import torch
19
20 from .state import AcceleratorState, GradientState
21 from .utils import DistributedType, honor_type, is_torch_version, is_tpu_available
22
23
24 if is_tpu_available(check_device=False):
25 import torch_xla.core.xla_model as xm
26
27
28 def move_to_device(state, device):
29 if isinstance(state, (list, tuple)):
30 return honor_type(state, (move_to_device(t, device) for t in state))
31 elif isinstance(state, dict):
32 return type(state)({k: move_to_device(v, device) for k, v in state.items()})
33 elif isinstance(state, torch.Tensor):
34 return state.to(device)
35 return state
36
37
38 class AcceleratedOptimizer(torch.optim.Optimizer):
39 """
40 Internal wrapper around a torch optimizer.
41
42 Conditionally will perform `step` and `zero_grad` if gradients should be synchronized when performing gradient
43 accumulation.
44
45 Args:
46 optimizer (`torch.optim.optimizer.Optimizer`):
47 The optimizer to wrap.
48 device_placement (`bool`, *optional*, defaults to `True`):
49 Whether or not the optimizer should handle device placement. If so, it will place the state dictionary of
50 `optimizer` on the right device.
51 scaler (`torch.cuda.amp.grad_scaler.GradScaler`, *optional*):
52 The scaler to use in the step function if training with mixed precision.
53 """
54
55 def __init__(self, optimizer, device_placement=True, scaler=None):
56 self.optimizer = optimizer
57 self.scaler = scaler
58 self.accelerator_state = AcceleratorState()
59 self.gradient_state = GradientState()
60 self.device_placement = device_placement
61 self._is_overflow = False
62
63 # Handle device placement
64 if device_placement:
65 state_dict = self.optimizer.state_dict()
66 if self.accelerator_state.distributed_type == DistributedType.TPU:
67 xm.send_cpu_data_to_device(state_dict, self.accelerator_state.device)
68 else:
69 state_dict = move_to_device(state_dict, self.accelerator_state.device)
70 self.optimizer.load_state_dict(state_dict)
71
72 @property
73 def state(self):
74 return self.optimizer.state
75
76 @state.setter
77 def state(self, state):
78 self.optimizer.state = state
79
80 @property
81 def param_groups(self):
82 return self.optimizer.param_groups
83
84 @param_groups.setter
85 def param_groups(self, param_groups):
86 self.optimizer.param_groups = param_groups
87
88 @property
89 def defaults(self):
90 return self.optimizer.defaults
91
92 @defaults.setter
93 def defaults(self, defaults):
94 self.optimizer.defaults = defaults
95
96 def add_param_group(self, param_group):
97 self.optimizer.add_param_group(param_group)
98
99 def load_state_dict(self, state_dict):
100 if self.accelerator_state.distributed_type == DistributedType.TPU and self.device_placement:
101 xm.send_cpu_data_to_device(state_dict, self.accelerator_state.device)
102 self.optimizer.load_state_dict(state_dict)
103
104 def state_dict(self):
105 return self.optimizer.state_dict()
106
107 def zero_grad(self, set_to_none=None):
108 if self.gradient_state.sync_gradients:
109 if is_torch_version("<", "1.7.0"):
110 if set_to_none is not None:
111 raise ValueError(
112 "`set_to_none` for Optimizer.zero_grad` was introduced in PyTorch 1.7.0 and can't be used for "
113 f"earlier versions (found version {torch.__version__})."
114 )
115 self.optimizer.zero_grad()
116 else:
117 accept_arg = "set_to_none" in inspect.signature(self.optimizer.zero_grad).parameters
118 if accept_arg:
119 if set_to_none is None:
120 set_to_none = False
121 self.optimizer.zero_grad(set_to_none=set_to_none)
122 else:
123 if set_to_none is not None:
124 raise ValueError("`set_to_none` for Optimizer.zero_grad` is not supported by this optimizer.")
125 self.optimizer.zero_grad()
126
127 def step(self, closure=None):
128 if self.gradient_state.sync_gradients:
129 if self.accelerator_state.distributed_type == DistributedType.TPU:
130 optimizer_args = {"closure": closure} if closure is not None else {}
131 xm.optimizer_step(self.optimizer, optimizer_args=optimizer_args)
132 elif self.scaler is not None:
133 scale_before = self.scaler.get_scale()
134 self.scaler.step(self.optimizer, closure)
135 self.scaler.update()
136 scale_after = self.scaler.get_scale()
137 # If we reduced the loss scale, it means the optimizer step was skipped because of gradient overflow.
138 self._is_overflow = scale_after < scale_before
139 else:
140 self.optimizer.step(closure)
141
142 def _switch_parameters(self, parameters_map):
143 for param_group in self.optimizer.param_groups:
144 param_group["params"] = [parameters_map.get(p, p) for p in param_group["params"]]
145
146 @property
147 def is_overflow(self):
148 """Whether or not the optimizer step was done, or skipped because of gradient overflow."""
149 warnings.warn(
150 "The `is_overflow` property is deprecated and will be removed in version 1.0 of Accelerate use "
151 "`optimizer.step_was_skipped` instead.",
152 FutureWarning,
153 )
154 return self._is_overflow
155
156 @property
157 def step_was_skipped(self):
158 """Whether or not the optimizer step was skipped."""
159 return self._is_overflow
160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/accelerate/optimizer.py b/src/accelerate/optimizer.py
--- a/src/accelerate/optimizer.py
+++ b/src/accelerate/optimizer.py
@@ -157,3 +157,9 @@
def step_was_skipped(self):
"""Whether or not the optimizer step was skipped."""
return self._is_overflow
+
+ def __getstate__(self):
+ return self.__dict__.copy()
+
+ def __setstate__(self, state):
+ self.__dict__.update(state)
|
{"golden_diff": "diff --git a/src/accelerate/optimizer.py b/src/accelerate/optimizer.py\n--- a/src/accelerate/optimizer.py\n+++ b/src/accelerate/optimizer.py\n@@ -157,3 +157,9 @@\n def step_was_skipped(self):\n \"\"\"Whether or not the optimizer step was skipped.\"\"\"\n return self._is_overflow\n+\n+ def __getstate__(self):\n+ return self.__dict__.copy()\n+\n+ def __setstate__(self, state):\n+ self.__dict__.update(state)\n", "issue": "Error when pickling accelerated optimizers with PyTorch 1.13\n### System Info\r\n\r\n```Shell\r\n$ pip list\r\nPackage Version\r\n------------------------ ----------\r\naccelerate 0.13.2\r\nnumpy 1.23.4\r\nnvidia-cublas-cu11 11.10.3.66\r\nnvidia-cuda-nvrtc-cu11 11.7.99\r\nnvidia-cuda-runtime-cu11 11.7.99\r\nnvidia-cudnn-cu11 8.5.0.96\r\npackaging 21.3\r\npip 22.3\r\npsutil 5.9.3\r\npyparsing 3.0.9\r\nPyYAML 6.0\r\nsetuptools 65.5.0\r\ntorch 1.13.0\r\ntyping_extensions 4.4.0\r\nwheel 0.37.1\r\n```\r\n\r\nSame issue with torch-cpu.\r\n\r\nOS: Ubuntu 20.04\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\r\n- [X] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\n```python\r\nimport pickle\r\nimport accelerate\r\nimport torch\r\n\r\nmodel = torch.nn.Linear(10, 10)\r\noptimizer = torch.optim.SGD(model.parameters(), 0.1)\r\npickle.loads(pickle.dumps(optimizer)) # works\r\n\r\naccelerator = accelerate.Accelerator(mixed_precision='fp16')\r\noptimizer2 = accelerator.prepare(optimizer)\r\npickle.loads(pickle.dumps(optimizer2)) # fails\r\n```\r\n\r\n### Expected behavior\r\n\r\nI ran into a problem with accelerate and PyTorch 1.13. It appears that optimizers cannot be pickled anymore after being accelerated. When running the attached script, I get:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \".../skorch/foo.py\", line 12, in <module>\r\n pickle.loads(pickle.dumps(optimizer2))\r\n File \".../torch/optim/optimizer.py\", line 84, in __setstate__\r\n self.defaults.setdefault('differentiable', False)\r\n File \".../accelerate/optimizer.py\", line 90, in defaults\r\n return self.optimizer.defaults\r\nAttributeError: 'AcceleratedOptimizer' object has no attribute 'optimizer'\r\n```\r\n\r\nThe offending addition on PyTorch seems to be this line:\r\n\r\nhttps://github.com/pytorch/pytorch/blob/23fe6c8ca15ec2cf6ea74f93aa91cae343ea534f/torch/optim/optimizer.py#L84\r\n\r\nwhich was not present in PyTorch 1.12. At object creation time, PyTorch now tries to access the `defaults` attribute, which in turn calls the `defaults` property in accelerate, which requires the `optimizer` attribute, which doesn't exist and thus errors. At first glance, it looks like `AcceleratedOptimizer` might need its own `__getstate__` and `__setstate__` to solve this but I'm not sure.\r\n\n", "before_files": [{"content": "# Copyright 2021 The HuggingFace Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport inspect\nimport warnings\n\nimport torch\n\nfrom .state import AcceleratorState, GradientState\nfrom .utils import DistributedType, honor_type, is_torch_version, is_tpu_available\n\n\nif is_tpu_available(check_device=False):\n import torch_xla.core.xla_model as xm\n\n\ndef move_to_device(state, device):\n if isinstance(state, (list, tuple)):\n return honor_type(state, (move_to_device(t, device) for t in state))\n elif isinstance(state, dict):\n return type(state)({k: move_to_device(v, device) for k, v in state.items()})\n elif isinstance(state, torch.Tensor):\n return state.to(device)\n return state\n\n\nclass AcceleratedOptimizer(torch.optim.Optimizer):\n \"\"\"\n Internal wrapper around a torch optimizer.\n\n Conditionally will perform `step` and `zero_grad` if gradients should be synchronized when performing gradient\n accumulation.\n\n Args:\n optimizer (`torch.optim.optimizer.Optimizer`):\n The optimizer to wrap.\n device_placement (`bool`, *optional*, defaults to `True`):\n Whether or not the optimizer should handle device placement. If so, it will place the state dictionary of\n `optimizer` on the right device.\n scaler (`torch.cuda.amp.grad_scaler.GradScaler`, *optional*):\n The scaler to use in the step function if training with mixed precision.\n \"\"\"\n\n def __init__(self, optimizer, device_placement=True, scaler=None):\n self.optimizer = optimizer\n self.scaler = scaler\n self.accelerator_state = AcceleratorState()\n self.gradient_state = GradientState()\n self.device_placement = device_placement\n self._is_overflow = False\n\n # Handle device placement\n if device_placement:\n state_dict = self.optimizer.state_dict()\n if self.accelerator_state.distributed_type == DistributedType.TPU:\n xm.send_cpu_data_to_device(state_dict, self.accelerator_state.device)\n else:\n state_dict = move_to_device(state_dict, self.accelerator_state.device)\n self.optimizer.load_state_dict(state_dict)\n\n @property\n def state(self):\n return self.optimizer.state\n\n @state.setter\n def state(self, state):\n self.optimizer.state = state\n\n @property\n def param_groups(self):\n return self.optimizer.param_groups\n\n @param_groups.setter\n def param_groups(self, param_groups):\n self.optimizer.param_groups = param_groups\n\n @property\n def defaults(self):\n return self.optimizer.defaults\n\n @defaults.setter\n def defaults(self, defaults):\n self.optimizer.defaults = defaults\n\n def add_param_group(self, param_group):\n self.optimizer.add_param_group(param_group)\n\n def load_state_dict(self, state_dict):\n if self.accelerator_state.distributed_type == DistributedType.TPU and self.device_placement:\n xm.send_cpu_data_to_device(state_dict, self.accelerator_state.device)\n self.optimizer.load_state_dict(state_dict)\n\n def state_dict(self):\n return self.optimizer.state_dict()\n\n def zero_grad(self, set_to_none=None):\n if self.gradient_state.sync_gradients:\n if is_torch_version(\"<\", \"1.7.0\"):\n if set_to_none is 
not None:\n raise ValueError(\n \"`set_to_none` for Optimizer.zero_grad` was introduced in PyTorch 1.7.0 and can't be used for \"\n f\"earlier versions (found version {torch.__version__}).\"\n )\n self.optimizer.zero_grad()\n else:\n accept_arg = \"set_to_none\" in inspect.signature(self.optimizer.zero_grad).parameters\n if accept_arg:\n if set_to_none is None:\n set_to_none = False\n self.optimizer.zero_grad(set_to_none=set_to_none)\n else:\n if set_to_none is not None:\n raise ValueError(\"`set_to_none` for Optimizer.zero_grad` is not supported by this optimizer.\")\n self.optimizer.zero_grad()\n\n def step(self, closure=None):\n if self.gradient_state.sync_gradients:\n if self.accelerator_state.distributed_type == DistributedType.TPU:\n optimizer_args = {\"closure\": closure} if closure is not None else {}\n xm.optimizer_step(self.optimizer, optimizer_args=optimizer_args)\n elif self.scaler is not None:\n scale_before = self.scaler.get_scale()\n self.scaler.step(self.optimizer, closure)\n self.scaler.update()\n scale_after = self.scaler.get_scale()\n # If we reduced the loss scale, it means the optimizer step was skipped because of gradient overflow.\n self._is_overflow = scale_after < scale_before\n else:\n self.optimizer.step(closure)\n\n def _switch_parameters(self, parameters_map):\n for param_group in self.optimizer.param_groups:\n param_group[\"params\"] = [parameters_map.get(p, p) for p in param_group[\"params\"]]\n\n @property\n def is_overflow(self):\n \"\"\"Whether or not the optimizer step was done, or skipped because of gradient overflow.\"\"\"\n warnings.warn(\n \"The `is_overflow` property is deprecated and will be removed in version 1.0 of Accelerate use \"\n \"`optimizer.step_was_skipped` instead.\",\n FutureWarning,\n )\n return self._is_overflow\n\n @property\n def step_was_skipped(self):\n \"\"\"Whether or not the optimizer step was skipped.\"\"\"\n return self._is_overflow\n", "path": "src/accelerate/optimizer.py"}], "after_files": [{"content": "# Copyright 2021 The HuggingFace Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport inspect\nimport warnings\n\nimport torch\n\nfrom .state import AcceleratorState, GradientState\nfrom .utils import DistributedType, honor_type, is_torch_version, is_tpu_available\n\n\nif is_tpu_available(check_device=False):\n import torch_xla.core.xla_model as xm\n\n\ndef move_to_device(state, device):\n if isinstance(state, (list, tuple)):\n return honor_type(state, (move_to_device(t, device) for t in state))\n elif isinstance(state, dict):\n return type(state)({k: move_to_device(v, device) for k, v in state.items()})\n elif isinstance(state, torch.Tensor):\n return state.to(device)\n return state\n\n\nclass AcceleratedOptimizer(torch.optim.Optimizer):\n \"\"\"\n Internal wrapper around a torch optimizer.\n\n Conditionally will perform `step` and `zero_grad` if gradients should be synchronized when performing gradient\n accumulation.\n\n Args:\n optimizer (`torch.optim.optimizer.Optimizer`):\n The optimizer to wrap.\n device_placement (`bool`, *optional*, defaults to `True`):\n Whether or not the optimizer should handle device placement. If so, it will place the state dictionary of\n `optimizer` on the right device.\n scaler (`torch.cuda.amp.grad_scaler.GradScaler`, *optional*):\n The scaler to use in the step function if training with mixed precision.\n \"\"\"\n\n def __init__(self, optimizer, device_placement=True, scaler=None):\n self.optimizer = optimizer\n self.scaler = scaler\n self.accelerator_state = AcceleratorState()\n self.gradient_state = GradientState()\n self.device_placement = device_placement\n self._is_overflow = False\n\n # Handle device placement\n if device_placement:\n state_dict = self.optimizer.state_dict()\n if self.accelerator_state.distributed_type == DistributedType.TPU:\n xm.send_cpu_data_to_device(state_dict, self.accelerator_state.device)\n else:\n state_dict = move_to_device(state_dict, self.accelerator_state.device)\n self.optimizer.load_state_dict(state_dict)\n\n @property\n def state(self):\n return self.optimizer.state\n\n @state.setter\n def state(self, state):\n self.optimizer.state = state\n\n @property\n def param_groups(self):\n return self.optimizer.param_groups\n\n @param_groups.setter\n def param_groups(self, param_groups):\n self.optimizer.param_groups = param_groups\n\n @property\n def defaults(self):\n return self.optimizer.defaults\n\n @defaults.setter\n def defaults(self, defaults):\n self.optimizer.defaults = defaults\n\n def add_param_group(self, param_group):\n self.optimizer.add_param_group(param_group)\n\n def load_state_dict(self, state_dict):\n if self.accelerator_state.distributed_type == DistributedType.TPU and self.device_placement:\n xm.send_cpu_data_to_device(state_dict, self.accelerator_state.device)\n self.optimizer.load_state_dict(state_dict)\n\n def state_dict(self):\n return self.optimizer.state_dict()\n\n def zero_grad(self, set_to_none=None):\n if self.gradient_state.sync_gradients:\n if is_torch_version(\"<\", \"1.7.0\"):\n if set_to_none is 
not None:\n raise ValueError(\n \"`set_to_none` for Optimizer.zero_grad` was introduced in PyTorch 1.7.0 and can't be used for \"\n f\"earlier versions (found version {torch.__version__}).\"\n )\n self.optimizer.zero_grad()\n else:\n accept_arg = \"set_to_none\" in inspect.signature(self.optimizer.zero_grad).parameters\n if accept_arg:\n if set_to_none is None:\n set_to_none = False\n self.optimizer.zero_grad(set_to_none=set_to_none)\n else:\n if set_to_none is not None:\n raise ValueError(\"`set_to_none` for Optimizer.zero_grad` is not supported by this optimizer.\")\n self.optimizer.zero_grad()\n\n def step(self, closure=None):\n if self.gradient_state.sync_gradients:\n if self.accelerator_state.distributed_type == DistributedType.TPU:\n optimizer_args = {\"closure\": closure} if closure is not None else {}\n xm.optimizer_step(self.optimizer, optimizer_args=optimizer_args)\n elif self.scaler is not None:\n scale_before = self.scaler.get_scale()\n self.scaler.step(self.optimizer, closure)\n self.scaler.update()\n scale_after = self.scaler.get_scale()\n # If we reduced the loss scale, it means the optimizer step was skipped because of gradient overflow.\n self._is_overflow = scale_after < scale_before\n else:\n self.optimizer.step(closure)\n\n def _switch_parameters(self, parameters_map):\n for param_group in self.optimizer.param_groups:\n param_group[\"params\"] = [parameters_map.get(p, p) for p in param_group[\"params\"]]\n\n @property\n def is_overflow(self):\n \"\"\"Whether or not the optimizer step was done, or skipped because of gradient overflow.\"\"\"\n warnings.warn(\n \"The `is_overflow` property is deprecated and will be removed in version 1.0 of Accelerate use \"\n \"`optimizer.step_was_skipped` instead.\",\n FutureWarning,\n )\n return self._is_overflow\n\n @property\n def step_was_skipped(self):\n \"\"\"Whether or not the optimizer step was skipped.\"\"\"\n return self._is_overflow\n\n def __getstate__(self):\n return self.__dict__.copy()\n\n def __setstate__(self, state):\n self.__dict__.update(state)\n", "path": "src/accelerate/optimizer.py"}]}
| 2,664 | 120 |
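To see why the two small `__getstate__`/`__setstate__` overrides in the golden diff are enough, the standard-library sketch below mimics the shape of the failure: a base class standing in for `torch.optim.Optimizer` pickles a partial state and touches `defaults` while unpickling (the PyTorch 1.13 behaviour from the traceback), and a wrapper standing in for `AcceleratedOptimizer` exposes `defaults` as a property over a wrapped object that the partial state never restores. The class names and the tiny `Inner` object are invented for illustration; only the override pattern matches the actual patch.

```python
# Standalone illustration (no torch or accelerate needed) of the pickling failure and the fix.
import pickle


class Inner:
    defaults = {"lr": 0.1}


class Base:
    # Stands in for torch.optim.Optimizer: it pickles a partial state and its
    # __setstate__ touches self.defaults, as PyTorch 1.13 does.
    def __getstate__(self):
        return {"defaults": self.defaults}

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.defaults


class BrokenWrapper(Base):
    # Stands in for AcceleratedOptimizer before the patch.
    def __init__(self, inner):
        self.inner = inner

    @property
    def defaults(self):
        return self.inner.defaults  # needs self.inner, which the partial state lacks


class FixedWrapper(BrokenWrapper):
    # The patch: pickle the whole __dict__ and restore it without calling Base.__setstate__.
    def __getstate__(self):
        return self.__dict__.copy()

    def __setstate__(self, state):
        self.__dict__.update(state)


try:
    pickle.loads(pickle.dumps(BrokenWrapper(Inner())))
except AttributeError as err:
    print("broken wrapper:", err)   # ... object has no attribute 'inner'

restored = pickle.loads(pickle.dumps(FixedWrapper(Inner())))
print("fixed wrapper:", restored.defaults)  # {'lr': 0.1}
```

Copying `__dict__` wholesale is the same design choice the patch makes: the wrapped optimizer stays in the pickled payload and `torch.optim.Optimizer.__setstate__` stays out of the restore path.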
gh_patches_debug_5768
|
rasdani/github-patches
|
git_diff
|
LMFDB__lmfdb-5112
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Problem with zeta zeroes download script
The Python script at https://github.com/LMFDB/lmfdb/blob/master/lmfdb/zeros/zeta/platt_zeros.py, created by David Platt to parse downloaded zeta zeros files, seems to no longer work correctly, possibly because it is missing a file `index.db` that should be in the download directory at https://beta.lmfdb.org/data/riemann-zeta-zeros/.
One can work around this problem by applying the following changes:
Change #1
- line 70 in the code was " infile.seek(offset, 0)"
- changed line 70 to " infile.seek(offset, 1)"
Change #2
- using sqlite3, create table "zero_index"
- in table "zero_index" insert columns "t, N, filename, offset, block_number"
- add a row to the zero_index table for each file you want to read (e.g. "1, 1, 'zeros_14.dat', 0, 1" for the first file)
We should recover or recreate the missing index file and/or modify the script platt_zeros.py as needed so that it works correctly without these manual workarounds.
--- END ISSUE ---
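As an aside, the second workaround above amounts to rebuilding a small SQLite index by hand. A minimal sketch of that step follows; the column names and the sample row come from the issue text, while the file name `index.db` and the column types are assumptions, so the real schema expected by platt_zeros.py may differ.
```python
import sqlite3

# Stand-in for the missing index.db: one table, one row per downloaded file.
conn = sqlite3.connect("index.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS zero_index (
        t REAL,
        N INTEGER,
        filename TEXT,
        "offset" INTEGER,   -- quoted because OFFSET is an SQL keyword
        block_number INTEGER
    )
    """
)
# The first data file, using the sample values given in the issue.
conn.execute(
    'INSERT INTO zero_index (t, N, filename, "offset", block_number) '
    "VALUES (?, ?, ?, ?, ?)",
    (1, 1, "zeros_14.dat", 0, 1),
)
conn.commit()
conn.close()
```
The first workaround change, seeking relative to the current position (whence 1) instead of the start of the file (whence 0), is a one-character edit inside platt_zeros.py and is not repeated here.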
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lmfdb/zeros/zeta/zetazeros.py`
Content:
```
1
2 from mpmath import nstr, inf
3 from sage.all import floor, log
4 from lmfdb.logger import make_logger
5 from flask import render_template, request, url_for, Blueprint, Response
6
7 ZetaZeros = Blueprint("zeta zeros", __name__, template_folder="templates")
8 logger = make_logger(ZetaZeros)
9
10 from .platt_zeros import zeros_starting_at_N, zeros_starting_at_t
11
12 credit_string = "David Platt"
13
14
15 def learnmore_list():
16 return [('Completeness of the data', url_for(".completeness")),
17 ('Source of the data', url_for(".source")),
18 ('Reliability of the data', url_for(".reliability"))]
19
20 def friends_list():
21 return [('L-function', url_for("l_functions.l_function_riemann_page"))]
22
23 def downloads():
24 return [('Bulk download', "https://beta.lmfdb.org/data/riemann-zeta-zeros/")]
25
26 # Return the learnmore list with the matchstring entry removed
27 def learnmore_list_remove(matchstring):
28 return [t for t in learnmore_list() if t[0].find(matchstring) < 0]
29
30
31 @ZetaZeros.route("/")
32 def zetazeros():
33 N = request.args.get("N", None, int)
34 t = request.args.get("t", 0, float)
35 limit = request.args.get("limit", 100, int)
36 if limit > 1000:
37 return list_zeros(N=N, t=t, limit=limit)
38 else:
39 title = r"Zeros of $\zeta(s)$"
40 bread = [("L-functions", url_for("l_functions.index")), (r'Zeros of $\zeta(s)$', ' ')]
41 return render_template('zeta.html', N=N, t=t, limit=limit, title=title, bread=bread, learnmore=learnmore_list(), friends=friends_list(), downloads=downloads())
42
43
44 @ZetaZeros.route("/Completeness")
45 def completeness():
46 t = 'Completeness of Riemann zeta zeros data'
47 bread = [("L-functions", url_for("l_functions.index")),(r"Zeros of $\zeta(s)$", url_for(".zetazeros")),('Completeness', ' ')]
48 return render_template("single.html", kid='rcs.cande.zeros.zeta', credit=credit_string, title=t, bread=bread, learnmore=learnmore_list_remove('Completeness'))
49
50 @ZetaZeros.route("/Source")
51 def source():
52 t = 'Source of Riemann zeta zeros data'
53 bread = [("L-functions", url_for("l_functions.index")),(r"Zeros of $\zeta(s)$", url_for(".zetazeros")),('Source', ' ')]
54 return render_template("single.html", kid='rcs.source.zeros.zeta', credit=credit_string, title=t, bread=bread, learnmore=learnmore_list_remove('Source'))
55
56 @ZetaZeros.route("/Reliability")
57 def reliability():
58 t = 'Reliability of Riemann zeta zeros data'
59 bread = [("L-functions", url_for("l_functions.index")),(r"Zeros of $\zeta(s)$", url_for(".zetazeros")),('Reliability', ' ')]
60 return render_template("single.html", kid='rcs.rigor.zeros.zeta', credit=credit_string, title=t, bread=bread, learnmore=learnmore_list_remove('Reliability'))
61
62 @ZetaZeros.route("/list")
63 def list_zeros(N=None,
64 t=None,
65 limit=None,
66 fmt=None,
67 download=None):
68 if N is None:
69 N = request.args.get("N", None, int)
70 if t is None:
71 t = request.args.get("t", 0, float)
72 if limit is None:
73 limit = request.args.get("limit", 100, int)
74 if fmt is None:
75 fmt = request.args.get("format", "plain")
76 if download is None:
77 download = request.args.get("download", "no")
78
79 if limit < 0:
80 limit = 100
81 if N is not None: # None is < 0!! WHAT THE WHAT!
82 if N < 0:
83 N = 0
84 if t < 0:
85 t = 0
86
87 if limit > 100000:
88 # limit = 100000
89 #
90 bread = [("L-functions", url_for("l_functions.index")),(r"Zeros of $\zeta(s)$", url_for(".zetazeros"))]
91 return render_template('single.html', title="Too many zeros", bread=bread, kid = "dq.zeros.zeta.toomany")
92
93 if N is not None:
94 zeros = zeros_starting_at_N(N, limit)
95 else:
96 zeros = zeros_starting_at_t(t, limit)
97
98 if fmt == 'plain':
99 response = Response(("%d %s\n" % (n, nstr(z,31+floor(log(z,10))+1,strip_zeros=False,min_fixed=-inf,max_fixed=+inf)) for (n, z) in zeros))
100 response.headers['content-type'] = 'text/plain'
101 if download == "yes":
102 response.headers['content-disposition'] = 'attachment; filename=zetazeros'
103 else:
104 response = str(list(zeros))
105
106 return response
107
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lmfdb/zeros/zeta/zetazeros.py b/lmfdb/zeros/zeta/zetazeros.py
--- a/lmfdb/zeros/zeta/zetazeros.py
+++ b/lmfdb/zeros/zeta/zetazeros.py
@@ -21,7 +21,7 @@
return [('L-function', url_for("l_functions.l_function_riemann_page"))]
def downloads():
- return [('Bulk download', "https://beta.lmfdb.org/data/riemann-zeta-zeros/")]
+ return [('Bulk download', "https://beta.lmfdb.org/riemann-zeta-zeros/")]
# Return the learnmore list with the matchstring entry removed
def learnmore_list_remove(matchstring):
|
{"golden_diff": "diff --git a/lmfdb/zeros/zeta/zetazeros.py b/lmfdb/zeros/zeta/zetazeros.py\n--- a/lmfdb/zeros/zeta/zetazeros.py\n+++ b/lmfdb/zeros/zeta/zetazeros.py\n@@ -21,7 +21,7 @@\n return [('L-function', url_for(\"l_functions.l_function_riemann_page\"))]\n \n def downloads():\n- return [('Bulk download', \"https://beta.lmfdb.org/data/riemann-zeta-zeros/\")]\n+ return [('Bulk download', \"https://beta.lmfdb.org/riemann-zeta-zeros/\")]\n \n # Return the learnmore list with the matchstring entry removed\n def learnmore_list_remove(matchstring):\n", "issue": "Problem with zeta zeroes download script\nThe python script at https://github.com/LMFDB/lmfdb/blob/master/lmfdb/zeros/zeta/platt_zeros.py created by David Platt to parse downloaded zeta zeros files seems to no longer work correctly, possibly because of missing a file `index.db` that should be in the download directory at https://beta.lmfdb.org/data/riemann-zeta-zeros/.\r\n\r\nOne can work around this problem by applying the following changes\r\n\r\nChange #1\r\n- line 70 in the code was \" infile.seek(offset, 0)\"\r\n- changed line 70 to \" infile.seek(offset, 1)\"\r\n\r\nChange #2\r\n- using sqlite3, create table \"zero_index\"\r\n- in table \"zero_index\" insert columns \"t, N, filename, offset, block_number\"\r\n- add a row to the zero_index table for each file you want to read (e.g. \"1, 1, 'zeros_14.dat', 0, 1\" for the first file)\r\n\r\nWe should recover or recreate the missing index file and/or modify the script platt_zeros.py as required so that it works correctly without any changes.\r\n\n", "before_files": [{"content": "\nfrom mpmath import nstr, inf\nfrom sage.all import floor, log\nfrom lmfdb.logger import make_logger\nfrom flask import render_template, request, url_for, Blueprint, Response\n\nZetaZeros = Blueprint(\"zeta zeros\", __name__, template_folder=\"templates\")\nlogger = make_logger(ZetaZeros)\n\nfrom .platt_zeros import zeros_starting_at_N, zeros_starting_at_t\n\ncredit_string = \"David Platt\"\n\n\ndef learnmore_list():\n return [('Completeness of the data', url_for(\".completeness\")),\n ('Source of the data', url_for(\".source\")),\n ('Reliability of the data', url_for(\".reliability\"))]\n\ndef friends_list():\n return [('L-function', url_for(\"l_functions.l_function_riemann_page\"))]\n\ndef downloads():\n return [('Bulk download', \"https://beta.lmfdb.org/data/riemann-zeta-zeros/\")]\n\n# Return the learnmore list with the matchstring entry removed\ndef learnmore_list_remove(matchstring):\n return [t for t in learnmore_list() if t[0].find(matchstring) < 0]\n\n\[email protected](\"/\")\ndef zetazeros():\n N = request.args.get(\"N\", None, int)\n t = request.args.get(\"t\", 0, float)\n limit = request.args.get(\"limit\", 100, int)\n if limit > 1000:\n return list_zeros(N=N, t=t, limit=limit)\n else:\n title = r\"Zeros of $\\zeta(s)$\"\n bread = [(\"L-functions\", url_for(\"l_functions.index\")), (r'Zeros of $\\zeta(s)$', ' ')]\n return render_template('zeta.html', N=N, t=t, limit=limit, title=title, bread=bread, learnmore=learnmore_list(), friends=friends_list(), downloads=downloads())\n\n\[email protected](\"/Completeness\")\ndef completeness():\n t = 'Completeness of Riemann zeta zeros data'\n bread = [(\"L-functions\", url_for(\"l_functions.index\")),(r\"Zeros of $\\zeta(s)$\", url_for(\".zetazeros\")),('Completeness', ' ')]\n return render_template(\"single.html\", kid='rcs.cande.zeros.zeta', credit=credit_string, title=t, bread=bread, learnmore=learnmore_list_remove('Completeness'))\n\[email 
protected](\"/Source\")\ndef source():\n t = 'Source of Riemann zeta zeros data'\n bread = [(\"L-functions\", url_for(\"l_functions.index\")),(r\"Zeros of $\\zeta(s)$\", url_for(\".zetazeros\")),('Source', ' ')]\n return render_template(\"single.html\", kid='rcs.source.zeros.zeta', credit=credit_string, title=t, bread=bread, learnmore=learnmore_list_remove('Source'))\n\[email protected](\"/Reliability\")\ndef reliability():\n t = 'Reliability of Riemann zeta zeros data'\n bread = [(\"L-functions\", url_for(\"l_functions.index\")),(r\"Zeros of $\\zeta(s)$\", url_for(\".zetazeros\")),('Reliability', ' ')]\n return render_template(\"single.html\", kid='rcs.rigor.zeros.zeta', credit=credit_string, title=t, bread=bread, learnmore=learnmore_list_remove('Reliability'))\n\[email protected](\"/list\")\ndef list_zeros(N=None,\n t=None,\n limit=None,\n fmt=None,\n download=None):\n if N is None:\n N = request.args.get(\"N\", None, int)\n if t is None:\n t = request.args.get(\"t\", 0, float)\n if limit is None:\n limit = request.args.get(\"limit\", 100, int)\n if fmt is None:\n fmt = request.args.get(\"format\", \"plain\")\n if download is None:\n download = request.args.get(\"download\", \"no\")\n\n if limit < 0:\n limit = 100\n if N is not None: # None is < 0!! WHAT THE WHAT!\n if N < 0:\n N = 0\n if t < 0:\n t = 0\n\n if limit > 100000:\n # limit = 100000\n #\n bread = [(\"L-functions\", url_for(\"l_functions.index\")),(r\"Zeros of $\\zeta(s)$\", url_for(\".zetazeros\"))]\n return render_template('single.html', title=\"Too many zeros\", bread=bread, kid = \"dq.zeros.zeta.toomany\")\n\n if N is not None:\n zeros = zeros_starting_at_N(N, limit)\n else:\n zeros = zeros_starting_at_t(t, limit)\n\n if fmt == 'plain':\n response = Response((\"%d %s\\n\" % (n, nstr(z,31+floor(log(z,10))+1,strip_zeros=False,min_fixed=-inf,max_fixed=+inf)) for (n, z) in zeros))\n response.headers['content-type'] = 'text/plain'\n if download == \"yes\":\n response.headers['content-disposition'] = 'attachment; filename=zetazeros'\n else:\n response = str(list(zeros))\n\n return response\n", "path": "lmfdb/zeros/zeta/zetazeros.py"}], "after_files": [{"content": "\nfrom mpmath import nstr, inf\nfrom sage.all import floor, log\nfrom lmfdb.logger import make_logger\nfrom flask import render_template, request, url_for, Blueprint, Response\n\nZetaZeros = Blueprint(\"zeta zeros\", __name__, template_folder=\"templates\")\nlogger = make_logger(ZetaZeros)\n\nfrom .platt_zeros import zeros_starting_at_N, zeros_starting_at_t\n\ncredit_string = \"David Platt\"\n\n\ndef learnmore_list():\n return [('Completeness of the data', url_for(\".completeness\")),\n ('Source of the data', url_for(\".source\")),\n ('Reliability of the data', url_for(\".reliability\"))]\n\ndef friends_list():\n return [('L-function', url_for(\"l_functions.l_function_riemann_page\"))]\n\ndef downloads():\n return [('Bulk download', \"https://beta.lmfdb.org/riemann-zeta-zeros/\")]\n\n# Return the learnmore list with the matchstring entry removed\ndef learnmore_list_remove(matchstring):\n return [t for t in learnmore_list() if t[0].find(matchstring) < 0]\n\n\[email protected](\"/\")\ndef zetazeros():\n N = request.args.get(\"N\", None, int)\n t = request.args.get(\"t\", 0, float)\n limit = request.args.get(\"limit\", 100, int)\n if limit > 1000:\n return list_zeros(N=N, t=t, limit=limit)\n else:\n title = r\"Zeros of $\\zeta(s)$\"\n bread = [(\"L-functions\", url_for(\"l_functions.index\")), (r'Zeros of $\\zeta(s)$', ' ')]\n return render_template('zeta.html', N=N, t=t, 
limit=limit, title=title, bread=bread, learnmore=learnmore_list(), friends=friends_list(), downloads=downloads())\n\n\[email protected](\"/Completeness\")\ndef completeness():\n t = 'Completeness of Riemann zeta zeros data'\n bread = [(\"L-functions\", url_for(\"l_functions.index\")),(r\"Zeros of $\\zeta(s)$\", url_for(\".zetazeros\")),('Completeness', ' ')]\n return render_template(\"single.html\", kid='rcs.cande.zeros.zeta', credit=credit_string, title=t, bread=bread, learnmore=learnmore_list_remove('Completeness'))\n\[email protected](\"/Source\")\ndef source():\n t = 'Source of Riemann zeta zeros data'\n bread = [(\"L-functions\", url_for(\"l_functions.index\")),(r\"Zeros of $\\zeta(s)$\", url_for(\".zetazeros\")),('Source', ' ')]\n return render_template(\"single.html\", kid='rcs.source.zeros.zeta', credit=credit_string, title=t, bread=bread, learnmore=learnmore_list_remove('Source'))\n\[email protected](\"/Reliability\")\ndef reliability():\n t = 'Reliability of Riemann zeta zeros data'\n bread = [(\"L-functions\", url_for(\"l_functions.index\")),(r\"Zeros of $\\zeta(s)$\", url_for(\".zetazeros\")),('Reliability', ' ')]\n return render_template(\"single.html\", kid='rcs.rigor.zeros.zeta', credit=credit_string, title=t, bread=bread, learnmore=learnmore_list_remove('Reliability'))\n\[email protected](\"/list\")\ndef list_zeros(N=None,\n t=None,\n limit=None,\n fmt=None,\n download=None):\n if N is None:\n N = request.args.get(\"N\", None, int)\n if t is None:\n t = request.args.get(\"t\", 0, float)\n if limit is None:\n limit = request.args.get(\"limit\", 100, int)\n if fmt is None:\n fmt = request.args.get(\"format\", \"plain\")\n if download is None:\n download = request.args.get(\"download\", \"no\")\n\n if limit < 0:\n limit = 100\n if N is not None: # None is < 0!! WHAT THE WHAT!\n if N < 0:\n N = 0\n if t < 0:\n t = 0\n\n if limit > 100000:\n # limit = 100000\n #\n bread = [(\"L-functions\", url_for(\"l_functions.index\")),(r\"Zeros of $\\zeta(s)$\", url_for(\".zetazeros\"))]\n return render_template('single.html', title=\"Too many zeros\", bread=bread, kid = \"dq.zeros.zeta.toomany\")\n\n if N is not None:\n zeros = zeros_starting_at_N(N, limit)\n else:\n zeros = zeros_starting_at_t(t, limit)\n\n if fmt == 'plain':\n response = Response((\"%d %s\\n\" % (n, nstr(z,31+floor(log(z,10))+1,strip_zeros=False,min_fixed=-inf,max_fixed=+inf)) for (n, z) in zeros))\n response.headers['content-type'] = 'text/plain'\n if download == \"yes\":\n response.headers['content-disposition'] = 'attachment; filename=zetazeros'\n else:\n response = str(list(zeros))\n\n return response\n", "path": "lmfdb/zeros/zeta/zetazeros.py"}]}
| 1,900 | 166 |
gh_patches_debug_10670 | rasdani/github-patches | git_diff | freedomofpress__securedrop-4346 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update SQLAlchemy to 1.3.0
## Description
Multiple vulnerabilities were reported in SQLAlchemy <1.3.0:
- CVE-2019-7164 (moderate): SQLAlchemy through 1.2.17 and 1.3.x through 1.3.0b2 allows SQL Injection via the order_by parameter.
- CVE-2019-7548 (moderate): SQLAlchemy 1.2.17 has SQL Injection when the group_by parameter can be controlled.
We should update this dependency ASAP for the SecureDrop application.
--- END ISSUE ---
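To make the advisories concrete, the sketch below illustrates the pattern they describe. It is written against the SQLAlchemy 1.x ORM API, and the model, field names, and in-memory SQLite engine are invented for the example rather than taken from SecureDrop.
```python
from sqlalchemy import Column, Integer, String, create_engine, text
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Source(Base):
    __tablename__ = "sources"
    id = Column(Integer, primary_key=True)
    designation = Column(String)


engine = create_engine("sqlite://")          # throwaway in-memory database
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Imagine this value arriving straight from an HTTP request parameter.
user_supplied = "designation"

# Risky pattern: the text is emitted into the ORDER BY clause as raw SQL.
# Versions before 1.3 also accepted plain (potentially user-controlled)
# strings here and coerced them to text, which is what the CVEs describe.
session.query(Source).order_by(text(user_supplied)).all()

# Safer pattern: order by a mapped column attribute, never by untrusted text.
session.query(Source).order_by(Source.designation).all()
```
The remediation for the application itself is simply to pin SQLAlchemy at 1.3.0 or later in the relevant requirements files.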
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `install_files/ansible-base/callback_plugins/ansible_version_check.py`
Content:
```
1 # -*- encoding:utf-8 -*-
2 from __future__ import absolute_import, division, print_function, \
3 unicode_literals
4
5 import sys
6
7 import ansible
8
9 try:
10 # Version 2.0+
11 from ansible.plugins.callback import CallbackBase
12 except ImportError:
13 CallbackBase = object
14
15
16 def print_red_bold(text):
17 print('\x1b[31;1m' + text + '\x1b[0m')
18
19
20 class CallbackModule(CallbackBase):
21 def __init__(self):
22 # Can't use `on_X` because this isn't forwards compatible
23 # with Ansible 2.0+
24 required_version = '2.6.8' # Keep synchronized with requirements files
25 if not ansible.__version__.startswith(required_version):
26 print_red_bold(
27 "SecureDrop restriction: only Ansible {version}.*"
28 "is supported."
29 .format(version=required_version)
30 )
31 sys.exit(1)
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/install_files/ansible-base/callback_plugins/ansible_version_check.py b/install_files/ansible-base/callback_plugins/ansible_version_check.py
--- a/install_files/ansible-base/callback_plugins/ansible_version_check.py
+++ b/install_files/ansible-base/callback_plugins/ansible_version_check.py
@@ -21,7 +21,7 @@
def __init__(self):
# Can't use `on_X` because this isn't forwards compatible
# with Ansible 2.0+
- required_version = '2.6.8' # Keep synchronized with requirements files
+ required_version = '2.6.14' # Keep synchronized with requirements files
if not ansible.__version__.startswith(required_version):
print_red_bold(
"SecureDrop restriction: only Ansible {version}.*"
|
{"golden_diff": "diff --git a/install_files/ansible-base/callback_plugins/ansible_version_check.py b/install_files/ansible-base/callback_plugins/ansible_version_check.py\n--- a/install_files/ansible-base/callback_plugins/ansible_version_check.py\n+++ b/install_files/ansible-base/callback_plugins/ansible_version_check.py\n@@ -21,7 +21,7 @@\n def __init__(self):\n # Can't use `on_X` because this isn't forwards compatible\n # with Ansible 2.0+\n- required_version = '2.6.8' # Keep synchronized with requirements files\n+ required_version = '2.6.14' # Keep synchronized with requirements files\n if not ansible.__version__.startswith(required_version):\n print_red_bold(\n \"SecureDrop restriction: only Ansible {version}.*\"\n", "issue": "Update SQLAlchemy to 1.3.0\n## Description\r\n\r\nMultiple vulnerabilities were reported in SQLAlchemy <1.3.0:\r\n- CVE-2019-7164 (moderate) : SQLAlchemy through 1.2.17 and 1.3.x through 1.3.0b2 allows SQL Injection via the order_by parameter.\r\n- CVE-2019-7548 (moderate) : SQLAlchemy 1.2.17 has SQL Injection when the group_by parameter can be controlled.\r\n\r\nWe should update this dependency ASAP for the SecureDrop application.\r\n\n", "before_files": [{"content": "# -*- encoding:utf-8 -*-\nfrom __future__ import absolute_import, division, print_function, \\\n unicode_literals\n\nimport sys\n\nimport ansible\n\ntry:\n # Version 2.0+\n from ansible.plugins.callback import CallbackBase\nexcept ImportError:\n CallbackBase = object\n\n\ndef print_red_bold(text):\n print('\\x1b[31;1m' + text + '\\x1b[0m')\n\n\nclass CallbackModule(CallbackBase):\n def __init__(self):\n # Can't use `on_X` because this isn't forwards compatible\n # with Ansible 2.0+\n required_version = '2.6.8' # Keep synchronized with requirements files\n if not ansible.__version__.startswith(required_version):\n print_red_bold(\n \"SecureDrop restriction: only Ansible {version}.*\"\n \"is supported.\"\n .format(version=required_version)\n )\n sys.exit(1)\n", "path": "install_files/ansible-base/callback_plugins/ansible_version_check.py"}], "after_files": [{"content": "# -*- encoding:utf-8 -*-\nfrom __future__ import absolute_import, division, print_function, \\\n unicode_literals\n\nimport sys\n\nimport ansible\n\ntry:\n # Version 2.0+\n from ansible.plugins.callback import CallbackBase\nexcept ImportError:\n CallbackBase = object\n\n\ndef print_red_bold(text):\n print('\\x1b[31;1m' + text + '\\x1b[0m')\n\n\nclass CallbackModule(CallbackBase):\n def __init__(self):\n # Can't use `on_X` because this isn't forwards compatible\n # with Ansible 2.0+\n required_version = '2.6.14' # Keep synchronized with requirements files\n if not ansible.__version__.startswith(required_version):\n print_red_bold(\n \"SecureDrop restriction: only Ansible {version}.*\"\n \"is supported.\"\n .format(version=required_version)\n )\n sys.exit(1)\n", "path": "install_files/ansible-base/callback_plugins/ansible_version_check.py"}]}
| 654 | 176 |
gh_patches_debug_34201 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-12587 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
openload.co extractor not working
youtube-dl --get-url --verbose https://openload.co/embed/kUEfGclsU9o/
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'--get-url', u'--verbose', u'https://openload.co/embed/kUEfGclsU9o/']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2016.08.22
[debug] Python version 2.6.6 - Linux-2.6.32-642.1.1.el6.x86_64-x86_64-with-centos-6.8-Final
[debug] exe versions: ffmpeg 0.6.5, ffprobe 0.6.5
[debug] Proxy map: {}
ERROR: Unable to extract link image; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 691, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 347, in extract
return self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/openload.py", line 62, in _real_extract
r'<img[^>]+id="linkimg"[^>]+src="([^"]+)"', webpage, 'link image')
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 650, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
RegexNotFoundError: Unable to extract link image; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/openload.py`
Content:
```
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 import re
5
6 from .common import InfoExtractor
7 from ..compat import compat_chr
8 from ..utils import (
9 determine_ext,
10 ExtractorError,
11 )
12
13
14 class OpenloadIE(InfoExtractor):
15 _VALID_URL = r'https?://(?:openload\.(?:co|io)|oload\.tv)/(?:f|embed)/(?P<id>[a-zA-Z0-9-_]+)'
16
17 _TESTS = [{
18 'url': 'https://openload.co/f/kUEfGclsU9o',
19 'md5': 'bf1c059b004ebc7a256f89408e65c36e',
20 'info_dict': {
21 'id': 'kUEfGclsU9o',
22 'ext': 'mp4',
23 'title': 'skyrim_no-audio_1080.mp4',
24 'thumbnail': r're:^https?://.*\.jpg$',
25 },
26 }, {
27 'url': 'https://openload.co/embed/rjC09fkPLYs',
28 'info_dict': {
29 'id': 'rjC09fkPLYs',
30 'ext': 'mp4',
31 'title': 'movie.mp4',
32 'thumbnail': r're:^https?://.*\.jpg$',
33 'subtitles': {
34 'en': [{
35 'ext': 'vtt',
36 }],
37 },
38 },
39 'params': {
40 'skip_download': True, # test subtitles only
41 },
42 }, {
43 'url': 'https://openload.co/embed/kUEfGclsU9o/skyrim_no-audio_1080.mp4',
44 'only_matching': True,
45 }, {
46 'url': 'https://openload.io/f/ZAn6oz-VZGE/',
47 'only_matching': True,
48 }, {
49 'url': 'https://openload.co/f/_-ztPaZtMhM/',
50 'only_matching': True,
51 }, {
52 # unavailable via https://openload.co/f/Sxz5sADo82g/, different layout
53 # for title and ext
54 'url': 'https://openload.co/embed/Sxz5sADo82g/',
55 'only_matching': True,
56 }, {
57 'url': 'https://oload.tv/embed/KnG-kKZdcfY/',
58 'only_matching': True,
59 }]
60
61 @staticmethod
62 def _extract_urls(webpage):
63 return re.findall(
64 r'<iframe[^>]+src=["\']((?:https?://)?(?:openload\.(?:co|io)|oload\.tv)/embed/[a-zA-Z0-9-_]+)',
65 webpage)
66
67 def _real_extract(self, url):
68 video_id = self._match_id(url)
69 webpage = self._download_webpage('https://openload.co/embed/%s/' % video_id, video_id)
70
71 if 'File not found' in webpage or 'deleted by the owner' in webpage:
72 raise ExtractorError('File not found', expected=True)
73
74 ol_id = self._search_regex(
75 '<span[^>]+id="[^"]+"[^>]*>([0-9A-Za-z]+)</span>',
76 webpage, 'openload ID')
77
78 video_url_chars = []
79
80 first_char = ord(ol_id[0])
81 key = first_char - 55
82 maxKey = max(2, key)
83 key = min(maxKey, len(ol_id) - 38)
84 t = ol_id[key:key + 36]
85
86 hashMap = {}
87 v = ol_id.replace(t, '')
88 h = 0
89
90 while h < len(t):
91 f = t[h:h + 3]
92 i = int(f, 8)
93 hashMap[h / 3] = i
94 h += 3
95
96 h = 0
97 H = 0
98 while h < len(v):
99 B = ''
100 C = ''
101 if len(v) >= h + 2:
102 B = v[h:h + 2]
103 if len(v) >= h + 3:
104 C = v[h:h + 3]
105 i = int(B, 16)
106 h += 2
107 if H % 3 == 0:
108 i = int(C, 8)
109 h += 1
110 elif H % 2 == 0 and H != 0 and ord(v[H - 1]) < 60:
111 i = int(C, 10)
112 h += 1
113 index = H % 7
114
115 A = hashMap[index]
116 i ^= 213
117 i ^= A
118 video_url_chars.append(compat_chr(i))
119 H += 1
120
121 video_url = 'https://openload.co/stream/%s?mime=true'
122 video_url = video_url % (''.join(video_url_chars))
123
124 title = self._og_search_title(webpage, default=None) or self._search_regex(
125 r'<span[^>]+class=["\']title["\'][^>]*>([^<]+)', webpage,
126 'title', default=None) or self._html_search_meta(
127 'description', webpage, 'title', fatal=True)
128
129 entries = self._parse_html5_media_entries(url, webpage, video_id)
130 subtitles = entries[0]['subtitles'] if entries else None
131
132 info_dict = {
133 'id': video_id,
134 'title': title,
135 'thumbnail': self._og_search_thumbnail(webpage, default=None),
136 'url': video_url,
137 # Seems all videos have extensions in their titles
138 'ext': determine_ext(title, 'mp4'),
139 'subtitles': subtitles,
140 }
141 return info_dict
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/youtube_dl/extractor/openload.py b/youtube_dl/extractor/openload.py
--- a/youtube_dl/extractor/openload.py
+++ b/youtube_dl/extractor/openload.py
@@ -75,51 +75,38 @@
'<span[^>]+id="[^"]+"[^>]*>([0-9A-Za-z]+)</span>',
webpage, 'openload ID')
- video_url_chars = []
-
- first_char = ord(ol_id[0])
- key = first_char - 55
- maxKey = max(2, key)
- key = min(maxKey, len(ol_id) - 38)
- t = ol_id[key:key + 36]
-
- hashMap = {}
- v = ol_id.replace(t, '')
- h = 0
-
- while h < len(t):
- f = t[h:h + 3]
- i = int(f, 8)
- hashMap[h / 3] = i
- h += 3
-
- h = 0
- H = 0
- while h < len(v):
- B = ''
- C = ''
- if len(v) >= h + 2:
- B = v[h:h + 2]
- if len(v) >= h + 3:
- C = v[h:h + 3]
- i = int(B, 16)
- h += 2
- if H % 3 == 0:
- i = int(C, 8)
- h += 1
- elif H % 2 == 0 and H != 0 and ord(v[H - 1]) < 60:
- i = int(C, 10)
- h += 1
- index = H % 7
-
- A = hashMap[index]
- i ^= 213
- i ^= A
- video_url_chars.append(compat_chr(i))
- H += 1
+ decoded = ''
+ a = ol_id[0:24]
+ b = []
+ for i in range(0, len(a), 8):
+ b.append(int(a[i:i + 8] or '0', 16))
+ ol_id = ol_id[24:]
+ j = 0
+ k = 0
+ while j < len(ol_id):
+ c = 128
+ d = 0
+ e = 0
+ f = 0
+ _more = True
+ while _more:
+ if j + 1 >= len(ol_id):
+ c = 143
+ f = int(ol_id[j:j + 2] or '0', 16)
+ j += 2
+ d += (f & 127) << e
+ e += 7
+ _more = f >= c
+ g = d ^ b[k % 3]
+ for i in range(4):
+ char_dec = (g >> 8 * i) & (c + 127)
+ char = compat_chr(char_dec)
+ if char != '#':
+ decoded += char
+ k += 1
video_url = 'https://openload.co/stream/%s?mime=true'
- video_url = video_url % (''.join(video_url_chars))
+ video_url = video_url % decoded
title = self._og_search_title(webpage, default=None) or self._search_regex(
r'<span[^>]+class=["\']title["\'][^>]*>([^<]+)', webpage,
|
{"golden_diff": "diff --git a/youtube_dl/extractor/openload.py b/youtube_dl/extractor/openload.py\n--- a/youtube_dl/extractor/openload.py\n+++ b/youtube_dl/extractor/openload.py\n@@ -75,51 +75,38 @@\n '<span[^>]+id=\"[^\"]+\"[^>]*>([0-9A-Za-z]+)</span>',\n webpage, 'openload ID')\n \n- video_url_chars = []\n-\n- first_char = ord(ol_id[0])\n- key = first_char - 55\n- maxKey = max(2, key)\n- key = min(maxKey, len(ol_id) - 38)\n- t = ol_id[key:key + 36]\n-\n- hashMap = {}\n- v = ol_id.replace(t, '')\n- h = 0\n-\n- while h < len(t):\n- f = t[h:h + 3]\n- i = int(f, 8)\n- hashMap[h / 3] = i\n- h += 3\n-\n- h = 0\n- H = 0\n- while h < len(v):\n- B = ''\n- C = ''\n- if len(v) >= h + 2:\n- B = v[h:h + 2]\n- if len(v) >= h + 3:\n- C = v[h:h + 3]\n- i = int(B, 16)\n- h += 2\n- if H % 3 == 0:\n- i = int(C, 8)\n- h += 1\n- elif H % 2 == 0 and H != 0 and ord(v[H - 1]) < 60:\n- i = int(C, 10)\n- h += 1\n- index = H % 7\n-\n- A = hashMap[index]\n- i ^= 213\n- i ^= A\n- video_url_chars.append(compat_chr(i))\n- H += 1\n+ decoded = ''\n+ a = ol_id[0:24]\n+ b = []\n+ for i in range(0, len(a), 8):\n+ b.append(int(a[i:i + 8] or '0', 16))\n+ ol_id = ol_id[24:]\n+ j = 0\n+ k = 0\n+ while j < len(ol_id):\n+ c = 128\n+ d = 0\n+ e = 0\n+ f = 0\n+ _more = True\n+ while _more:\n+ if j + 1 >= len(ol_id):\n+ c = 143\n+ f = int(ol_id[j:j + 2] or '0', 16)\n+ j += 2\n+ d += (f & 127) << e\n+ e += 7\n+ _more = f >= c\n+ g = d ^ b[k % 3]\n+ for i in range(4):\n+ char_dec = (g >> 8 * i) & (c + 127)\n+ char = compat_chr(char_dec)\n+ if char != '#':\n+ decoded += char\n+ k += 1\n \n video_url = 'https://openload.co/stream/%s?mime=true'\n- video_url = video_url % (''.join(video_url_chars))\n+ video_url = video_url % decoded\n \n title = self._og_search_title(webpage, default=None) or self._search_regex(\n r'<span[^>]+class=[\"\\']title[\"\\'][^>]*>([^<]+)', webpage,\n", "issue": "openload.co extractor not working\n youtube-dl --get-url --verbose https://openload.co/embed/kUEfGclsU9o/\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: [u'--get-url', u'--verbose', u'https://openload.co/embed/kUEfGclsU9o/']\n[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8\n[debug] youtube-dl version 2016.08.22\n[debug] Python version 2.6.6 - Linux-2.6.32-642.1.1.el6.x86_64-x86_64-with-centos-6.8-Final\n[debug] exe versions: ffmpeg 0.6.5, ffprobe 0.6.5\n[debug] Proxy map: {}\nERROR: Unable to extract link image; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\nTraceback (most recent call last):\n File \"/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 691, in extract_info\n ie_result = ie.extract(url)\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py\", line 347, in extract\n return self._real_extract(url)\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/openload.py\", line 62, in _real_extract\n r'<img[^>]+id=\"linkimg\"[^>]+src=\"([^\"]+)\"', webpage, 'link image')\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py\", line 650, in _search_regex\n raise RegexNotFoundError('Unable to extract %s' % _name)\nRegexNotFoundError: Unable to extract link image; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..compat import compat_chr\nfrom ..utils import (\n determine_ext,\n ExtractorError,\n)\n\n\nclass OpenloadIE(InfoExtractor):\n _VALID_URL = r'https?://(?:openload\\.(?:co|io)|oload\\.tv)/(?:f|embed)/(?P<id>[a-zA-Z0-9-_]+)'\n\n _TESTS = [{\n 'url': 'https://openload.co/f/kUEfGclsU9o',\n 'md5': 'bf1c059b004ebc7a256f89408e65c36e',\n 'info_dict': {\n 'id': 'kUEfGclsU9o',\n 'ext': 'mp4',\n 'title': 'skyrim_no-audio_1080.mp4',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n },\n }, {\n 'url': 'https://openload.co/embed/rjC09fkPLYs',\n 'info_dict': {\n 'id': 'rjC09fkPLYs',\n 'ext': 'mp4',\n 'title': 'movie.mp4',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n 'subtitles': {\n 'en': [{\n 'ext': 'vtt',\n }],\n },\n },\n 'params': {\n 'skip_download': True, # test subtitles only\n },\n }, {\n 'url': 'https://openload.co/embed/kUEfGclsU9o/skyrim_no-audio_1080.mp4',\n 'only_matching': True,\n }, {\n 'url': 'https://openload.io/f/ZAn6oz-VZGE/',\n 'only_matching': True,\n }, {\n 'url': 'https://openload.co/f/_-ztPaZtMhM/',\n 'only_matching': True,\n }, {\n # unavailable via https://openload.co/f/Sxz5sADo82g/, different layout\n # for title and ext\n 'url': 'https://openload.co/embed/Sxz5sADo82g/',\n 'only_matching': True,\n }, {\n 'url': 'https://oload.tv/embed/KnG-kKZdcfY/',\n 'only_matching': True,\n }]\n\n @staticmethod\n def _extract_urls(webpage):\n return re.findall(\n r'<iframe[^>]+src=[\"\\']((?:https?://)?(?:openload\\.(?:co|io)|oload\\.tv)/embed/[a-zA-Z0-9-_]+)',\n webpage)\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n webpage = self._download_webpage('https://openload.co/embed/%s/' % video_id, video_id)\n\n if 'File not found' in webpage or 'deleted by the owner' in webpage:\n raise ExtractorError('File not found', expected=True)\n\n ol_id = self._search_regex(\n '<span[^>]+id=\"[^\"]+\"[^>]*>([0-9A-Za-z]+)</span>',\n webpage, 'openload ID')\n\n video_url_chars = []\n\n first_char = ord(ol_id[0])\n key = first_char - 55\n maxKey = max(2, key)\n key = min(maxKey, len(ol_id) - 38)\n t = ol_id[key:key + 36]\n\n hashMap = {}\n v = ol_id.replace(t, '')\n h = 0\n\n while h < len(t):\n f = t[h:h + 3]\n i = int(f, 8)\n hashMap[h / 3] = i\n h += 3\n\n h = 0\n H = 0\n while h < len(v):\n B = ''\n C = ''\n if len(v) >= h + 2:\n B = v[h:h + 2]\n if len(v) >= h + 3:\n C = v[h:h + 3]\n i = int(B, 16)\n h += 2\n if H % 3 == 0:\n i = int(C, 8)\n h += 1\n elif H % 2 == 0 and H != 0 and ord(v[H - 1]) < 60:\n i = int(C, 10)\n h += 1\n index = H % 7\n\n A = hashMap[index]\n i ^= 213\n i ^= A\n video_url_chars.append(compat_chr(i))\n H += 1\n\n video_url = 'https://openload.co/stream/%s?mime=true'\n video_url = video_url % (''.join(video_url_chars))\n\n title = self._og_search_title(webpage, default=None) or self._search_regex(\n r'<span[^>]+class=[\"\\']title[\"\\'][^>]*>([^<]+)', webpage,\n 'title', default=None) or self._html_search_meta(\n 'description', webpage, 'title', fatal=True)\n\n entries = self._parse_html5_media_entries(url, webpage, video_id)\n subtitles = entries[0]['subtitles'] if entries else None\n\n info_dict = {\n 'id': video_id,\n 'title': title,\n 'thumbnail': self._og_search_thumbnail(webpage, default=None),\n 'url': video_url,\n # Seems all videos have extensions in their titles\n 'ext': determine_ext(title, 'mp4'),\n 'subtitles': subtitles,\n 
}\n return info_dict\n", "path": "youtube_dl/extractor/openload.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .common import InfoExtractor\nfrom ..compat import compat_chr\nfrom ..utils import (\n determine_ext,\n ExtractorError,\n)\n\n\nclass OpenloadIE(InfoExtractor):\n _VALID_URL = r'https?://(?:openload\\.(?:co|io)|oload\\.tv)/(?:f|embed)/(?P<id>[a-zA-Z0-9-_]+)'\n\n _TESTS = [{\n 'url': 'https://openload.co/f/kUEfGclsU9o',\n 'md5': 'bf1c059b004ebc7a256f89408e65c36e',\n 'info_dict': {\n 'id': 'kUEfGclsU9o',\n 'ext': 'mp4',\n 'title': 'skyrim_no-audio_1080.mp4',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n },\n }, {\n 'url': 'https://openload.co/embed/rjC09fkPLYs',\n 'info_dict': {\n 'id': 'rjC09fkPLYs',\n 'ext': 'mp4',\n 'title': 'movie.mp4',\n 'thumbnail': r're:^https?://.*\\.jpg$',\n 'subtitles': {\n 'en': [{\n 'ext': 'vtt',\n }],\n },\n },\n 'params': {\n 'skip_download': True, # test subtitles only\n },\n }, {\n 'url': 'https://openload.co/embed/kUEfGclsU9o/skyrim_no-audio_1080.mp4',\n 'only_matching': True,\n }, {\n 'url': 'https://openload.io/f/ZAn6oz-VZGE/',\n 'only_matching': True,\n }, {\n 'url': 'https://openload.co/f/_-ztPaZtMhM/',\n 'only_matching': True,\n }, {\n # unavailable via https://openload.co/f/Sxz5sADo82g/, different layout\n # for title and ext\n 'url': 'https://openload.co/embed/Sxz5sADo82g/',\n 'only_matching': True,\n }, {\n 'url': 'https://oload.tv/embed/KnG-kKZdcfY/',\n 'only_matching': True,\n }]\n\n @staticmethod\n def _extract_urls(webpage):\n return re.findall(\n r'<iframe[^>]+src=[\"\\']((?:https?://)?(?:openload\\.(?:co|io)|oload\\.tv)/embed/[a-zA-Z0-9-_]+)',\n webpage)\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n webpage = self._download_webpage('https://openload.co/embed/%s/' % video_id, video_id)\n\n if 'File not found' in webpage or 'deleted by the owner' in webpage:\n raise ExtractorError('File not found', expected=True)\n\n ol_id = self._search_regex(\n '<span[^>]+id=\"[^\"]+\"[^>]*>([0-9A-Za-z]+)</span>',\n webpage, 'openload ID')\n\n decoded = ''\n a = ol_id[0:24]\n b = []\n for i in range(0, len(a), 8):\n b.append(int(a[i:i + 8] or '0', 16))\n ol_id = ol_id[24:]\n j = 0\n k = 0\n while j < len(ol_id):\n c = 128\n d = 0\n e = 0\n f = 0\n _more = True\n while _more:\n if j + 1 >= len(ol_id):\n c = 143\n f = int(ol_id[j:j + 2] or '0', 16)\n j += 2\n d += (f & 127) << e\n e += 7\n _more = f >= c\n g = d ^ b[k % 3]\n for i in range(4):\n char_dec = (g >> 8 * i) & (c + 127)\n char = compat_chr(char_dec)\n if char != '#':\n decoded += char\n k += 1\n\n video_url = 'https://openload.co/stream/%s?mime=true'\n video_url = video_url % decoded\n\n title = self._og_search_title(webpage, default=None) or self._search_regex(\n r'<span[^>]+class=[\"\\']title[\"\\'][^>]*>([^<]+)', webpage,\n 'title', default=None) or self._html_search_meta(\n 'description', webpage, 'title', fatal=True)\n\n entries = self._parse_html5_media_entries(url, webpage, video_id)\n subtitles = entries[0]['subtitles'] if entries else None\n\n info_dict = {\n 'id': video_id,\n 'title': title,\n 'thumbnail': self._og_search_thumbnail(webpage, default=None),\n 'url': video_url,\n # Seems all videos have extensions in their titles\n 'ext': determine_ext(title, 'mp4'),\n 'subtitles': subtitles,\n }\n return info_dict\n", "path": "youtube_dl/extractor/openload.py"}]}
| 2,365 | 838 |
gh_patches_debug_12089 | rasdani/github-patches | git_diff | ipython__ipython-12980 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
%rerun prints a misleading error when 0 lines are requested
## Steps to reproduce
* Execute `%rerun -l 0`
## Expected result
Nothing rerun, nothing printed
## Actual result
Nothing rerun, but the following is printed
```
No lines in history match specification
```
## Comment
This error message is relevant to another part of `%rerun`'s functionality, namely, rerunning based on search with `-g <query>`, but is confusing here.
## System info
Manjaro Linux, Python 3.9.4, IPython 7.22.0 (reproduces on IPython 8.0.0.dev)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `IPython/core/magics/history.py`
Content:
```
1 """Implementation of magic functions related to History.
2 """
3 #-----------------------------------------------------------------------------
4 # Copyright (c) 2012, IPython Development Team.
5 #
6 # Distributed under the terms of the Modified BSD License.
7 #
8 # The full license is in the file COPYING.txt, distributed with this software.
9 #-----------------------------------------------------------------------------
10
11 #-----------------------------------------------------------------------------
12 # Imports
13 #-----------------------------------------------------------------------------
14
15 # Stdlib
16 import os
17 import sys
18 from io import open as io_open
19 import fnmatch
20
21 # Our own packages
22 from IPython.core.error import StdinNotImplementedError
23 from IPython.core.magic import Magics, magics_class, line_magic
24 from IPython.core.magic_arguments import (argument, magic_arguments,
25 parse_argstring)
26 from IPython.testing.skipdoctest import skip_doctest
27 from IPython.utils import io
28
29 #-----------------------------------------------------------------------------
30 # Magics class implementation
31 #-----------------------------------------------------------------------------
32
33
34 _unspecified = object()
35
36
37 @magics_class
38 class HistoryMagics(Magics):
39
40 @magic_arguments()
41 @argument(
42 '-n', dest='print_nums', action='store_true', default=False,
43 help="""
44 print line numbers for each input.
45 This feature is only available if numbered prompts are in use.
46 """)
47 @argument(
48 '-o', dest='get_output', action='store_true', default=False,
49 help="also print outputs for each input.")
50 @argument(
51 '-p', dest='pyprompts', action='store_true', default=False,
52 help="""
53 print classic '>>>' python prompts before each input.
54 This is useful for making documentation, and in conjunction
55 with -o, for producing doctest-ready output.
56 """)
57 @argument(
58 '-t', dest='raw', action='store_false', default=True,
59 help="""
60 print the 'translated' history, as IPython understands it.
61 IPython filters your input and converts it all into valid Python
62 source before executing it (things like magics or aliases are turned
63 into function calls, for example). With this option, you'll see the
64 native history instead of the user-entered version: '%%cd /' will be
65 seen as 'get_ipython().run_line_magic("cd", "/")' instead of '%%cd /'.
66 """)
67 @argument(
68 '-f', dest='filename',
69 help="""
70 FILENAME: instead of printing the output to the screen, redirect
71 it to the given file. The file is always overwritten, though *when
72 it can*, IPython asks for confirmation first. In particular, running
73 the command 'history -f FILENAME' from the IPython Notebook
74 interface will replace FILENAME even if it already exists *without*
75 confirmation.
76 """)
77 @argument(
78 '-g', dest='pattern', nargs='*', default=None,
79 help="""
80 treat the arg as a glob pattern to search for in (full) history.
81 This includes the saved history (almost all commands ever written).
82 The pattern may contain '?' to match one unknown character and '*'
83 to match any number of unknown characters. Use '%%hist -g' to show
84 full saved history (may be very long).
85 """)
86 @argument(
87 '-l', dest='limit', type=int, nargs='?', default=_unspecified,
88 help="""
89 get the last n lines from all sessions. Specify n as a single
90 arg, or the default is the last 10 lines.
91 """)
92 @argument(
93 '-u', dest='unique', action='store_true',
94 help="""
95 when searching history using `-g`, show only unique history.
96 """)
97 @argument('range', nargs='*')
98 @skip_doctest
99 @line_magic
100 def history(self, parameter_s = ''):
101 """Print input history (_i<n> variables), with most recent last.
102
103 By default, input history is printed without line numbers so it can be
104 directly pasted into an editor. Use -n to show them.
105
106 By default, all input history from the current session is displayed.
107 Ranges of history can be indicated using the syntax:
108
109 ``4``
110 Line 4, current session
111 ``4-6``
112 Lines 4-6, current session
113 ``243/1-5``
114 Lines 1-5, session 243
115 ``~2/7``
116 Line 7, session 2 before current
117 ``~8/1-~6/5``
118 From the first line of 8 sessions ago, to the fifth line of 6
119 sessions ago.
120
121 Multiple ranges can be entered, separated by spaces
122
123 The same syntax is used by %macro, %save, %edit, %rerun
124
125 Examples
126 --------
127 ::
128
129 In [6]: %history -n 4-6
130 4:a = 12
131 5:print a**2
132 6:%history -n 4-6
133
134 """
135
136 args = parse_argstring(self.history, parameter_s)
137
138 # For brevity
139 history_manager = self.shell.history_manager
140
141 def _format_lineno(session, line):
142 """Helper function to format line numbers properly."""
143 if session in (0, history_manager.session_number):
144 return str(line)
145 return "%s/%s" % (session, line)
146
147 # Check if output to specific file was requested.
148 outfname = args.filename
149 if not outfname:
150 outfile = sys.stdout # default
151 # We don't want to close stdout at the end!
152 close_at_end = False
153 else:
154 if os.path.exists(outfname):
155 try:
156 ans = io.ask_yes_no("File %r exists. Overwrite?" % outfname)
157 except StdinNotImplementedError:
158 ans = True
159 if not ans:
160 print('Aborting.')
161 return
162 print("Overwriting file.")
163 outfile = io_open(outfname, 'w', encoding='utf-8')
164 close_at_end = True
165
166 print_nums = args.print_nums
167 get_output = args.get_output
168 pyprompts = args.pyprompts
169 raw = args.raw
170
171 pattern = None
172 limit = None if args.limit is _unspecified else args.limit
173
174 range_pattern = False
175 if args.pattern is not None and not args.range:
176 if args.pattern:
177 pattern = "*" + " ".join(args.pattern) + "*"
178 else:
179 pattern = "*"
180 hist = history_manager.search(pattern, raw=raw, output=get_output,
181 n=limit, unique=args.unique)
182 print_nums = True
183 elif args.limit is not _unspecified:
184 n = 10 if limit is None else limit
185 hist = history_manager.get_tail(n, raw=raw, output=get_output)
186 else:
187 if args.range: # Get history by ranges
188 if args.pattern:
189 range_pattern = "*" + " ".join(args.pattern) + "*"
190 print_nums = True
191 hist = history_manager.get_range_by_str(" ".join(args.range),
192 raw, get_output)
193 else: # Just get history for the current session
194 hist = history_manager.get_range(raw=raw, output=get_output)
195
196 # We could be displaying the entire history, so let's not try to pull
197 # it into a list in memory. Anything that needs more space will just
198 # misalign.
199 width = 4
200
201 for session, lineno, inline in hist:
202 # Print user history with tabs expanded to 4 spaces. The GUI
203 # clients use hard tabs for easier usability in auto-indented code,
204 # but we want to produce PEP-8 compliant history for safe pasting
205 # into an editor.
206 if get_output:
207 inline, output = inline
208 if range_pattern:
209 if not fnmatch.fnmatch(inline, range_pattern):
210 continue
211 inline = inline.expandtabs(4).rstrip()
212
213 multiline = "\n" in inline
214 line_sep = '\n' if multiline else ' '
215 if print_nums:
216 print(u'%s:%s' % (_format_lineno(session, lineno).rjust(width),
217 line_sep), file=outfile, end=u'')
218 if pyprompts:
219 print(u">>> ", end=u"", file=outfile)
220 if multiline:
221 inline = "\n... ".join(inline.splitlines()) + "\n..."
222 print(inline, file=outfile)
223 if get_output and output:
224 print(output, file=outfile)
225
226 if close_at_end:
227 outfile.close()
228
229 @line_magic
230 def recall(self, arg):
231 r"""Repeat a command, or get command to input line for editing.
232
233 %recall and %rep are equivalent.
234
235 - %recall (no arguments):
236
237 Place a string version of last computation result (stored in the
238 special '_' variable) to the next input prompt. Allows you to create
239 elaborate command lines without using copy-paste::
240
241 In[1]: l = ["hei", "vaan"]
242 In[2]: "".join(l)
243 Out[2]: heivaan
244 In[3]: %recall
245 In[4]: heivaan_ <== cursor blinking
246
247 %recall 45
248
249 Place history line 45 on the next input prompt. Use %hist to find
250 out the number.
251
252 %recall 1-4
253
254 Combine the specified lines into one cell, and place it on the next
255 input prompt. See %history for the slice syntax.
256
257 %recall foo+bar
258
259 If foo+bar can be evaluated in the user namespace, the result is
260 placed at the next input prompt. Otherwise, the history is searched
261 for lines which contain that substring, and the most recent one is
262 placed at the next input prompt.
263 """
264 if not arg: # Last output
265 self.shell.set_next_input(str(self.shell.user_ns["_"]))
266 return
267 # Get history range
268 histlines = self.shell.history_manager.get_range_by_str(arg)
269 cmd = "\n".join(x[2] for x in histlines)
270 if cmd:
271 self.shell.set_next_input(cmd.rstrip())
272 return
273
274 try: # Variable in user namespace
275 cmd = str(eval(arg, self.shell.user_ns))
276 except Exception: # Search for term in history
277 histlines = self.shell.history_manager.search("*"+arg+"*")
278 for h in reversed([x[2] for x in histlines]):
279 if 'recall' in h or 'rep' in h:
280 continue
281 self.shell.set_next_input(h.rstrip())
282 return
283 else:
284 self.shell.set_next_input(cmd.rstrip())
285 return
286 print("Couldn't evaluate or find in history:", arg)
287
288 @line_magic
289 def rerun(self, parameter_s=''):
290 """Re-run previous input
291
292 By default, you can specify ranges of input history to be repeated
293 (as with %history). With no arguments, it will repeat the last line.
294
295 Options:
296
297 -l <n> : Repeat the last n lines of input, not including the
298 current command.
299
300 -g foo : Repeat the most recent line which contains foo
301 """
302 opts, args = self.parse_options(parameter_s, 'l:g:', mode='string')
303 if "l" in opts: # Last n lines
304 n = int(opts['l'])
305 hist = self.shell.history_manager.get_tail(n)
306 elif "g" in opts: # Search
307 p = "*"+opts['g']+"*"
308 hist = list(self.shell.history_manager.search(p))
309 for l in reversed(hist):
310 if "rerun" not in l[2]:
311 hist = [l] # The last match which isn't a %rerun
312 break
313 else:
314 hist = [] # No matches except %rerun
315 elif args: # Specify history ranges
316 hist = self.shell.history_manager.get_range_by_str(args)
317 else: # Last line
318 hist = self.shell.history_manager.get_tail(1)
319 hist = [x[2] for x in hist]
320 if not hist:
321 print("No lines in history match specification")
322 return
323 histlines = "\n".join(hist)
324 print("=== Executing: ===")
325 print(histlines)
326 print("=== Output: ===")
327 self.shell.run_cell("\n".join(hist), store_history=False)
328
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/IPython/core/magics/history.py b/IPython/core/magics/history.py
--- a/IPython/core/magics/history.py
+++ b/IPython/core/magics/history.py
@@ -301,6 +301,14 @@
opts, args = self.parse_options(parameter_s, 'l:g:', mode='string')
if "l" in opts: # Last n lines
n = int(opts['l'])
+
+ if n == 0:
+ print("Requested 0 last lines - nothing to run")
+ return
+ elif n < 0:
+ print("Number of lines to rerun cannot be negative")
+ return
+
hist = self.shell.history_manager.get_tail(n)
elif "g" in opts: # Search
p = "*"+opts['g']+"*"
|
{"golden_diff": "diff --git a/IPython/core/magics/history.py b/IPython/core/magics/history.py\n--- a/IPython/core/magics/history.py\n+++ b/IPython/core/magics/history.py\n@@ -301,6 +301,14 @@\n opts, args = self.parse_options(parameter_s, 'l:g:', mode='string')\n if \"l\" in opts: # Last n lines\n n = int(opts['l'])\n+\n+ if n == 0:\n+ print(\"Requested 0 last lines - nothing to run\")\n+ return\n+ elif n < 0:\n+ print(\"Number of lines to rerun cannot be negative\")\n+ return\n+\n hist = self.shell.history_manager.get_tail(n)\n elif \"g\" in opts: # Search\n p = \"*\"+opts['g']+\"*\"\n", "issue": "%rerun prints a misleading error when 0 lines are requested\n## Steps to reproduce\r\n\r\n* Execute `%rerun -l 0`\r\n\r\n## Expected result\r\nNothing rerun, nothing printed \r\n\r\n## Actual result\r\nNothing rerun, but the following is printed\r\n```\r\nNo lines in history match specification\r\n```\r\n\r\n## Comment\r\nThis error message is relevant to another part of `%rerun`'s functionality, namely, rerunning based on search with `-g <query>`, but is confusing here.\r\n\r\n## System info\r\nManjaro Linux, Python 3.9.4, IPython 7.22.0 (reproduces on IPython 8.0.0.dev)\n", "before_files": [{"content": "\"\"\"Implementation of magic functions related to History.\n\"\"\"\n#-----------------------------------------------------------------------------\n# Copyright (c) 2012, IPython Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Stdlib\nimport os\nimport sys\nfrom io import open as io_open\nimport fnmatch\n\n# Our own packages\nfrom IPython.core.error import StdinNotImplementedError\nfrom IPython.core.magic import Magics, magics_class, line_magic\nfrom IPython.core.magic_arguments import (argument, magic_arguments,\n parse_argstring)\nfrom IPython.testing.skipdoctest import skip_doctest\nfrom IPython.utils import io\n\n#-----------------------------------------------------------------------------\n# Magics class implementation\n#-----------------------------------------------------------------------------\n\n\n_unspecified = object()\n\n\n@magics_class\nclass HistoryMagics(Magics):\n\n @magic_arguments()\n @argument(\n '-n', dest='print_nums', action='store_true', default=False,\n help=\"\"\"\n print line numbers for each input.\n This feature is only available if numbered prompts are in use.\n \"\"\")\n @argument(\n '-o', dest='get_output', action='store_true', default=False,\n help=\"also print outputs for each input.\")\n @argument(\n '-p', dest='pyprompts', action='store_true', default=False,\n help=\"\"\"\n print classic '>>>' python prompts before each input.\n This is useful for making documentation, and in conjunction\n with -o, for producing doctest-ready output.\n \"\"\")\n @argument(\n '-t', dest='raw', action='store_false', default=True,\n help=\"\"\"\n print the 'translated' history, as IPython understands it.\n IPython filters your input and converts it all into valid Python\n source before executing it (things like magics or aliases are turned\n into function calls, for example). 
With this option, you'll see the\n native history instead of the user-entered version: '%%cd /' will be\n seen as 'get_ipython().run_line_magic(\"cd\", \"/\")' instead of '%%cd /'.\n \"\"\")\n @argument(\n '-f', dest='filename',\n help=\"\"\"\n FILENAME: instead of printing the output to the screen, redirect\n it to the given file. The file is always overwritten, though *when\n it can*, IPython asks for confirmation first. In particular, running\n the command 'history -f FILENAME' from the IPython Notebook\n interface will replace FILENAME even if it already exists *without*\n confirmation.\n \"\"\")\n @argument(\n '-g', dest='pattern', nargs='*', default=None,\n help=\"\"\"\n treat the arg as a glob pattern to search for in (full) history.\n This includes the saved history (almost all commands ever written).\n The pattern may contain '?' to match one unknown character and '*'\n to match any number of unknown characters. Use '%%hist -g' to show\n full saved history (may be very long).\n \"\"\")\n @argument(\n '-l', dest='limit', type=int, nargs='?', default=_unspecified,\n help=\"\"\"\n get the last n lines from all sessions. Specify n as a single\n arg, or the default is the last 10 lines.\n \"\"\")\n @argument(\n '-u', dest='unique', action='store_true',\n help=\"\"\"\n when searching history using `-g`, show only unique history.\n \"\"\")\n @argument('range', nargs='*')\n @skip_doctest\n @line_magic\n def history(self, parameter_s = ''):\n \"\"\"Print input history (_i<n> variables), with most recent last.\n\n By default, input history is printed without line numbers so it can be\n directly pasted into an editor. Use -n to show them.\n\n By default, all input history from the current session is displayed.\n Ranges of history can be indicated using the syntax:\n \n ``4``\n Line 4, current session\n ``4-6``\n Lines 4-6, current session\n ``243/1-5``\n Lines 1-5, session 243\n ``~2/7``\n Line 7, session 2 before current\n ``~8/1-~6/5``\n From the first line of 8 sessions ago, to the fifth line of 6\n sessions ago.\n \n Multiple ranges can be entered, separated by spaces\n\n The same syntax is used by %macro, %save, %edit, %rerun\n\n Examples\n --------\n ::\n\n In [6]: %history -n 4-6\n 4:a = 12\n 5:print a**2\n 6:%history -n 4-6\n\n \"\"\"\n\n args = parse_argstring(self.history, parameter_s)\n\n # For brevity\n history_manager = self.shell.history_manager\n\n def _format_lineno(session, line):\n \"\"\"Helper function to format line numbers properly.\"\"\"\n if session in (0, history_manager.session_number):\n return str(line)\n return \"%s/%s\" % (session, line)\n\n # Check if output to specific file was requested.\n outfname = args.filename\n if not outfname:\n outfile = sys.stdout # default\n # We don't want to close stdout at the end!\n close_at_end = False\n else:\n if os.path.exists(outfname):\n try:\n ans = io.ask_yes_no(\"File %r exists. 
Overwrite?\" % outfname)\n except StdinNotImplementedError:\n ans = True\n if not ans:\n print('Aborting.')\n return\n print(\"Overwriting file.\")\n outfile = io_open(outfname, 'w', encoding='utf-8')\n close_at_end = True\n\n print_nums = args.print_nums\n get_output = args.get_output\n pyprompts = args.pyprompts\n raw = args.raw\n\n pattern = None\n limit = None if args.limit is _unspecified else args.limit\n\n range_pattern = False\n if args.pattern is not None and not args.range:\n if args.pattern:\n pattern = \"*\" + \" \".join(args.pattern) + \"*\"\n else:\n pattern = \"*\"\n hist = history_manager.search(pattern, raw=raw, output=get_output,\n n=limit, unique=args.unique)\n print_nums = True\n elif args.limit is not _unspecified:\n n = 10 if limit is None else limit\n hist = history_manager.get_tail(n, raw=raw, output=get_output)\n else:\n if args.range: # Get history by ranges\n if args.pattern:\n range_pattern = \"*\" + \" \".join(args.pattern) + \"*\"\n print_nums = True\n hist = history_manager.get_range_by_str(\" \".join(args.range),\n raw, get_output)\n else: # Just get history for the current session\n hist = history_manager.get_range(raw=raw, output=get_output)\n\n # We could be displaying the entire history, so let's not try to pull\n # it into a list in memory. Anything that needs more space will just\n # misalign.\n width = 4\n\n for session, lineno, inline in hist:\n # Print user history with tabs expanded to 4 spaces. The GUI\n # clients use hard tabs for easier usability in auto-indented code,\n # but we want to produce PEP-8 compliant history for safe pasting\n # into an editor.\n if get_output:\n inline, output = inline\n if range_pattern:\n if not fnmatch.fnmatch(inline, range_pattern):\n continue\n inline = inline.expandtabs(4).rstrip()\n\n multiline = \"\\n\" in inline\n line_sep = '\\n' if multiline else ' '\n if print_nums:\n print(u'%s:%s' % (_format_lineno(session, lineno).rjust(width),\n line_sep), file=outfile, end=u'')\n if pyprompts:\n print(u\">>> \", end=u\"\", file=outfile)\n if multiline:\n inline = \"\\n... \".join(inline.splitlines()) + \"\\n...\"\n print(inline, file=outfile)\n if get_output and output:\n print(output, file=outfile)\n\n if close_at_end:\n outfile.close()\n\n @line_magic\n def recall(self, arg):\n r\"\"\"Repeat a command, or get command to input line for editing.\n\n %recall and %rep are equivalent.\n\n - %recall (no arguments):\n\n Place a string version of last computation result (stored in the\n special '_' variable) to the next input prompt. Allows you to create\n elaborate command lines without using copy-paste::\n\n In[1]: l = [\"hei\", \"vaan\"]\n In[2]: \"\".join(l)\n Out[2]: heivaan\n In[3]: %recall\n In[4]: heivaan_ <== cursor blinking\n\n %recall 45\n\n Place history line 45 on the next input prompt. Use %hist to find\n out the number.\n\n %recall 1-4\n\n Combine the specified lines into one cell, and place it on the next\n input prompt. See %history for the slice syntax.\n\n %recall foo+bar\n\n If foo+bar can be evaluated in the user namespace, the result is\n placed at the next input prompt. 
Otherwise, the history is searched\n for lines which contain that substring, and the most recent one is\n placed at the next input prompt.\n \"\"\"\n if not arg: # Last output\n self.shell.set_next_input(str(self.shell.user_ns[\"_\"]))\n return\n # Get history range\n histlines = self.shell.history_manager.get_range_by_str(arg)\n cmd = \"\\n\".join(x[2] for x in histlines)\n if cmd:\n self.shell.set_next_input(cmd.rstrip())\n return\n\n try: # Variable in user namespace\n cmd = str(eval(arg, self.shell.user_ns))\n except Exception: # Search for term in history\n histlines = self.shell.history_manager.search(\"*\"+arg+\"*\")\n for h in reversed([x[2] for x in histlines]):\n if 'recall' in h or 'rep' in h:\n continue\n self.shell.set_next_input(h.rstrip())\n return\n else:\n self.shell.set_next_input(cmd.rstrip())\n return\n print(\"Couldn't evaluate or find in history:\", arg)\n\n @line_magic\n def rerun(self, parameter_s=''):\n \"\"\"Re-run previous input\n\n By default, you can specify ranges of input history to be repeated\n (as with %history). With no arguments, it will repeat the last line.\n\n Options:\n\n -l <n> : Repeat the last n lines of input, not including the\n current command.\n\n -g foo : Repeat the most recent line which contains foo\n \"\"\"\n opts, args = self.parse_options(parameter_s, 'l:g:', mode='string')\n if \"l\" in opts: # Last n lines\n n = int(opts['l'])\n hist = self.shell.history_manager.get_tail(n)\n elif \"g\" in opts: # Search\n p = \"*\"+opts['g']+\"*\"\n hist = list(self.shell.history_manager.search(p))\n for l in reversed(hist):\n if \"rerun\" not in l[2]:\n hist = [l] # The last match which isn't a %rerun\n break\n else:\n hist = [] # No matches except %rerun\n elif args: # Specify history ranges\n hist = self.shell.history_manager.get_range_by_str(args)\n else: # Last line\n hist = self.shell.history_manager.get_tail(1)\n hist = [x[2] for x in hist]\n if not hist:\n print(\"No lines in history match specification\")\n return\n histlines = \"\\n\".join(hist)\n print(\"=== Executing: ===\")\n print(histlines)\n print(\"=== Output: ===\")\n self.shell.run_cell(\"\\n\".join(hist), store_history=False)\n", "path": "IPython/core/magics/history.py"}], "after_files": [{"content": "\"\"\"Implementation of magic functions related to History.\n\"\"\"\n#-----------------------------------------------------------------------------\n# Copyright (c) 2012, IPython Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\n# Stdlib\nimport os\nimport sys\nfrom io import open as io_open\nimport fnmatch\n\n# Our own packages\nfrom IPython.core.error import StdinNotImplementedError\nfrom IPython.core.magic import Magics, magics_class, line_magic\nfrom IPython.core.magic_arguments import (argument, magic_arguments,\n parse_argstring)\nfrom IPython.testing.skipdoctest import skip_doctest\nfrom IPython.utils import io\n\n#-----------------------------------------------------------------------------\n# Magics class implementation\n#-----------------------------------------------------------------------------\n\n\n_unspecified = object()\n\n\n@magics_class\nclass HistoryMagics(Magics):\n\n @magic_arguments()\n @argument(\n 
'-n', dest='print_nums', action='store_true', default=False,\n help=\"\"\"\n print line numbers for each input.\n This feature is only available if numbered prompts are in use.\n \"\"\")\n @argument(\n '-o', dest='get_output', action='store_true', default=False,\n help=\"also print outputs for each input.\")\n @argument(\n '-p', dest='pyprompts', action='store_true', default=False,\n help=\"\"\"\n print classic '>>>' python prompts before each input.\n This is useful for making documentation, and in conjunction\n with -o, for producing doctest-ready output.\n \"\"\")\n @argument(\n '-t', dest='raw', action='store_false', default=True,\n help=\"\"\"\n print the 'translated' history, as IPython understands it.\n IPython filters your input and converts it all into valid Python\n source before executing it (things like magics or aliases are turned\n into function calls, for example). With this option, you'll see the\n native history instead of the user-entered version: '%%cd /' will be\n seen as 'get_ipython().run_line_magic(\"cd\", \"/\")' instead of '%%cd /'.\n \"\"\")\n @argument(\n '-f', dest='filename',\n help=\"\"\"\n FILENAME: instead of printing the output to the screen, redirect\n it to the given file. The file is always overwritten, though *when\n it can*, IPython asks for confirmation first. In particular, running\n the command 'history -f FILENAME' from the IPython Notebook\n interface will replace FILENAME even if it already exists *without*\n confirmation.\n \"\"\")\n @argument(\n '-g', dest='pattern', nargs='*', default=None,\n help=\"\"\"\n treat the arg as a glob pattern to search for in (full) history.\n This includes the saved history (almost all commands ever written).\n The pattern may contain '?' to match one unknown character and '*'\n to match any number of unknown characters. Use '%%hist -g' to show\n full saved history (may be very long).\n \"\"\")\n @argument(\n '-l', dest='limit', type=int, nargs='?', default=_unspecified,\n help=\"\"\"\n get the last n lines from all sessions. Specify n as a single\n arg, or the default is the last 10 lines.\n \"\"\")\n @argument(\n '-u', dest='unique', action='store_true',\n help=\"\"\"\n when searching history using `-g`, show only unique history.\n \"\"\")\n @argument('range', nargs='*')\n @skip_doctest\n @line_magic\n def history(self, parameter_s = ''):\n \"\"\"Print input history (_i<n> variables), with most recent last.\n\n By default, input history is printed without line numbers so it can be\n directly pasted into an editor. 
Use -n to show them.\n\n By default, all input history from the current session is displayed.\n Ranges of history can be indicated using the syntax:\n \n ``4``\n Line 4, current session\n ``4-6``\n Lines 4-6, current session\n ``243/1-5``\n Lines 1-5, session 243\n ``~2/7``\n Line 7, session 2 before current\n ``~8/1-~6/5``\n From the first line of 8 sessions ago, to the fifth line of 6\n sessions ago.\n \n Multiple ranges can be entered, separated by spaces\n\n The same syntax is used by %macro, %save, %edit, %rerun\n\n Examples\n --------\n ::\n\n In [6]: %history -n 4-6\n 4:a = 12\n 5:print a**2\n 6:%history -n 4-6\n\n \"\"\"\n\n args = parse_argstring(self.history, parameter_s)\n\n # For brevity\n history_manager = self.shell.history_manager\n\n def _format_lineno(session, line):\n \"\"\"Helper function to format line numbers properly.\"\"\"\n if session in (0, history_manager.session_number):\n return str(line)\n return \"%s/%s\" % (session, line)\n\n # Check if output to specific file was requested.\n outfname = args.filename\n if not outfname:\n outfile = sys.stdout # default\n # We don't want to close stdout at the end!\n close_at_end = False\n else:\n if os.path.exists(outfname):\n try:\n ans = io.ask_yes_no(\"File %r exists. Overwrite?\" % outfname)\n except StdinNotImplementedError:\n ans = True\n if not ans:\n print('Aborting.')\n return\n print(\"Overwriting file.\")\n outfile = io_open(outfname, 'w', encoding='utf-8')\n close_at_end = True\n\n print_nums = args.print_nums\n get_output = args.get_output\n pyprompts = args.pyprompts\n raw = args.raw\n\n pattern = None\n limit = None if args.limit is _unspecified else args.limit\n\n range_pattern = False\n if args.pattern is not None and not args.range:\n if args.pattern:\n pattern = \"*\" + \" \".join(args.pattern) + \"*\"\n else:\n pattern = \"*\"\n hist = history_manager.search(pattern, raw=raw, output=get_output,\n n=limit, unique=args.unique)\n print_nums = True\n elif args.limit is not _unspecified:\n n = 10 if limit is None else limit\n hist = history_manager.get_tail(n, raw=raw, output=get_output)\n else:\n if args.range: # Get history by ranges\n if args.pattern:\n range_pattern = \"*\" + \" \".join(args.pattern) + \"*\"\n print_nums = True\n hist = history_manager.get_range_by_str(\" \".join(args.range),\n raw, get_output)\n else: # Just get history for the current session\n hist = history_manager.get_range(raw=raw, output=get_output)\n\n # We could be displaying the entire history, so let's not try to pull\n # it into a list in memory. Anything that needs more space will just\n # misalign.\n width = 4\n\n for session, lineno, inline in hist:\n # Print user history with tabs expanded to 4 spaces. The GUI\n # clients use hard tabs for easier usability in auto-indented code,\n # but we want to produce PEP-8 compliant history for safe pasting\n # into an editor.\n if get_output:\n inline, output = inline\n if range_pattern:\n if not fnmatch.fnmatch(inline, range_pattern):\n continue\n inline = inline.expandtabs(4).rstrip()\n\n multiline = \"\\n\" in inline\n line_sep = '\\n' if multiline else ' '\n if print_nums:\n print(u'%s:%s' % (_format_lineno(session, lineno).rjust(width),\n line_sep), file=outfile, end=u'')\n if pyprompts:\n print(u\">>> \", end=u\"\", file=outfile)\n if multiline:\n inline = \"\\n... 
\".join(inline.splitlines()) + \"\\n...\"\n print(inline, file=outfile)\n if get_output and output:\n print(output, file=outfile)\n\n if close_at_end:\n outfile.close()\n\n @line_magic\n def recall(self, arg):\n r\"\"\"Repeat a command, or get command to input line for editing.\n\n %recall and %rep are equivalent.\n\n - %recall (no arguments):\n\n Place a string version of last computation result (stored in the\n special '_' variable) to the next input prompt. Allows you to create\n elaborate command lines without using copy-paste::\n\n In[1]: l = [\"hei\", \"vaan\"]\n In[2]: \"\".join(l)\n Out[2]: heivaan\n In[3]: %recall\n In[4]: heivaan_ <== cursor blinking\n\n %recall 45\n\n Place history line 45 on the next input prompt. Use %hist to find\n out the number.\n\n %recall 1-4\n\n Combine the specified lines into one cell, and place it on the next\n input prompt. See %history for the slice syntax.\n\n %recall foo+bar\n\n If foo+bar can be evaluated in the user namespace, the result is\n placed at the next input prompt. Otherwise, the history is searched\n for lines which contain that substring, and the most recent one is\n placed at the next input prompt.\n \"\"\"\n if not arg: # Last output\n self.shell.set_next_input(str(self.shell.user_ns[\"_\"]))\n return\n # Get history range\n histlines = self.shell.history_manager.get_range_by_str(arg)\n cmd = \"\\n\".join(x[2] for x in histlines)\n if cmd:\n self.shell.set_next_input(cmd.rstrip())\n return\n\n try: # Variable in user namespace\n cmd = str(eval(arg, self.shell.user_ns))\n except Exception: # Search for term in history\n histlines = self.shell.history_manager.search(\"*\"+arg+\"*\")\n for h in reversed([x[2] for x in histlines]):\n if 'recall' in h or 'rep' in h:\n continue\n self.shell.set_next_input(h.rstrip())\n return\n else:\n self.shell.set_next_input(cmd.rstrip())\n print(\"Couldn't evaluate or find in history:\", arg)\n\n @line_magic\n def rerun(self, parameter_s=''):\n \"\"\"Re-run previous input\n\n By default, you can specify ranges of input history to be repeated\n (as with %history). With no arguments, it will repeat the last line.\n\n Options:\n\n -l <n> : Repeat the last n lines of input, not including the\n current command.\n\n -g foo : Repeat the most recent line which contains foo\n \"\"\"\n opts, args = self.parse_options(parameter_s, 'l:g:', mode='string')\n if \"l\" in opts: # Last n lines\n n = int(opts['l'])\n\n if n == 0:\n print(\"Requested 0 last lines - nothing to run\")\n return\n elif n < 0:\n print(\"Number of lines to rerun cannot be negative\")\n return\n\n hist = self.shell.history_manager.get_tail(n)\n elif \"g\" in opts: # Search\n p = \"*\"+opts['g']+\"*\"\n hist = list(self.shell.history_manager.search(p))\n for l in reversed(hist):\n if \"rerun\" not in l[2]:\n hist = [l] # The last match which isn't a %rerun\n break\n else:\n hist = [] # No matches except %rerun\n elif args: # Specify history ranges\n hist = self.shell.history_manager.get_range_by_str(args)\n else: # Last line\n hist = self.shell.history_manager.get_tail(1)\n hist = [x[2] for x in hist]\n if not hist:\n print(\"No lines in history match specification\")\n return\n histlines = \"\\n\".join(hist)\n print(\"=== Executing: ===\")\n print(histlines)\n print(\"=== Output: ===\")\n self.shell.run_cell(\"\\n\".join(hist), store_history=False)\n", "path": "IPython/core/magics/history.py"}]}
| 3,998 | 189 |
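The IPython `%rerun` record above adds early-return guards to the `-l` branch so that asking for zero or a negative number of lines reports the real problem instead of the generic "No lines in history match specification". A minimal standalone sketch of that guard pattern (plain Python; `history` is a hypothetical list standing in for IPython's history manager, which is not reproduced here):

```python
# Guard pattern from the %rerun fix: bail out before querying history
# when the requested line count cannot match anything.
def last_lines(history, n):
    if n == 0:
        print("Requested 0 last lines - nothing to run")
        return []
    elif n < 0:
        print("Number of lines to rerun cannot be negative")
        return []
    return history[-n:]

print(last_lines(["a = 1", "print(a)"], 0))  # prints the 0-lines message, then []
print(last_lines(["a = 1", "print(a)"], 1))  # ['print(a)']
```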
gh_patches_debug_8450 | rasdani/github-patches | git_diff | easybuilders__easybuild-easyblocks-1796 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SuiteSparse easyblock fails when using Python 3
When running `eb` on top of Python 3.6.8:
```
$ eb SuiteSparse-5.4.0-foss-2019a-METIS-5.1.0.eb
...
== configuring...
ERROR: Traceback (most recent call last):
File "/home/kehoste/easybuild-framework/easybuild/main.py", line 112, in build_and_install_software
(ec_res['success'], app_log, err) = build_and_install_one(ec, init_env)
File "/home/kehoste/easybuild-framework/easybuild/framework/easyblock.py", line 3046, in build_and_install_one
result = app.run_all_steps(run_test_cases=run_test_cases)
File "/home/kehoste/easybuild-framework/easybuild/framework/easyblock.py", line 2954, in run_all_steps
self.run_step(step_name, step_methods)
File "/home/kehoste/easybuild-framework/easybuild/framework/easyblock.py", line 2824, in run_step
step_method(self)()
File "/home/kehoste/easybuild-easyblocks/easybuild/easyblocks/s/suitesparse.py", line 113, in configure_step
for (var, val) in cfgvars.items():
RuntimeError: dictionary changed size during iteration
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `easybuild/easyblocks/s/suitesparse.py`
Content:
```
1 ##
2 # Copyright 2009-2019 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 EasyBuild support for SuiteSparse, implemented as an easyblock
27
28 @author: Stijn De Weirdt (Ghent University)
29 @author: Dries Verdegem (Ghent University)
30 @author: Kenneth Hoste (Ghent University)
31 @author: Pieter De Baets (Ghent University)
32 @author: Jens Timmerman (Ghent University)
33 """
34 import fileinput
35 import re
36 import os
37 import shutil
38 import sys
39 import stat
40 from distutils.version import LooseVersion
41
42 from easybuild.easyblocks.generic.configuremake import ConfigureMake
43 from easybuild.tools.build_log import EasyBuildError
44 from easybuild.tools.filetools import mkdir, write_file, adjust_permissions
45 from easybuild.tools.modules import get_software_root
46 from easybuild.tools.modules import get_software_libdir
47 from easybuild.tools.systemtools import get_shared_lib_ext
48
49
50 class EB_SuiteSparse(ConfigureMake):
51 """Support for building SuiteSparse."""
52
53 def __init__(self, *args, **kwargs):
54 """Custom constructor for SuiteSparse easyblock, initialize custom class parameters."""
55 super(EB_SuiteSparse, self).__init__(*args, **kwargs)
56 self.config_name = 'UNKNOWN'
57
58 def configure_step(self):
59 """Configure build by patching UFconfig.mk or SuiteSparse_config.mk."""
60
61 if LooseVersion(self.version) < LooseVersion('4.0'):
62 self.config_name = 'UFconfig'
63 else:
64 self.config_name = 'SuiteSparse_config'
65
66 cfgvars = {
67 'CC': os.getenv('MPICC'),
68 'CFLAGS': os.getenv('CFLAGS'),
69 'CXX': os.getenv('MPICXX'),
70 'F77': os.getenv('MPIF77'),
71 'F77FLAGS': os.getenv('F77FLAGS'),
72 }
73
74 # avoid that (system) Intel compilers are always considered
75 self.cfg.update('buildopts', 'AUTOCC=no')
76
77 # Set BLAS and LAPACK libraries as specified in SuiteSparse README.txt
78 self.cfg.update('buildopts', 'BLAS="%s"' % os.getenv('LIBBLAS_MT'))
79 self.cfg.update('buildopts', 'LAPACK="%s"' % os.getenv('LIBLAPACK_MT'))
80
81 # Get METIS or ParMETIS settings
82 metis = get_software_root('METIS')
83 parmetis = get_software_root('ParMETIS')
84 if parmetis:
85 metis_path = parmetis
86 metis_include = os.path.join(parmetis, 'include')
87 metis_libs = os.path.join(parmetis, get_software_libdir('ParMETIS'), 'libmetis.a')
88
89 elif metis:
90 metis_path = metis
91 metis_include = os.path.join(metis, 'include')
92 metis_libs = os.path.join(metis, get_software_libdir('METIS'), 'libmetis.a')
93
94 else:
95 raise EasyBuildError("Neither METIS or ParMETIS module loaded.")
96
97 if LooseVersion(self.version) >= LooseVersion('4.5.1'):
98 cfgvars.update({
99 'MY_METIS_LIB': metis_libs,
100 'MY_METIS_INC': metis_include,
101 })
102 else:
103 cfgvars.update({
104 'METIS_PATH': metis_path,
105 'METIS': metis_libs,
106 })
107
108 # patch file
109 fp = os.path.join(self.cfg['start_dir'], self.config_name, '%s.mk' % self.config_name)
110
111 try:
112 for line in fileinput.input(fp, inplace=1, backup='.orig'):
113 for (var, val) in cfgvars.items():
114 orig_line = line
115 # for variables in cfgvars, substiture lines assignment
116 # in the file, whatever they are, by assignments to the
117 # values in cfgvars
118 line = re.sub(r"^\s*(%s\s*=\s*).*\n$" % var,
119 r"\1 %s # patched by EasyBuild\n" % val,
120 line)
121 if line != orig_line:
122 cfgvars.pop(var)
123 sys.stdout.write(line)
124 except IOError as err:
125 raise EasyBuildError("Failed to patch %s in: %s", fp, err)
126
127 # add remaining entries at the end
128 if cfgvars:
129 cfgtxt = '# lines below added automatically by EasyBuild\n'
130 cfgtxt += '\n'.join(["%s = %s" % (var, val) for (var, val) in cfgvars.items()])
131 write_file(fp, cfgtxt, append=True)
132
133 def install_step(self):
134 """Install by copying the contents of the builddir to the installdir (preserving permissions)"""
135 for x in os.listdir(self.cfg['start_dir']):
136 src = os.path.join(self.cfg['start_dir'], x)
137 dst = os.path.join(self.installdir, x)
138 try:
139 if os.path.isdir(src):
140 shutil.copytree(src, dst)
141 # symlink
142 # - dst/Lib to dst/lib
143 # - dst/Include to dst/include
144 for c in ['Lib', 'Include']:
145 nsrc = os.path.join(dst, c)
146 ndst = os.path.join(dst, c.lower())
147 if os.path.exists(nsrc):
148 os.symlink(nsrc, ndst)
149 # enable r-x permissions for group/others
150 perms = stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH
151 adjust_permissions(dst, perms, add=True, recursive=True, onlydirs=True)
152 else:
153 shutil.copy2(src, dst)
154 except OSError as err:
155 raise EasyBuildError("Copying src %s to dst %s failed: %s", src, dst, err)
156
157 # some extra symlinks are necessary for UMFPACK to work.
158 paths = [
159 os.path.join('AMD', 'include', 'amd.h'),
160 os.path.join('AMD', 'include', 'amd_internal.h'),
161 os.path.join(self.config_name, '%s.h' % self.config_name),
162 os.path.join('AMD', 'lib', 'libamd.a')
163 ]
164 for path in paths:
165 src = os.path.join(self.installdir, path)
166 dn = path.split(os.path.sep)[-2]
167 fn = path.split(os.path.sep)[-1]
168 dstdir = os.path.join(self.installdir, 'UMFPACK', dn)
169 mkdir(dstdir)
170 if os.path.exists(src):
171 try:
172 os.symlink(src, os.path.join(dstdir, fn))
173 except OSError as err:
174 raise EasyBuildError("Failed to make symbolic link from %s to %s: %s", src, dst, err)
175
176 def make_module_req_guess(self):
177 """
178 Extra path to consider for module file:
179 * add config dir and include to $CPATH so include files are found
180 * add UMFPACK and AMD library, and lib dirs to $LD_LIBRARY_PATH
181 """
182
183 guesses = super(EB_SuiteSparse, self).make_module_req_guess()
184
185 # Previous versions of SuiteSparse used specific directories for includes and libraries
186 if LooseVersion(self.version) < LooseVersion('4.5'):
187 include_dirs = [self.config_name]
188 ld_library_path = ['AMD/lib', 'BTF/lib', 'CAMD/lib', 'CCOLAMD/lib', 'CHOLAMD/lib', 'CHOLMOD/lib',
189 'COLAMD/lib/', 'CSparse/lib', 'CXSparse/lib', 'KLU/lib', 'LDL/lib', 'RBio/lib',
190 'UMFPACK/lib', self.config_name]
191
192 guesses['CPATH'].extend(include_dirs)
193 guesses['LD_LIBRARY_PATH'].extend(ld_library_path)
194 guesses['LIBRARY_PATH'].extend(ld_library_path)
195
196 return guesses
197
198 def sanity_check_step(self):
199 """Custom sanity check for SuiteSparse."""
200
201 # Make sure that SuiteSparse did NOT compile its own Metis
202 if os.path.exists(os.path.join(self.installdir, 'lib', 'libmetis.%s' % get_shared_lib_ext())):
203 raise EasyBuildError("SuiteSparse has compiled its own Metis. This will conflict with the Metis build."
204 " The SuiteSparse EasyBlock need to be updated!")
205
206 libnames = ['AMD', 'BTF', 'CAMD', 'CCOLAMD', 'CHOLMOD', 'COLAMD', 'CXSparse', 'KLU',
207 'LDL', 'RBio', 'SPQR', 'UMFPACK']
208 libs = [os.path.join(x, 'lib', 'lib%s.a' % x.lower()) for x in libnames]
209
210 if LooseVersion(self.version) < LooseVersion('4.0'):
211 csparse_dir = 'CSparse3'
212 else:
213 csparse_dir = 'CSparse'
214 libs.append(os.path.join(csparse_dir, 'lib', 'libcsparse.a'))
215
216 # Latest version of SuiteSparse also compiles shared library and put them in 'lib'
217 shlib_ext = get_shared_lib_ext()
218 if LooseVersion(self.version) >= LooseVersion('4.5.1'):
219 libs += [os.path.join('lib', 'lib%s.%s' % (l.lower(), shlib_ext)) for l in libnames]
220
221 custom_paths = {
222 'files': libs,
223 'dirs': ['MATLAB_Tools'],
224 }
225
226 super(EB_SuiteSparse, self).sanity_check_step(custom_paths=custom_paths)
227
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/easybuild/easyblocks/s/suitesparse.py b/easybuild/easyblocks/s/suitesparse.py
--- a/easybuild/easyblocks/s/suitesparse.py
+++ b/easybuild/easyblocks/s/suitesparse.py
@@ -110,7 +110,7 @@
try:
for line in fileinput.input(fp, inplace=1, backup='.orig'):
- for (var, val) in cfgvars.items():
+ for (var, val) in list(cfgvars.items()):
orig_line = line
# for variables in cfgvars, substiture lines assignment
# in the file, whatever they are, by assignments to the
|
{"golden_diff": "diff --git a/easybuild/easyblocks/s/suitesparse.py b/easybuild/easyblocks/s/suitesparse.py\n--- a/easybuild/easyblocks/s/suitesparse.py\n+++ b/easybuild/easyblocks/s/suitesparse.py\n@@ -110,7 +110,7 @@\n \n try:\n for line in fileinput.input(fp, inplace=1, backup='.orig'):\n- for (var, val) in cfgvars.items():\n+ for (var, val) in list(cfgvars.items()):\n orig_line = line\n # for variables in cfgvars, substiture lines assignment\n # in the file, whatever they are, by assignments to the\n", "issue": "SuiteSparse easyblock fails when using Python 3\nWhen running `eb` on top of Python 3.6.8:\r\n\r\n```\r\n$ eb SuiteSparse-5.4.0-foss-2019a-METIS-5.1.0.eb\r\n...\r\n== configuring...\r\nERROR: Traceback (most recent call last):\r\n File \"/home/kehoste/easybuild-framework/easybuild/main.py\", line 112, in build_and_install_software\r\n (ec_res['success'], app_log, err) = build_and_install_one(ec, init_env)\r\n File \"/home/kehoste/easybuild-framework/easybuild/framework/easyblock.py\", line 3046, in build_and_install_one\r\n result = app.run_all_steps(run_test_cases=run_test_cases)\r\n File \"/home/kehoste/easybuild-framework/easybuild/framework/easyblock.py\", line 2954, in run_all_steps\r\n self.run_step(step_name, step_methods)\r\n File \"/home/kehoste/easybuild-framework/easybuild/framework/easyblock.py\", line 2824, in run_step\r\n step_method(self)()\r\n File \"/home/kehoste/easybuild-easyblocks/easybuild/easyblocks/s/suitesparse.py\", line 113, in configure_step\r\n for (var, val) in cfgvars.items():\r\nRuntimeError: dictionary changed size during iteration\r\n```\n", "before_files": [{"content": "##\n# Copyright 2009-2019 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. 
If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for SuiteSparse, implemented as an easyblock\n\n@author: Stijn De Weirdt (Ghent University)\n@author: Dries Verdegem (Ghent University)\n@author: Kenneth Hoste (Ghent University)\n@author: Pieter De Baets (Ghent University)\n@author: Jens Timmerman (Ghent University)\n\"\"\"\nimport fileinput\nimport re\nimport os\nimport shutil\nimport sys\nimport stat\nfrom distutils.version import LooseVersion\n\nfrom easybuild.easyblocks.generic.configuremake import ConfigureMake\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.filetools import mkdir, write_file, adjust_permissions\nfrom easybuild.tools.modules import get_software_root\nfrom easybuild.tools.modules import get_software_libdir\nfrom easybuild.tools.systemtools import get_shared_lib_ext\n\n\nclass EB_SuiteSparse(ConfigureMake):\n \"\"\"Support for building SuiteSparse.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Custom constructor for SuiteSparse easyblock, initialize custom class parameters.\"\"\"\n super(EB_SuiteSparse, self).__init__(*args, **kwargs)\n self.config_name = 'UNKNOWN'\n\n def configure_step(self):\n \"\"\"Configure build by patching UFconfig.mk or SuiteSparse_config.mk.\"\"\"\n\n if LooseVersion(self.version) < LooseVersion('4.0'):\n self.config_name = 'UFconfig'\n else:\n self.config_name = 'SuiteSparse_config'\n\n cfgvars = {\n 'CC': os.getenv('MPICC'),\n 'CFLAGS': os.getenv('CFLAGS'),\n 'CXX': os.getenv('MPICXX'),\n 'F77': os.getenv('MPIF77'),\n 'F77FLAGS': os.getenv('F77FLAGS'),\n }\n\n # avoid that (system) Intel compilers are always considered\n self.cfg.update('buildopts', 'AUTOCC=no')\n\n # Set BLAS and LAPACK libraries as specified in SuiteSparse README.txt\n self.cfg.update('buildopts', 'BLAS=\"%s\"' % os.getenv('LIBBLAS_MT'))\n self.cfg.update('buildopts', 'LAPACK=\"%s\"' % os.getenv('LIBLAPACK_MT'))\n\n # Get METIS or ParMETIS settings\n metis = get_software_root('METIS')\n parmetis = get_software_root('ParMETIS')\n if parmetis:\n metis_path = parmetis\n metis_include = os.path.join(parmetis, 'include')\n metis_libs = os.path.join(parmetis, get_software_libdir('ParMETIS'), 'libmetis.a')\n\n elif metis:\n metis_path = metis\n metis_include = os.path.join(metis, 'include')\n metis_libs = os.path.join(metis, get_software_libdir('METIS'), 'libmetis.a')\n\n else:\n raise EasyBuildError(\"Neither METIS or ParMETIS module loaded.\")\n\n if LooseVersion(self.version) >= LooseVersion('4.5.1'):\n cfgvars.update({\n 'MY_METIS_LIB': metis_libs,\n 'MY_METIS_INC': metis_include,\n })\n else:\n cfgvars.update({\n 'METIS_PATH': metis_path,\n 'METIS': metis_libs,\n })\n\n # patch file\n fp = os.path.join(self.cfg['start_dir'], self.config_name, '%s.mk' % self.config_name)\n\n try:\n for line in fileinput.input(fp, inplace=1, backup='.orig'):\n for (var, val) in cfgvars.items():\n orig_line = line\n # for variables in cfgvars, substiture lines assignment\n # in the file, whatever they are, by assignments to the\n # values in cfgvars\n line = re.sub(r\"^\\s*(%s\\s*=\\s*).*\\n$\" % var,\n r\"\\1 %s # patched by EasyBuild\\n\" % val,\n line)\n if line != orig_line:\n cfgvars.pop(var)\n sys.stdout.write(line)\n except IOError as err:\n raise EasyBuildError(\"Failed to patch %s in: %s\", fp, err)\n\n # add remaining entries at the end\n if cfgvars:\n cfgtxt = '# lines below added automatically by EasyBuild\\n'\n cfgtxt += '\\n'.join([\"%s = %s\" % (var, val) for (var, val) in cfgvars.items()])\n write_file(fp, cfgtxt, 
append=True)\n\n def install_step(self):\n \"\"\"Install by copying the contents of the builddir to the installdir (preserving permissions)\"\"\"\n for x in os.listdir(self.cfg['start_dir']):\n src = os.path.join(self.cfg['start_dir'], x)\n dst = os.path.join(self.installdir, x)\n try:\n if os.path.isdir(src):\n shutil.copytree(src, dst)\n # symlink\n # - dst/Lib to dst/lib\n # - dst/Include to dst/include\n for c in ['Lib', 'Include']:\n nsrc = os.path.join(dst, c)\n ndst = os.path.join(dst, c.lower())\n if os.path.exists(nsrc):\n os.symlink(nsrc, ndst)\n # enable r-x permissions for group/others\n perms = stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH\n adjust_permissions(dst, perms, add=True, recursive=True, onlydirs=True)\n else:\n shutil.copy2(src, dst)\n except OSError as err:\n raise EasyBuildError(\"Copying src %s to dst %s failed: %s\", src, dst, err)\n\n # some extra symlinks are necessary for UMFPACK to work.\n paths = [\n os.path.join('AMD', 'include', 'amd.h'),\n os.path.join('AMD', 'include', 'amd_internal.h'),\n os.path.join(self.config_name, '%s.h' % self.config_name),\n os.path.join('AMD', 'lib', 'libamd.a')\n ]\n for path in paths:\n src = os.path.join(self.installdir, path)\n dn = path.split(os.path.sep)[-2]\n fn = path.split(os.path.sep)[-1]\n dstdir = os.path.join(self.installdir, 'UMFPACK', dn)\n mkdir(dstdir)\n if os.path.exists(src):\n try:\n os.symlink(src, os.path.join(dstdir, fn))\n except OSError as err:\n raise EasyBuildError(\"Failed to make symbolic link from %s to %s: %s\", src, dst, err)\n\n def make_module_req_guess(self):\n \"\"\"\n Extra path to consider for module file:\n * add config dir and include to $CPATH so include files are found\n * add UMFPACK and AMD library, and lib dirs to $LD_LIBRARY_PATH\n \"\"\"\n\n guesses = super(EB_SuiteSparse, self).make_module_req_guess()\n\n # Previous versions of SuiteSparse used specific directories for includes and libraries\n if LooseVersion(self.version) < LooseVersion('4.5'):\n include_dirs = [self.config_name]\n ld_library_path = ['AMD/lib', 'BTF/lib', 'CAMD/lib', 'CCOLAMD/lib', 'CHOLAMD/lib', 'CHOLMOD/lib',\n 'COLAMD/lib/', 'CSparse/lib', 'CXSparse/lib', 'KLU/lib', 'LDL/lib', 'RBio/lib',\n 'UMFPACK/lib', self.config_name]\n\n guesses['CPATH'].extend(include_dirs)\n guesses['LD_LIBRARY_PATH'].extend(ld_library_path)\n guesses['LIBRARY_PATH'].extend(ld_library_path)\n\n return guesses\n\n def sanity_check_step(self):\n \"\"\"Custom sanity check for SuiteSparse.\"\"\"\n\n # Make sure that SuiteSparse did NOT compile its own Metis\n if os.path.exists(os.path.join(self.installdir, 'lib', 'libmetis.%s' % get_shared_lib_ext())):\n raise EasyBuildError(\"SuiteSparse has compiled its own Metis. 
This will conflict with the Metis build.\"\n \" The SuiteSparse EasyBlock need to be updated!\")\n\n libnames = ['AMD', 'BTF', 'CAMD', 'CCOLAMD', 'CHOLMOD', 'COLAMD', 'CXSparse', 'KLU',\n 'LDL', 'RBio', 'SPQR', 'UMFPACK']\n libs = [os.path.join(x, 'lib', 'lib%s.a' % x.lower()) for x in libnames]\n\n if LooseVersion(self.version) < LooseVersion('4.0'):\n csparse_dir = 'CSparse3'\n else:\n csparse_dir = 'CSparse'\n libs.append(os.path.join(csparse_dir, 'lib', 'libcsparse.a'))\n\n # Latest version of SuiteSparse also compiles shared library and put them in 'lib'\n shlib_ext = get_shared_lib_ext()\n if LooseVersion(self.version) >= LooseVersion('4.5.1'):\n libs += [os.path.join('lib', 'lib%s.%s' % (l.lower(), shlib_ext)) for l in libnames]\n\n custom_paths = {\n 'files': libs,\n 'dirs': ['MATLAB_Tools'],\n }\n\n super(EB_SuiteSparse, self).sanity_check_step(custom_paths=custom_paths)\n", "path": "easybuild/easyblocks/s/suitesparse.py"}], "after_files": [{"content": "##\n# Copyright 2009-2019 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. 
If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for SuiteSparse, implemented as an easyblock\n\n@author: Stijn De Weirdt (Ghent University)\n@author: Dries Verdegem (Ghent University)\n@author: Kenneth Hoste (Ghent University)\n@author: Pieter De Baets (Ghent University)\n@author: Jens Timmerman (Ghent University)\n\"\"\"\nimport fileinput\nimport re\nimport os\nimport shutil\nimport sys\nimport stat\nfrom distutils.version import LooseVersion\n\nfrom easybuild.easyblocks.generic.configuremake import ConfigureMake\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.filetools import mkdir, write_file, adjust_permissions\nfrom easybuild.tools.modules import get_software_root\nfrom easybuild.tools.modules import get_software_libdir\nfrom easybuild.tools.systemtools import get_shared_lib_ext\n\n\nclass EB_SuiteSparse(ConfigureMake):\n \"\"\"Support for building SuiteSparse.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Custom constructor for SuiteSparse easyblock, initialize custom class parameters.\"\"\"\n super(EB_SuiteSparse, self).__init__(*args, **kwargs)\n self.config_name = 'UNKNOWN'\n\n def configure_step(self):\n \"\"\"Configure build by patching UFconfig.mk or SuiteSparse_config.mk.\"\"\"\n\n if LooseVersion(self.version) < LooseVersion('4.0'):\n self.config_name = 'UFconfig'\n else:\n self.config_name = 'SuiteSparse_config'\n\n cfgvars = {\n 'CC': os.getenv('MPICC'),\n 'CFLAGS': os.getenv('CFLAGS'),\n 'CXX': os.getenv('MPICXX'),\n 'F77': os.getenv('MPIF77'),\n 'F77FLAGS': os.getenv('F77FLAGS'),\n }\n\n # avoid that (system) Intel compilers are always considered\n self.cfg.update('buildopts', 'AUTOCC=no')\n\n # Set BLAS and LAPACK libraries as specified in SuiteSparse README.txt\n self.cfg.update('buildopts', 'BLAS=\"%s\"' % os.getenv('LIBBLAS_MT'))\n self.cfg.update('buildopts', 'LAPACK=\"%s\"' % os.getenv('LIBLAPACK_MT'))\n\n # Get METIS or ParMETIS settings\n metis = get_software_root('METIS')\n parmetis = get_software_root('ParMETIS')\n if parmetis:\n metis_path = parmetis\n metis_include = os.path.join(parmetis, 'include')\n metis_libs = os.path.join(parmetis, get_software_libdir('ParMETIS'), 'libmetis.a')\n\n elif metis:\n metis_path = metis\n metis_include = os.path.join(metis, 'include')\n metis_libs = os.path.join(metis, get_software_libdir('METIS'), 'libmetis.a')\n\n else:\n raise EasyBuildError(\"Neither METIS or ParMETIS module loaded.\")\n\n if LooseVersion(self.version) >= LooseVersion('4.5.1'):\n cfgvars.update({\n 'MY_METIS_LIB': metis_libs,\n 'MY_METIS_INC': metis_include,\n })\n else:\n cfgvars.update({\n 'METIS_PATH': metis_path,\n 'METIS': metis_libs,\n })\n\n # patch file\n fp = os.path.join(self.cfg['start_dir'], self.config_name, '%s.mk' % self.config_name)\n\n try:\n for line in fileinput.input(fp, inplace=1, backup='.orig'):\n for (var, val) in list(cfgvars.items()):\n orig_line = line\n # for variables in cfgvars, substiture lines assignment\n # in the file, whatever they are, by assignments to the\n # values in cfgvars\n line = re.sub(r\"^\\s*(%s\\s*=\\s*).*\\n$\" % var,\n r\"\\1 %s # patched by EasyBuild\\n\" % val,\n line)\n if line != orig_line:\n cfgvars.pop(var)\n sys.stdout.write(line)\n except IOError as err:\n raise EasyBuildError(\"Failed to patch %s in: %s\", fp, err)\n\n # add remaining entries at the end\n if cfgvars:\n cfgtxt = '# lines below added automatically by EasyBuild\\n'\n cfgtxt += '\\n'.join([\"%s = %s\" % (var, val) for (var, val) in cfgvars.items()])\n write_file(fp, 
cfgtxt, append=True)\n\n def install_step(self):\n \"\"\"Install by copying the contents of the builddir to the installdir (preserving permissions)\"\"\"\n for x in os.listdir(self.cfg['start_dir']):\n src = os.path.join(self.cfg['start_dir'], x)\n dst = os.path.join(self.installdir, x)\n try:\n if os.path.isdir(src):\n shutil.copytree(src, dst)\n # symlink\n # - dst/Lib to dst/lib\n # - dst/Include to dst/include\n for c in ['Lib', 'Include']:\n nsrc = os.path.join(dst, c)\n ndst = os.path.join(dst, c.lower())\n if os.path.exists(nsrc):\n os.symlink(nsrc, ndst)\n # enable r-x permissions for group/others\n perms = stat.S_IRGRP | stat.S_IXGRP | stat.S_IROTH | stat.S_IXOTH\n adjust_permissions(dst, perms, add=True, recursive=True, onlydirs=True)\n else:\n shutil.copy2(src, dst)\n except OSError as err:\n raise EasyBuildError(\"Copying src %s to dst %s failed: %s\", src, dst, err)\n\n # some extra symlinks are necessary for UMFPACK to work.\n paths = [\n os.path.join('AMD', 'include', 'amd.h'),\n os.path.join('AMD', 'include', 'amd_internal.h'),\n os.path.join(self.config_name, '%s.h' % self.config_name),\n os.path.join('AMD', 'lib', 'libamd.a')\n ]\n for path in paths:\n src = os.path.join(self.installdir, path)\n dn = path.split(os.path.sep)[-2]\n fn = path.split(os.path.sep)[-1]\n dstdir = os.path.join(self.installdir, 'UMFPACK', dn)\n mkdir(dstdir)\n if os.path.exists(src):\n try:\n os.symlink(src, os.path.join(dstdir, fn))\n except OSError as err:\n raise EasyBuildError(\"Failed to make symbolic link from %s to %s: %s\", src, dst, err)\n\n def make_module_req_guess(self):\n \"\"\"\n Extra path to consider for module file:\n * add config dir and include to $CPATH so include files are found\n * add UMFPACK and AMD library, and lib dirs to $LD_LIBRARY_PATH\n \"\"\"\n\n guesses = super(EB_SuiteSparse, self).make_module_req_guess()\n\n # Previous versions of SuiteSparse used specific directories for includes and libraries\n if LooseVersion(self.version) < LooseVersion('4.5'):\n include_dirs = [self.config_name]\n ld_library_path = ['AMD/lib', 'BTF/lib', 'CAMD/lib', 'CCOLAMD/lib', 'CHOLAMD/lib', 'CHOLMOD/lib',\n 'COLAMD/lib/', 'CSparse/lib', 'CXSparse/lib', 'KLU/lib', 'LDL/lib', 'RBio/lib',\n 'UMFPACK/lib', self.config_name]\n\n guesses['CPATH'].extend(include_dirs)\n guesses['LD_LIBRARY_PATH'].extend(ld_library_path)\n guesses['LIBRARY_PATH'].extend(ld_library_path)\n\n return guesses\n\n def sanity_check_step(self):\n \"\"\"Custom sanity check for SuiteSparse.\"\"\"\n\n # Make sure that SuiteSparse did NOT compile its own Metis\n if os.path.exists(os.path.join(self.installdir, 'lib', 'libmetis.%s' % get_shared_lib_ext())):\n raise EasyBuildError(\"SuiteSparse has compiled its own Metis. 
This will conflict with the Metis build.\"\n \" The SuiteSparse EasyBlock need to be updated!\")\n\n libnames = ['AMD', 'BTF', 'CAMD', 'CCOLAMD', 'CHOLMOD', 'COLAMD', 'CXSparse', 'KLU',\n 'LDL', 'RBio', 'SPQR', 'UMFPACK']\n libs = [os.path.join(x, 'lib', 'lib%s.a' % x.lower()) for x in libnames]\n\n if LooseVersion(self.version) < LooseVersion('4.0'):\n csparse_dir = 'CSparse3'\n else:\n csparse_dir = 'CSparse'\n libs.append(os.path.join(csparse_dir, 'lib', 'libcsparse.a'))\n\n # Latest version of SuiteSparse also compiles shared library and put them in 'lib'\n shlib_ext = get_shared_lib_ext()\n if LooseVersion(self.version) >= LooseVersion('4.5.1'):\n libs += [os.path.join('lib', 'lib%s.%s' % (l.lower(), shlib_ext)) for l in libnames]\n\n custom_paths = {\n 'files': libs,\n 'dirs': ['MATLAB_Tools'],\n }\n\n super(EB_SuiteSparse, self).sanity_check_step(custom_paths=custom_paths)\n", "path": "easybuild/easyblocks/s/suitesparse.py"}]}
| 3,487 | 156 |
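The SuiteSparse record above is a textbook Python 2 to 3 migration failure: `dict.items()` became a live view, so popping keys while looping over it raises the `RuntimeError: dictionary changed size during iteration` shown in the traceback, and the one-line fix is to iterate over a `list(...)` snapshot. A self-contained reproduction (the key/value strings are illustrative, not taken from the easyblock):

```python
cfgvars = {"CC": "gcc", "CFLAGS": "-O2", "F77": "gfortran"}

# Fails on Python 3: the items() view detects the size change mid-loop.
try:
    for var, val in cfgvars.items():
        cfgvars.pop(var)
except RuntimeError as err:
    print(err)  # dictionary changed size during iteration

# Works: list() snapshots the items before the loop, as the patch does.
cfgvars = {"CC": "gcc", "CFLAGS": "-O2", "F77": "gfortran"}
for var, val in list(cfgvars.items()):
    cfgvars.pop(var)
print(cfgvars)  # {}
```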
gh_patches_debug_29546 | rasdani/github-patches | git_diff | Mailu__Mailu-1014 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Should `antispam` be hardcoded?
I'm in a middle of setting up my own mailu instances, however I'm using namespaced container names (i.e. `mailu-admin`, `mailu-front` etc.). Everything works pretty well so far, however I have some problems with imap and antispam. After some investigation I noticed, that files `core/dovecot/conf/bin/ham` and `core/dovecot/conf/bin/spam` have antispam host hardcoded instead of using `HOST_ANTISPAM`.
I'm happy to provide PR, but firstly want to be sure - is it in fact oversight, or is this by design?
Should `antispam` be hardcoded?
I'm in a middle of setting up my own mailu instances, however I'm using namespaced container names (i.e. `mailu-admin`, `mailu-front` etc.). Everything works pretty well so far, however I have some problems with imap and antispam. After some investigation I noticed, that files `core/dovecot/conf/bin/ham` and `core/dovecot/conf/bin/spam` have antispam host hardcoded instead of using `HOST_ANTISPAM`.
I'm happy to provide PR, but firstly want to be sure - is it in fact oversight, or is this by design?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/dovecot/start.py`
Content:
```
1 #!/usr/bin/python3
2
3 import os
4 import glob
5 import multiprocessing
6 import logging as log
7 import sys
8
9 from podop import run_server
10 from socrate import system, conf
11
12 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "WARNING"))
13
14 def start_podop():
15 os.setuid(8)
16 url = "http://" + os.environ["ADMIN_ADDRESS"] + "/internal/dovecot/§"
17 run_server(0, "dovecot", "/tmp/podop.socket", [
18 ("quota", "url", url ),
19 ("auth", "url", url),
20 ("sieve", "url", url),
21 ])
22
23 # Actual startup script
24 os.environ["FRONT_ADDRESS"] = system.resolve_address(os.environ.get("HOST_FRONT", "front"))
25 os.environ["REDIS_ADDRESS"] = system.resolve_address(os.environ.get("HOST_REDIS", "redis"))
26 os.environ["ADMIN_ADDRESS"] = system.resolve_address(os.environ.get("HOST_ADMIN", "admin"))
27 if os.environ["WEBMAIL"] != "none":
28 os.environ["WEBMAIL_ADDRESS"] = system.resolve_address(os.environ.get("HOST_WEBMAIL", "webmail"))
29
30 for dovecot_file in glob.glob("/conf/*.conf"):
31 conf.jinja(dovecot_file, os.environ, os.path.join("/etc/dovecot", os.path.basename(dovecot_file)))
32
33 # Run Podop, then postfix
34 multiprocessing.Process(target=start_podop).start()
35 os.system("chown mail:mail /mail")
36 os.system("chown -R mail:mail /var/lib/dovecot /conf")
37 os.execv("/usr/sbin/dovecot", ["dovecot", "-c", "/etc/dovecot/dovecot.conf", "-F"])
38
```
Path: `services/rspamd/start.py`
Content:
```
1 #!/usr/bin/python3
2
3 import os
4 import glob
5 import logging as log
6 import sys
7 from socrate import system, conf
8
9 log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "WARNING"))
10
11 # Actual startup script
12 os.environ["FRONT_ADDRESS"] = system.resolve_address(os.environ.get("HOST_FRONT", "front"))
13
14 if "HOST_REDIS" not in os.environ:
15 os.environ["REDIS_ADDRESS"] = system.resolve_address(os.environ.get("HOST_REDIS", "redis"))
16
17
18 for rspamd_file in glob.glob("/conf/*"):
19 conf.jinja(rspamd_file, os.environ, os.path.join("/etc/rspamd/local.d", os.path.basename(rspamd_file)))
20
21 # Run rspamd
22 os.execv("/usr/sbin/rspamd", ["rspamd", "-i", "-f"])
23
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/dovecot/start.py b/core/dovecot/start.py
--- a/core/dovecot/start.py
+++ b/core/dovecot/start.py
@@ -24,6 +24,7 @@
os.environ["FRONT_ADDRESS"] = system.resolve_address(os.environ.get("HOST_FRONT", "front"))
os.environ["REDIS_ADDRESS"] = system.resolve_address(os.environ.get("HOST_REDIS", "redis"))
os.environ["ADMIN_ADDRESS"] = system.resolve_address(os.environ.get("HOST_ADMIN", "admin"))
+os.environ["ANTISPAM_ADDRESS"] = system.resolve_address(os.environ.get("HOST_ANTISPAM", "antispam:11334"))
if os.environ["WEBMAIL"] != "none":
os.environ["WEBMAIL_ADDRESS"] = system.resolve_address(os.environ.get("HOST_WEBMAIL", "webmail"))
diff --git a/services/rspamd/start.py b/services/rspamd/start.py
--- a/services/rspamd/start.py
+++ b/services/rspamd/start.py
@@ -9,11 +9,14 @@
log.basicConfig(stream=sys.stderr, level=os.environ.get("LOG_LEVEL", "WARNING"))
# Actual startup script
+
os.environ["FRONT_ADDRESS"] = system.resolve_address(os.environ.get("HOST_FRONT", "front"))
if "HOST_REDIS" not in os.environ:
os.environ["REDIS_ADDRESS"] = system.resolve_address(os.environ.get("HOST_REDIS", "redis"))
+if os.environ.get("ANTIVIRUS") == 'clamav':
+ os.environ["ANTIVIRUS_ADDRESS"] = system.resolve_address(os.environ.get("HOST_ANTIVIRUS", "antivirus:3310"))
for rspamd_file in glob.glob("/conf/*"):
conf.jinja(rspamd_file, os.environ, os.path.join("/etc/rspamd/local.d", os.path.basename(rspamd_file)))
|
{"golden_diff": "diff --git a/core/dovecot/start.py b/core/dovecot/start.py\n--- a/core/dovecot/start.py\n+++ b/core/dovecot/start.py\n@@ -24,6 +24,7 @@\n os.environ[\"FRONT_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_FRONT\", \"front\"))\n os.environ[\"REDIS_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_REDIS\", \"redis\"))\n os.environ[\"ADMIN_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_ADMIN\", \"admin\"))\n+os.environ[\"ANTISPAM_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_ANTISPAM\", \"antispam:11334\"))\n if os.environ[\"WEBMAIL\"] != \"none\":\n os.environ[\"WEBMAIL_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_WEBMAIL\", \"webmail\"))\n \ndiff --git a/services/rspamd/start.py b/services/rspamd/start.py\n--- a/services/rspamd/start.py\n+++ b/services/rspamd/start.py\n@@ -9,11 +9,14 @@\n log.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n \n # Actual startup script\n+\n os.environ[\"FRONT_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_FRONT\", \"front\"))\n \n if \"HOST_REDIS\" not in os.environ:\n os.environ[\"REDIS_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_REDIS\", \"redis\"))\n \n+if os.environ.get(\"ANTIVIRUS\") == 'clamav':\n+ os.environ[\"ANTIVIRUS_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_ANTIVIRUS\", \"antivirus:3310\"))\n \n for rspamd_file in glob.glob(\"/conf/*\"):\n conf.jinja(rspamd_file, os.environ, os.path.join(\"/etc/rspamd/local.d\", os.path.basename(rspamd_file)))\n", "issue": "Should `antispam` be hardcoded?\nI'm in a middle of setting up my own mailu instances, however I'm using namespaced container names (i.e. `mailu-admin`, `mailu-front` etc.). Everything works pretty well so far, however I have some problems with imap and antispam. After some investigation I noticed, that files `core/dovecot/conf/bin/ham` and `core/dovecot/conf/bin/spam` have antispam host hardcoded instead of using `HOST_ANTISPAM`. \r\n\r\nI'm happy to provide PR, but firstly want to be sure - is it in fact oversight, or is this by design?\nShould `antispam` be hardcoded?\nI'm in a middle of setting up my own mailu instances, however I'm using namespaced container names (i.e. `mailu-admin`, `mailu-front` etc.). Everything works pretty well so far, however I have some problems with imap and antispam. After some investigation I noticed, that files `core/dovecot/conf/bin/ham` and `core/dovecot/conf/bin/spam` have antispam host hardcoded instead of using `HOST_ANTISPAM`. 
\r\n\r\nI'm happy to provide PR, but firstly want to be sure - is it in fact oversight, or is this by design?\n", "before_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport multiprocessing\nimport logging as log\nimport sys\n\nfrom podop import run_server\nfrom socrate import system, conf\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop():\n os.setuid(8)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/dovecot/\u00a7\"\n run_server(0, \"dovecot\", \"/tmp/podop.socket\", [\n\t\t(\"quota\", \"url\", url ),\n\t\t(\"auth\", \"url\", url),\n\t\t(\"sieve\", \"url\", url),\n ])\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_FRONT\", \"front\"))\nos.environ[\"REDIS_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_REDIS\", \"redis\"))\nos.environ[\"ADMIN_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_ADMIN\", \"admin\"))\nif os.environ[\"WEBMAIL\"] != \"none\":\n os.environ[\"WEBMAIL_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_WEBMAIL\", \"webmail\"))\n\nfor dovecot_file in glob.glob(\"/conf/*.conf\"):\n conf.jinja(dovecot_file, os.environ, os.path.join(\"/etc/dovecot\", os.path.basename(dovecot_file)))\n\n# Run Podop, then postfix\nmultiprocessing.Process(target=start_podop).start()\nos.system(\"chown mail:mail /mail\")\nos.system(\"chown -R mail:mail /var/lib/dovecot /conf\")\nos.execv(\"/usr/sbin/dovecot\", [\"dovecot\", \"-c\", \"/etc/dovecot/dovecot.conf\", \"-F\"])\n", "path": "core/dovecot/start.py"}, {"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport logging as log\nimport sys\nfrom socrate import system, conf\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_FRONT\", \"front\"))\n\nif \"HOST_REDIS\" not in os.environ:\n os.environ[\"REDIS_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_REDIS\", \"redis\"))\n\n\nfor rspamd_file in glob.glob(\"/conf/*\"):\n conf.jinja(rspamd_file, os.environ, os.path.join(\"/etc/rspamd/local.d\", os.path.basename(rspamd_file)))\n\n# Run rspamd\nos.execv(\"/usr/sbin/rspamd\", [\"rspamd\", \"-i\", \"-f\"])\n", "path": "services/rspamd/start.py"}], "after_files": [{"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport multiprocessing\nimport logging as log\nimport sys\n\nfrom podop import run_server\nfrom socrate import system, conf\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\ndef start_podop():\n os.setuid(8)\n url = \"http://\" + os.environ[\"ADMIN_ADDRESS\"] + \"/internal/dovecot/\u00a7\"\n run_server(0, \"dovecot\", \"/tmp/podop.socket\", [\n\t\t(\"quota\", \"url\", url ),\n\t\t(\"auth\", \"url\", url),\n\t\t(\"sieve\", \"url\", url),\n ])\n\n# Actual startup script\nos.environ[\"FRONT_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_FRONT\", \"front\"))\nos.environ[\"REDIS_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_REDIS\", \"redis\"))\nos.environ[\"ADMIN_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_ADMIN\", \"admin\"))\nos.environ[\"ANTISPAM_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_ANTISPAM\", \"antispam:11334\"))\nif os.environ[\"WEBMAIL\"] != \"none\":\n os.environ[\"WEBMAIL_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_WEBMAIL\", \"webmail\"))\n\nfor dovecot_file in 
glob.glob(\"/conf/*.conf\"):\n conf.jinja(dovecot_file, os.environ, os.path.join(\"/etc/dovecot\", os.path.basename(dovecot_file)))\n\n# Run Podop, then postfix\nmultiprocessing.Process(target=start_podop).start()\nos.system(\"chown mail:mail /mail\")\nos.system(\"chown -R mail:mail /var/lib/dovecot /conf\")\nos.execv(\"/usr/sbin/dovecot\", [\"dovecot\", \"-c\", \"/etc/dovecot/dovecot.conf\", \"-F\"])\n", "path": "core/dovecot/start.py"}, {"content": "#!/usr/bin/python3\n\nimport os\nimport glob\nimport logging as log\nimport sys\nfrom socrate import system, conf\n\nlog.basicConfig(stream=sys.stderr, level=os.environ.get(\"LOG_LEVEL\", \"WARNING\"))\n\n# Actual startup script\n\nos.environ[\"FRONT_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_FRONT\", \"front\"))\n\nif \"HOST_REDIS\" not in os.environ:\n os.environ[\"REDIS_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_REDIS\", \"redis\"))\n\nif os.environ.get(\"ANTIVIRUS\") == 'clamav':\n os.environ[\"ANTIVIRUS_ADDRESS\"] = system.resolve_address(os.environ.get(\"HOST_ANTIVIRUS\", \"antivirus:3310\"))\n\nfor rspamd_file in glob.glob(\"/conf/*\"):\n conf.jinja(rspamd_file, os.environ, os.path.join(\"/etc/rspamd/local.d\", os.path.basename(rspamd_file)))\n\n# Run rspamd\nos.execv(\"/usr/sbin/rspamd\", [\"rspamd\", \"-i\", \"-f\"])\n", "path": "services/rspamd/start.py"}]}
| 1,208 | 401 |
gh_patches_debug_15349
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-4248
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `applications/Chat/coati/models/base/actor.py`
Content:
```
1 from typing import Optional
2
3 import torch
4 import torch.nn as nn
5
6 from ..lora import LoRAModule
7
8
9 class Actor(LoRAModule):
10 """
11 Actor model base class.
12
13 Args:
14 model (nn.Module): Actor Model.
15 lora_rank (int): LoRA rank.
16 lora_train_bias (str): LoRA bias training mode.
17 """
18
19 def __init__(self, model: nn.Module, lora_rank: int = 0, lora_train_bias: str = 'none') -> None:
20 super().__init__(lora_rank=lora_rank, lora_train_bias=lora_train_bias)
21 self.model = model
22 self.convert_to_lora()
23
24 def forward(self,
25 input_ids: torch.LongTensor,
26 attention_mask: Optional[torch.Tensor] = None,
27 **model_kwargs, # HACK: `generate` method may pass more kwargs
28 ) -> torch.Tensor:
29 """Returns model output.
30 """
31 output = self.model(
32 input_ids,
33 attention_mask=attention_mask,
34 **model_kwargs
35 )
36 return output
37
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/applications/Chat/coati/models/base/actor.py b/applications/Chat/coati/models/base/actor.py
--- a/applications/Chat/coati/models/base/actor.py
+++ b/applications/Chat/coati/models/base/actor.py
@@ -21,16 +21,13 @@
self.model = model
self.convert_to_lora()
- def forward(self,
- input_ids: torch.LongTensor,
- attention_mask: Optional[torch.Tensor] = None,
- **model_kwargs, # HACK: `generate` method may pass more kwargs
- ) -> torch.Tensor:
+ def forward(
+ self,
+ input_ids: torch.LongTensor,
+ attention_mask: Optional[torch.Tensor] = None,
+ **model_kwargs, # HACK: `generate` method may pass more kwargs
+ ) -> torch.Tensor:
"""Returns model output.
"""
- output = self.model(
- input_ids,
- attention_mask=attention_mask,
- **model_kwargs
- )
+ output = self.model(input_ids, attention_mask=attention_mask, **model_kwargs)
return output
|
{"golden_diff": "diff --git a/applications/Chat/coati/models/base/actor.py b/applications/Chat/coati/models/base/actor.py\n--- a/applications/Chat/coati/models/base/actor.py\n+++ b/applications/Chat/coati/models/base/actor.py\n@@ -21,16 +21,13 @@\n self.model = model\n self.convert_to_lora()\n \n- def forward(self,\n- input_ids: torch.LongTensor,\n- attention_mask: Optional[torch.Tensor] = None,\n- **model_kwargs, # HACK: `generate` method may pass more kwargs\n- ) -> torch.Tensor:\n+ def forward(\n+ self,\n+ input_ids: torch.LongTensor,\n+ attention_mask: Optional[torch.Tensor] = None,\n+ **model_kwargs, # HACK: `generate` method may pass more kwargs\n+ ) -> torch.Tensor:\n \"\"\"Returns model output.\n \"\"\"\n- output = self.model(\n- input_ids,\n- attention_mask=attention_mask,\n- **model_kwargs\n- )\n+ output = self.model(input_ids, attention_mask=attention_mask, **model_kwargs)\n return output\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from typing import Optional\n\nimport torch\nimport torch.nn as nn\n\nfrom ..lora import LoRAModule\n\n\nclass Actor(LoRAModule):\n \"\"\"\n Actor model base class.\n\n Args:\n model (nn.Module): Actor Model.\n lora_rank (int): LoRA rank.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self, model: nn.Module, lora_rank: int = 0, lora_train_bias: str = 'none') -> None:\n super().__init__(lora_rank=lora_rank, lora_train_bias=lora_train_bias)\n self.model = model\n self.convert_to_lora()\n\n def forward(self,\n input_ids: torch.LongTensor,\n attention_mask: Optional[torch.Tensor] = None,\n **model_kwargs, # HACK: `generate` method may pass more kwargs\n ) -> torch.Tensor:\n \"\"\"Returns model output.\n \"\"\"\n output = self.model(\n input_ids,\n attention_mask=attention_mask,\n **model_kwargs\n )\n return output\n", "path": "applications/Chat/coati/models/base/actor.py"}], "after_files": [{"content": "from typing import Optional\n\nimport torch\nimport torch.nn as nn\n\nfrom ..lora import LoRAModule\n\n\nclass Actor(LoRAModule):\n \"\"\"\n Actor model base class.\n\n Args:\n model (nn.Module): Actor Model.\n lora_rank (int): LoRA rank.\n lora_train_bias (str): LoRA bias training mode.\n \"\"\"\n\n def __init__(self, model: nn.Module, lora_rank: int = 0, lora_train_bias: str = 'none') -> None:\n super().__init__(lora_rank=lora_rank, lora_train_bias=lora_train_bias)\n self.model = model\n self.convert_to_lora()\n\n def forward(\n self,\n input_ids: torch.LongTensor,\n attention_mask: Optional[torch.Tensor] = None,\n **model_kwargs, # HACK: `generate` method may pass more kwargs\n ) -> torch.Tensor:\n \"\"\"Returns model output.\n \"\"\"\n output = self.model(input_ids, attention_mask=attention_mask, **model_kwargs)\n return output\n", "path": "applications/Chat/coati/models/base/actor.py"}]}
| 590 | 255 |
gh_patches_debug_31496
|
rasdani/github-patches
|
git_diff
|
cisagov__manage.get.gov-247
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Save a new domain application on form submission
_Outcome_: On "save" press, save any form changes to the Application model.
Relies on #221
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/registrar/forms/application_wizard.py`
Content:
```
1 """Forms Wizard for creating a new domain application."""
2
3 import logging
4
5 from django import forms
6
7 from django.contrib.auth.mixins import LoginRequiredMixin
8
9 from formtools.wizard.views import NamedUrlSessionWizardView # type: ignore
10
11
12 logger = logging.getLogger(__name__)
13
14
15 class OrganizationForm(forms.Form):
16 organization_type = forms.ChoiceField(
17 required=True,
18 choices=[
19 ("Federal", "Federal: a federal agency"),
20 ("Interstate", "Interstate: an organization of two or more states"),
21 (
22 "State_or_Territory",
23 (
24 "State or Territory: One of the 50 U.S. states, the District of "
25 "Columbia, American Samoa, Guam, Northern Mariana Islands, "
26 "Puerto Rico, or the U.S. Virgin Islands"
27 ),
28 ),
29 (
30 "Tribal",
31 (
32 "Tribal: a tribal government recognized by the federal or "
33 "state government"
34 ),
35 ),
36 ("County", "County: a county, parish, or borough"),
37 ("City", "City: a city, town, township, village, etc."),
38 (
39 "Special_District",
40 "Special District: an independent organization within a single state",
41 ),
42 ],
43 widget=forms.RadioSelect,
44 )
45 federal_type = forms.ChoiceField(
46 required=False,
47 choices=[
48 ("Executive", "Executive"),
49 ("Judicial", "Judicial"),
50 ("Legislative", "Legislative"),
51 ],
52 widget=forms.RadioSelect,
53 )
54 is_election_board = forms.ChoiceField(
55 required=False,
56 choices=[
57 ("Yes", "Yes"),
58 ("No", "No"),
59 ],
60 widget=forms.RadioSelect,
61 )
62
63
64 class ContactForm(forms.Form):
65 organization_name = forms.CharField(label="Organization Name")
66 street_address = forms.CharField(label="Street address")
67
68
69 # List of forms in our wizard. Each entry is a tuple of a name and a form
70 # subclass
71 FORMS = [
72 ("organization", OrganizationForm),
73 ("contact", ContactForm),
74 ]
75
76 # Dict to match up the right template with the right step. Keys here must
77 # match the first elements of the tuples in FORMS
78 TEMPLATES = {
79 "organization": "application_organization.html",
80 "contact": "application_contact.html",
81 }
82
83 # We need to pass our page titles as context to the templates, indexed
84 # by the step names
85 TITLES = {
86 "organization": "About your organization",
87 "contact": "Your organization's contact information",
88 }
89
90
91 class ApplicationWizard(LoginRequiredMixin, NamedUrlSessionWizardView):
92
93 """Multi-page form ("wizard") for new domain applications.
94
95 This sets up a sequence of forms that gather information for new
96 domain applications. Each form in the sequence has its own URL and
97 the progress through the form is stored in the Django session (thus
98 "NamedUrlSessionWizardView").
99 """
100
101 form_list = FORMS
102
103 def get_template_names(self):
104 """Template for the current step.
105
106 The return is a singleton list.
107 """
108 return [TEMPLATES[self.steps.current]]
109
110 def get_context_data(self, form, **kwargs):
111 """Add title information to the context for all steps."""
112 context = super().get_context_data(form=form, **kwargs)
113 context["form_titles"] = TITLES
114 return context
115
116 def done(self, form_list, **kwargs):
117 logger.info("Application form submitted.")
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/registrar/forms/application_wizard.py b/src/registrar/forms/application_wizard.py
--- a/src/registrar/forms/application_wizard.py
+++ b/src/registrar/forms/application_wizard.py
@@ -3,11 +3,14 @@
import logging
from django import forms
+from django.shortcuts import redirect
from django.contrib.auth.mixins import LoginRequiredMixin
from formtools.wizard.views import NamedUrlSessionWizardView # type: ignore
+from registrar.models import DomainApplication, Website
+
logger = logging.getLogger(__name__)
@@ -113,5 +116,33 @@
context["form_titles"] = TITLES
return context
- def done(self, form_list, **kwargs):
- logger.info("Application form submitted.")
+ def forms_to_object(self, form_dict: dict) -> DomainApplication:
+ """Unpack the form responses onto the model object properties."""
+ application = DomainApplication.objects.create(creator=self.request.user)
+
+ # organization information
+ organization_data = form_dict["organization"].cleaned_data
+ application.organization_type = organization_data["organization_type"]
+ application.federal_branch = organization_data["federal_type"]
+ application.is_election_office = organization_data["is_election_board"]
+
+ # contact information
+ contact_data = form_dict["contact"].cleaned_data
+ application.organization_name = contact_data["organization_name"]
+ application.street_address = contact_data["street_address"]
+ # TODO: add the rest of these fields when they are created in the forms
+
+ # This isn't really the requested_domain field
+ # but we need something in this field to make the form submittable
+ requested_site, _ = Website.objects.get_or_create(
+ website=contact_data["organization_name"] + ".gov"
+ )
+ application.requested_domain = requested_site
+ return application
+
+ def done(self, form_list, form_dict, **kwargs):
+ application = self.forms_to_object(form_dict)
+ application.submit() # change the status to submitted
+ application.save()
+ logger.debug("Application object saved:", application.id)
+ return redirect("home")
|
{"golden_diff": "diff --git a/src/registrar/forms/application_wizard.py b/src/registrar/forms/application_wizard.py\n--- a/src/registrar/forms/application_wizard.py\n+++ b/src/registrar/forms/application_wizard.py\n@@ -3,11 +3,14 @@\n import logging\n \n from django import forms\n+from django.shortcuts import redirect\n \n from django.contrib.auth.mixins import LoginRequiredMixin\n \n from formtools.wizard.views import NamedUrlSessionWizardView # type: ignore\n \n+from registrar.models import DomainApplication, Website\n+\n \n logger = logging.getLogger(__name__)\n \n@@ -113,5 +116,33 @@\n context[\"form_titles\"] = TITLES\n return context\n \n- def done(self, form_list, **kwargs):\n- logger.info(\"Application form submitted.\")\n+ def forms_to_object(self, form_dict: dict) -> DomainApplication:\n+ \"\"\"Unpack the form responses onto the model object properties.\"\"\"\n+ application = DomainApplication.objects.create(creator=self.request.user)\n+\n+ # organization information\n+ organization_data = form_dict[\"organization\"].cleaned_data\n+ application.organization_type = organization_data[\"organization_type\"]\n+ application.federal_branch = organization_data[\"federal_type\"]\n+ application.is_election_office = organization_data[\"is_election_board\"]\n+\n+ # contact information\n+ contact_data = form_dict[\"contact\"].cleaned_data\n+ application.organization_name = contact_data[\"organization_name\"]\n+ application.street_address = contact_data[\"street_address\"]\n+ # TODO: add the rest of these fields when they are created in the forms\n+\n+ # This isn't really the requested_domain field\n+ # but we need something in this field to make the form submittable\n+ requested_site, _ = Website.objects.get_or_create(\n+ website=contact_data[\"organization_name\"] + \".gov\"\n+ )\n+ application.requested_domain = requested_site\n+ return application\n+\n+ def done(self, form_list, form_dict, **kwargs):\n+ application = self.forms_to_object(form_dict)\n+ application.submit() # change the status to submitted\n+ application.save()\n+ logger.debug(\"Application object saved:\", application.id)\n+ return redirect(\"home\")\n", "issue": "Save a new domain application on form submission\n_Outcome_: On \"save\" press, save any form changes to the Application model. \n\nRelies on #221 \n", "before_files": [{"content": "\"\"\"Forms Wizard for creating a new domain application.\"\"\"\n\nimport logging\n\nfrom django import forms\n\nfrom django.contrib.auth.mixins import LoginRequiredMixin\n\nfrom formtools.wizard.views import NamedUrlSessionWizardView # type: ignore\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass OrganizationForm(forms.Form):\n organization_type = forms.ChoiceField(\n required=True,\n choices=[\n (\"Federal\", \"Federal: a federal agency\"),\n (\"Interstate\", \"Interstate: an organization of two or more states\"),\n (\n \"State_or_Territory\",\n (\n \"State or Territory: One of the 50 U.S. states, the District of \"\n \"Columbia, American Samoa, Guam, Northern Mariana Islands, \"\n \"Puerto Rico, or the U.S. 
Virgin Islands\"\n ),\n ),\n (\n \"Tribal\",\n (\n \"Tribal: a tribal government recognized by the federal or \"\n \"state government\"\n ),\n ),\n (\"County\", \"County: a county, parish, or borough\"),\n (\"City\", \"City: a city, town, township, village, etc.\"),\n (\n \"Special_District\",\n \"Special District: an independent organization within a single state\",\n ),\n ],\n widget=forms.RadioSelect,\n )\n federal_type = forms.ChoiceField(\n required=False,\n choices=[\n (\"Executive\", \"Executive\"),\n (\"Judicial\", \"Judicial\"),\n (\"Legislative\", \"Legislative\"),\n ],\n widget=forms.RadioSelect,\n )\n is_election_board = forms.ChoiceField(\n required=False,\n choices=[\n (\"Yes\", \"Yes\"),\n (\"No\", \"No\"),\n ],\n widget=forms.RadioSelect,\n )\n\n\nclass ContactForm(forms.Form):\n organization_name = forms.CharField(label=\"Organization Name\")\n street_address = forms.CharField(label=\"Street address\")\n\n\n# List of forms in our wizard. Each entry is a tuple of a name and a form\n# subclass\nFORMS = [\n (\"organization\", OrganizationForm),\n (\"contact\", ContactForm),\n]\n\n# Dict to match up the right template with the right step. Keys here must\n# match the first elements of the tuples in FORMS\nTEMPLATES = {\n \"organization\": \"application_organization.html\",\n \"contact\": \"application_contact.html\",\n}\n\n# We need to pass our page titles as context to the templates, indexed\n# by the step names\nTITLES = {\n \"organization\": \"About your organization\",\n \"contact\": \"Your organization's contact information\",\n}\n\n\nclass ApplicationWizard(LoginRequiredMixin, NamedUrlSessionWizardView):\n\n \"\"\"Multi-page form (\"wizard\") for new domain applications.\n\n This sets up a sequence of forms that gather information for new\n domain applications. Each form in the sequence has its own URL and\n the progress through the form is stored in the Django session (thus\n \"NamedUrlSessionWizardView\").\n \"\"\"\n\n form_list = FORMS\n\n def get_template_names(self):\n \"\"\"Template for the current step.\n\n The return is a singleton list.\n \"\"\"\n return [TEMPLATES[self.steps.current]]\n\n def get_context_data(self, form, **kwargs):\n \"\"\"Add title information to the context for all steps.\"\"\"\n context = super().get_context_data(form=form, **kwargs)\n context[\"form_titles\"] = TITLES\n return context\n\n def done(self, form_list, **kwargs):\n logger.info(\"Application form submitted.\")\n", "path": "src/registrar/forms/application_wizard.py"}], "after_files": [{"content": "\"\"\"Forms Wizard for creating a new domain application.\"\"\"\n\nimport logging\n\nfrom django import forms\nfrom django.shortcuts import redirect\n\nfrom django.contrib.auth.mixins import LoginRequiredMixin\n\nfrom formtools.wizard.views import NamedUrlSessionWizardView # type: ignore\n\nfrom registrar.models import DomainApplication, Website\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass OrganizationForm(forms.Form):\n organization_type = forms.ChoiceField(\n required=True,\n choices=[\n (\"Federal\", \"Federal: a federal agency\"),\n (\"Interstate\", \"Interstate: an organization of two or more states\"),\n (\n \"State_or_Territory\",\n (\n \"State or Territory: One of the 50 U.S. states, the District of \"\n \"Columbia, American Samoa, Guam, Northern Mariana Islands, \"\n \"Puerto Rico, or the U.S. 
Virgin Islands\"\n ),\n ),\n (\n \"Tribal\",\n (\n \"Tribal: a tribal government recognized by the federal or \"\n \"state government\"\n ),\n ),\n (\"County\", \"County: a county, parish, or borough\"),\n (\"City\", \"City: a city, town, township, village, etc.\"),\n (\n \"Special_District\",\n \"Special District: an independent organization within a single state\",\n ),\n ],\n widget=forms.RadioSelect,\n )\n federal_type = forms.ChoiceField(\n required=False,\n choices=[\n (\"Executive\", \"Executive\"),\n (\"Judicial\", \"Judicial\"),\n (\"Legislative\", \"Legislative\"),\n ],\n widget=forms.RadioSelect,\n )\n is_election_board = forms.ChoiceField(\n required=False,\n choices=[\n (\"Yes\", \"Yes\"),\n (\"No\", \"No\"),\n ],\n widget=forms.RadioSelect,\n )\n\n\nclass ContactForm(forms.Form):\n organization_name = forms.CharField(label=\"Organization Name\")\n street_address = forms.CharField(label=\"Street address\")\n\n\n# List of forms in our wizard. Each entry is a tuple of a name and a form\n# subclass\nFORMS = [\n (\"organization\", OrganizationForm),\n (\"contact\", ContactForm),\n]\n\n# Dict to match up the right template with the right step. Keys here must\n# match the first elements of the tuples in FORMS\nTEMPLATES = {\n \"organization\": \"application_organization.html\",\n \"contact\": \"application_contact.html\",\n}\n\n# We need to pass our page titles as context to the templates, indexed\n# by the step names\nTITLES = {\n \"organization\": \"About your organization\",\n \"contact\": \"Your organization's contact information\",\n}\n\n\nclass ApplicationWizard(LoginRequiredMixin, NamedUrlSessionWizardView):\n\n \"\"\"Multi-page form (\"wizard\") for new domain applications.\n\n This sets up a sequence of forms that gather information for new\n domain applications. 
Each form in the sequence has its own URL and\n the progress through the form is stored in the Django session (thus\n \"NamedUrlSessionWizardView\").\n \"\"\"\n\n form_list = FORMS\n\n def get_template_names(self):\n \"\"\"Template for the current step.\n\n The return is a singleton list.\n \"\"\"\n return [TEMPLATES[self.steps.current]]\n\n def get_context_data(self, form, **kwargs):\n \"\"\"Add title information to the context for all steps.\"\"\"\n context = super().get_context_data(form=form, **kwargs)\n context[\"form_titles\"] = TITLES\n return context\n\n def forms_to_object(self, form_dict: dict) -> DomainApplication:\n \"\"\"Unpack the form responses onto the model object properties.\"\"\"\n application = DomainApplication.objects.create(creator=self.request.user)\n\n # organization information\n organization_data = form_dict[\"organization\"].cleaned_data\n application.organization_type = organization_data[\"organization_type\"]\n application.federal_branch = organization_data[\"federal_type\"]\n application.is_election_office = organization_data[\"is_election_board\"]\n\n # contact information\n contact_data = form_dict[\"contact\"].cleaned_data\n application.organization_name = contact_data[\"organization_name\"]\n application.street_address = contact_data[\"street_address\"]\n # TODO: add the rest of these fields when they are created in the forms\n\n # This isn't really the requested_domain field\n # but we need something in this field to make the form submittable\n requested_site, _ = Website.objects.get_or_create(\n website=contact_data[\"organization_name\"] + \".gov\"\n )\n application.requested_domain = requested_site\n return application\n\n def done(self, form_list, form_dict, **kwargs):\n application = self.forms_to_object(form_dict)\n application.submit() # change the status to submitted\n application.save()\n logger.debug(\"Application object saved:\", application.id)\n return redirect(\"home\")\n", "path": "src/registrar/forms/application_wizard.py"}]}
| 1,298 | 482 |
gh_patches_debug_38213
|
rasdani/github-patches
|
git_diff
|
mlflow__mlflow-7757
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FR] Allow override of gunicorn app entrypoint for MLflow server
### Willingness to contribute
Yes. I can contribute this feature independently.
### Proposal Summary
The MLflow server executes gunicorn with a hard-coded gunicorn application CLI parameter, `mlflow.server:app` ([reference](https://github.com/mlflow/mlflow/blob/master/mlflow/server/__init__.py#L106)). This prevents easy customization of the MLflow server Flask app, e.g. adding `before_request` hooks.
### Motivation
> #### What is the use case for this feature?
The goal is to add a `before_request` hook that performs authorization by checking the JWT authentication header provided by an AWS application load balancer
> #### Why is this use case valuable to support for MLflow users in general?
In order to allow extra flexibility for self-hosted MLflow users to customize the mlflow Flask server with pre/post request hooks
> #### Why is this use case valuable to support for your project(s) or organization?
This would simplify the logic for authorization of users of the MLflow server
> #### Why is it currently difficult to achieve this use case?
Because MLflow server doesn't natively allow custom pre/post process request hooks for the tracking webserver component
### Details
It would be desirable that the aforementioned gunicorn app entrypoint can be overridden by a custom value (either via --gunicorn-opts or an additional `mlflow server` cli option)
Ideally one would be able to write a custom script like this:
```python
# customapp.py
from flask import request
from mlflow.server import app
@app.before_request
def hook():
print(request.path)
```
and run:
`mlflow server ... --gunicorn-app customapp:app`
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [X] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [ ] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mlflow/server/__init__.py`
Content:
```
1 import os
2 import shlex
3 import sys
4 import textwrap
5
6 from flask import Flask, send_from_directory, Response
7
8 from mlflow.server import handlers
9 from mlflow.server.handlers import (
10 get_artifact_handler,
11 get_metric_history_bulk_handler,
12 STATIC_PREFIX_ENV_VAR,
13 _add_static_prefix,
14 get_model_version_artifact_handler,
15 )
16 from mlflow.utils.process import _exec_cmd
17 from mlflow.version import VERSION
18
19 # NB: These are internal environment variables used for communication between
20 # the cli and the forked gunicorn processes.
21 BACKEND_STORE_URI_ENV_VAR = "_MLFLOW_SERVER_FILE_STORE"
22 REGISTRY_STORE_URI_ENV_VAR = "_MLFLOW_SERVER_REGISTRY_STORE"
23 ARTIFACT_ROOT_ENV_VAR = "_MLFLOW_SERVER_ARTIFACT_ROOT"
24 ARTIFACTS_DESTINATION_ENV_VAR = "_MLFLOW_SERVER_ARTIFACT_DESTINATION"
25 PROMETHEUS_EXPORTER_ENV_VAR = "prometheus_multiproc_dir"
26 SERVE_ARTIFACTS_ENV_VAR = "_MLFLOW_SERVER_SERVE_ARTIFACTS"
27 ARTIFACTS_ONLY_ENV_VAR = "_MLFLOW_SERVER_ARTIFACTS_ONLY"
28
29 REL_STATIC_DIR = "js/build"
30
31 app = Flask(__name__, static_folder=REL_STATIC_DIR)
32 STATIC_DIR = os.path.join(app.root_path, REL_STATIC_DIR)
33
34
35 for http_path, handler, methods in handlers.get_endpoints():
36 app.add_url_rule(http_path, handler.__name__, handler, methods=methods)
37
38 if os.getenv(PROMETHEUS_EXPORTER_ENV_VAR):
39 from mlflow.server.prometheus_exporter import activate_prometheus_exporter
40
41 prometheus_metrics_path = os.getenv(PROMETHEUS_EXPORTER_ENV_VAR)
42 if not os.path.exists(prometheus_metrics_path):
43 os.makedirs(prometheus_metrics_path)
44 activate_prometheus_exporter(app)
45
46
47 # Provide a health check endpoint to ensure the application is responsive
48 @app.route("/health")
49 def health():
50 return "OK", 200
51
52
53 # Provide an endpoint to query the version of mlflow running on the server
54 @app.route("/version")
55 def version():
56 return VERSION, 200
57
58
59 # Serve the "get-artifact" route.
60 @app.route(_add_static_prefix("/get-artifact"))
61 def serve_artifacts():
62 return get_artifact_handler()
63
64
65 # Serve the "model-versions/get-artifact" route.
66 @app.route(_add_static_prefix("/model-versions/get-artifact"))
67 def serve_model_version_artifact():
68 return get_model_version_artifact_handler()
69
70
71 # Serve the "metrics/get-history-bulk" route.
72 @app.route(_add_static_prefix("/ajax-api/2.0/mlflow/metrics/get-history-bulk"))
73 def serve_get_metric_history_bulk():
74 return get_metric_history_bulk_handler()
75
76
77 # We expect the react app to be built assuming it is hosted at /static-files, so that requests for
78 # CSS/JS resources will be made to e.g. /static-files/main.css and we can handle them here.
79 @app.route(_add_static_prefix("/static-files/<path:path>"))
80 def serve_static_file(path):
81 return send_from_directory(STATIC_DIR, path)
82
83
84 # Serve the index.html for the React App for all other routes.
85 @app.route(_add_static_prefix("/"))
86 def serve():
87 if os.path.exists(os.path.join(STATIC_DIR, "index.html")):
88 return send_from_directory(STATIC_DIR, "index.html")
89
90 text = textwrap.dedent(
91 """
92 Unable to display MLflow UI - landing page (index.html) not found.
93
94 You are very likely running the MLflow server using a source installation of the Python MLflow
95 package.
96
97 If you are a developer making MLflow source code changes and intentionally running a source
98 installation of MLflow, you can view the UI by running the Javascript dev server:
99 https://github.com/mlflow/mlflow/blob/master/CONTRIBUTING.md#running-the-javascript-dev-server
100
101 Otherwise, uninstall MLflow via 'pip uninstall mlflow', reinstall an official MLflow release
102 from PyPI via 'pip install mlflow', and rerun the MLflow server.
103 """
104 )
105 return Response(text, mimetype="text/plain")
106
107
108 def _build_waitress_command(waitress_opts, host, port):
109 opts = shlex.split(waitress_opts) if waitress_opts else []
110 return (
111 ["waitress-serve"]
112 + opts
113 + ["--host=%s" % host, "--port=%s" % port, "--ident=mlflow", "mlflow.server:app"]
114 )
115
116
117 def _build_gunicorn_command(gunicorn_opts, host, port, workers):
118 bind_address = f"{host}:{port}"
119 opts = shlex.split(gunicorn_opts) if gunicorn_opts else []
120 return ["gunicorn"] + opts + ["-b", bind_address, "-w", "%s" % workers, "mlflow.server:app"]
121
122
123 def _run_server(
124 file_store_path,
125 registry_store_uri,
126 default_artifact_root,
127 serve_artifacts,
128 artifacts_only,
129 artifacts_destination,
130 host,
131 port,
132 static_prefix=None,
133 workers=None,
134 gunicorn_opts=None,
135 waitress_opts=None,
136 expose_prometheus=None,
137 ):
138 """
139 Run the MLflow server, wrapping it in gunicorn or waitress on windows
140 :param static_prefix: If set, the index.html asset will be served from the path static_prefix.
141 If left None, the index.html asset will be served from the root path.
142 :return: None
143 """
144 env_map = {}
145 if file_store_path:
146 env_map[BACKEND_STORE_URI_ENV_VAR] = file_store_path
147 if registry_store_uri:
148 env_map[REGISTRY_STORE_URI_ENV_VAR] = registry_store_uri
149 if default_artifact_root:
150 env_map[ARTIFACT_ROOT_ENV_VAR] = default_artifact_root
151 if serve_artifacts:
152 env_map[SERVE_ARTIFACTS_ENV_VAR] = "true"
153 if artifacts_only:
154 env_map[ARTIFACTS_ONLY_ENV_VAR] = "true"
155 if artifacts_destination:
156 env_map[ARTIFACTS_DESTINATION_ENV_VAR] = artifacts_destination
157 if static_prefix:
158 env_map[STATIC_PREFIX_ENV_VAR] = static_prefix
159
160 if expose_prometheus:
161 env_map[PROMETHEUS_EXPORTER_ENV_VAR] = expose_prometheus
162
163 # TODO: eventually may want waitress on non-win32
164 if sys.platform == "win32":
165 full_command = _build_waitress_command(waitress_opts, host, port)
166 else:
167 full_command = _build_gunicorn_command(gunicorn_opts, host, port, workers or 4)
168 _exec_cmd(full_command, extra_env=env_map, capture_output=False)
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mlflow/server/__init__.py b/mlflow/server/__init__.py
--- a/mlflow/server/__init__.py
+++ b/mlflow/server/__init__.py
@@ -1,3 +1,4 @@
+import entrypoints
import os
import shlex
import sys
@@ -5,6 +6,7 @@
from flask import Flask, send_from_directory, Response
+from mlflow.exceptions import MlflowException
from mlflow.server import handlers
from mlflow.server.handlers import (
get_artifact_handler,
@@ -105,19 +107,37 @@
return Response(text, mimetype="text/plain")
-def _build_waitress_command(waitress_opts, host, port):
+def _get_app_name() -> str:
+ """Search for plugins for custom mlflow app, otherwise return default."""
+ apps = list(entrypoints.get_group_all("mlflow.app"))
+ # Default, nothing installed
+ if len(apps) == 0:
+ return f"{__name__}:app"
+ # Cannot install more than one
+ if len(apps) > 1:
+ raise MlflowException(
+ "Multiple server plugins detected. "
+ "Only one server plugin may be installed. "
+ f"Detected plugins: {', '.join([f'{a.module_name}.{a.object_name}' for a in apps])}"
+ )
+ # Has a plugin installed
+ plugin_app = apps[0]
+ return f"{plugin_app.module_name}:{plugin_app.object_name}"
+
+
+def _build_waitress_command(waitress_opts, host, port, app_name):
opts = shlex.split(waitress_opts) if waitress_opts else []
return (
["waitress-serve"]
+ opts
- + ["--host=%s" % host, "--port=%s" % port, "--ident=mlflow", "mlflow.server:app"]
+ + ["--host=%s" % host, "--port=%s" % port, "--ident=mlflow", app_name]
)
-def _build_gunicorn_command(gunicorn_opts, host, port, workers):
+def _build_gunicorn_command(gunicorn_opts, host, port, workers, app_name):
bind_address = f"{host}:{port}"
opts = shlex.split(gunicorn_opts) if gunicorn_opts else []
- return ["gunicorn"] + opts + ["-b", bind_address, "-w", "%s" % workers, "mlflow.server:app"]
+ return ["gunicorn"] + opts + ["-b", bind_address, "-w", "%s" % workers, app_name]
def _run_server(
@@ -160,9 +180,10 @@
if expose_prometheus:
env_map[PROMETHEUS_EXPORTER_ENV_VAR] = expose_prometheus
+ app_name = _get_app_name()
# TODO: eventually may want waitress on non-win32
if sys.platform == "win32":
- full_command = _build_waitress_command(waitress_opts, host, port)
+ full_command = _build_waitress_command(waitress_opts, host, port, app_name)
else:
- full_command = _build_gunicorn_command(gunicorn_opts, host, port, workers or 4)
+ full_command = _build_gunicorn_command(gunicorn_opts, host, port, workers or 4, app_name)
_exec_cmd(full_command, extra_env=env_map, capture_output=False)
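
For context, `_get_app_name()` above resolves the Flask app through the `mlflow.app` entry-point group, so a separate package could expose a customized app roughly as follows (the package and module names here are made up for illustration; only the `mlflow.app` group name comes from the diff):

```python
# setup.py of a hypothetical plugin package wrapping the MLflow Flask app
from setuptools import setup

setup(
    name="mlflow-custom-app",        # hypothetical distribution name
    py_modules=["customapp"],        # customapp.py defines `app`, as in the issue example
    install_requires=["mlflow"],
    entry_points={
        # group looked up by entrypoints.get_group_all("mlflow.app") in the diff
        "mlflow.app": ["custom-app = customapp:app"],
    },
)
```

With such a package installed, `_get_app_name()` would return `"customapp:app"`, and that string is what gunicorn or waitress gets handed instead of the hard-coded `mlflow.server:app`.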
|
{"golden_diff": "diff --git a/mlflow/server/__init__.py b/mlflow/server/__init__.py\n--- a/mlflow/server/__init__.py\n+++ b/mlflow/server/__init__.py\n@@ -1,3 +1,4 @@\n+import entrypoints\n import os\n import shlex\n import sys\n@@ -5,6 +6,7 @@\n \n from flask import Flask, send_from_directory, Response\n \n+from mlflow.exceptions import MlflowException\n from mlflow.server import handlers\n from mlflow.server.handlers import (\n get_artifact_handler,\n@@ -105,19 +107,37 @@\n return Response(text, mimetype=\"text/plain\")\n \n \n-def _build_waitress_command(waitress_opts, host, port):\n+def _get_app_name() -> str:\n+ \"\"\"Search for plugins for custom mlflow app, otherwise return default.\"\"\"\n+ apps = list(entrypoints.get_group_all(\"mlflow.app\"))\n+ # Default, nothing installed\n+ if len(apps) == 0:\n+ return f\"{__name__}:app\"\n+ # Cannot install more than one\n+ if len(apps) > 1:\n+ raise MlflowException(\n+ \"Multiple server plugins detected. \"\n+ \"Only one server plugin may be installed. \"\n+ f\"Detected plugins: {', '.join([f'{a.module_name}.{a.object_name}' for a in apps])}\"\n+ )\n+ # Has a plugin installed\n+ plugin_app = apps[0]\n+ return f\"{plugin_app.module_name}:{plugin_app.object_name}\"\n+\n+\n+def _build_waitress_command(waitress_opts, host, port, app_name):\n opts = shlex.split(waitress_opts) if waitress_opts else []\n return (\n [\"waitress-serve\"]\n + opts\n- + [\"--host=%s\" % host, \"--port=%s\" % port, \"--ident=mlflow\", \"mlflow.server:app\"]\n+ + [\"--host=%s\" % host, \"--port=%s\" % port, \"--ident=mlflow\", app_name]\n )\n \n \n-def _build_gunicorn_command(gunicorn_opts, host, port, workers):\n+def _build_gunicorn_command(gunicorn_opts, host, port, workers, app_name):\n bind_address = f\"{host}:{port}\"\n opts = shlex.split(gunicorn_opts) if gunicorn_opts else []\n- return [\"gunicorn\"] + opts + [\"-b\", bind_address, \"-w\", \"%s\" % workers, \"mlflow.server:app\"]\n+ return [\"gunicorn\"] + opts + [\"-b\", bind_address, \"-w\", \"%s\" % workers, app_name]\n \n \n def _run_server(\n@@ -160,9 +180,10 @@\n if expose_prometheus:\n env_map[PROMETHEUS_EXPORTER_ENV_VAR] = expose_prometheus\n \n+ app_name = _get_app_name()\n # TODO: eventually may want waitress on non-win32\n if sys.platform == \"win32\":\n- full_command = _build_waitress_command(waitress_opts, host, port)\n+ full_command = _build_waitress_command(waitress_opts, host, port, app_name)\n else:\n- full_command = _build_gunicorn_command(gunicorn_opts, host, port, workers or 4)\n+ full_command = _build_gunicorn_command(gunicorn_opts, host, port, workers or 4, app_name)\n _exec_cmd(full_command, extra_env=env_map, capture_output=False)\n", "issue": "[FR] Allow override of gunicorn app entrypoint for MLflow server\n### Willingness to contribute\r\n\r\nYes. I can contribute this feature independently.\r\n\r\n### Proposal Summary\r\n\r\nThe MLflow server executes gunicorn with a hard-coded value `mlflow.server:app` ([reference](https://github.com/mlflow/mlflow/blob/master/mlflow/server/__init__.py#L106)) gunicorn application cli parameter. This prevents easy customization of the MLflow server Flask app i.e. 
adding `bofore_request` hooks.\r\n\r\n### Motivation\r\n\r\n> #### What is the use case for this feature?\r\nThe goal is adding a `before_request` hook to perform authorization by checking JWT authenication header provided by an AWS application load balancer \r\n\r\n> #### Why is this use case valuable to support for MLflow users in general?\r\nIn order to allow extra flexibility for self-hosted MLflow users to customize the mlflow Flask server with pre/post request hooks \r\n\r\n> #### Why is this use case valuable to support for your project(s) or organization?\r\nThis would simplify the logic for authorization of users of the MLflow server\r\n\r\n> #### Why is it currently difficult to achieve this use case?\r\nBecause MLflow server doesn't natively allow custom pre/post process request hooks for the tracking webserver component\r\n\r\n\r\n### Details\r\n\r\nIt would be desirable that the aforementioned gunicorn app entrypoint can be overridden by a custom value (either via --gunicorn-opts or an additional `mlflow server` cli option)\r\n\r\nIdeally one would be able to write a custom script as this:\r\n\r\n```python\r\n# customapp.py\r\n\r\nfrom flask import request\r\nfrom mlflow.server import app\r\n\r\[email protected]_request\r\ndef hook():\r\n print(request.path)\r\n```\r\nand run:\r\n`mlflow server ... --gunicorn-app customapp:app`\r\n\r\n\r\n### What component(s) does this bug affect?\r\n\r\n- [ ] `area/artifacts`: Artifact stores and artifact logging\r\n- [ ] `area/build`: Build and test infrastructure for MLflow\r\n- [ ] `area/docs`: MLflow documentation pages\r\n- [ ] `area/examples`: Example code\r\n- [ ] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry\r\n- [ ] `area/models`: MLmodel format, model serialization/deserialization, flavors\r\n- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates\r\n- [ ] `area/projects`: MLproject format, project running backends\r\n- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs\r\n- [ ] `area/server-infra`: MLflow Tracking server backend\r\n- [X] `area/tracking`: Tracking Service, tracking client APIs, autologging\r\n\r\n### What interface(s) does this bug affect?\r\n\r\n- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server\r\n- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models\r\n- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry\r\n- [ ] `area/windows`: Windows support\r\n\r\n### What language(s) does this bug affect?\r\n\r\n- [ ] `language/r`: R APIs and clients\r\n- [ ] `language/java`: Java APIs and clients\r\n- [ ] `language/new`: Proposals for new client languages\r\n\r\n### What integration(s) does this bug affect?\r\n\r\n- [ ] `integrations/azure`: Azure and Azure ML integrations\r\n- [ ] `integrations/sagemaker`: SageMaker integrations\r\n- [ ] `integrations/databricks`: Databricks integrations\n", "before_files": [{"content": "import os\nimport shlex\nimport sys\nimport textwrap\n\nfrom flask import Flask, send_from_directory, Response\n\nfrom mlflow.server import handlers\nfrom mlflow.server.handlers import (\n get_artifact_handler,\n get_metric_history_bulk_handler,\n STATIC_PREFIX_ENV_VAR,\n _add_static_prefix,\n get_model_version_artifact_handler,\n)\nfrom mlflow.utils.process import _exec_cmd\nfrom mlflow.version import VERSION\n\n# NB: These are internal environment variables used for communication 
between\n# the cli and the forked gunicorn processes.\nBACKEND_STORE_URI_ENV_VAR = \"_MLFLOW_SERVER_FILE_STORE\"\nREGISTRY_STORE_URI_ENV_VAR = \"_MLFLOW_SERVER_REGISTRY_STORE\"\nARTIFACT_ROOT_ENV_VAR = \"_MLFLOW_SERVER_ARTIFACT_ROOT\"\nARTIFACTS_DESTINATION_ENV_VAR = \"_MLFLOW_SERVER_ARTIFACT_DESTINATION\"\nPROMETHEUS_EXPORTER_ENV_VAR = \"prometheus_multiproc_dir\"\nSERVE_ARTIFACTS_ENV_VAR = \"_MLFLOW_SERVER_SERVE_ARTIFACTS\"\nARTIFACTS_ONLY_ENV_VAR = \"_MLFLOW_SERVER_ARTIFACTS_ONLY\"\n\nREL_STATIC_DIR = \"js/build\"\n\napp = Flask(__name__, static_folder=REL_STATIC_DIR)\nSTATIC_DIR = os.path.join(app.root_path, REL_STATIC_DIR)\n\n\nfor http_path, handler, methods in handlers.get_endpoints():\n app.add_url_rule(http_path, handler.__name__, handler, methods=methods)\n\nif os.getenv(PROMETHEUS_EXPORTER_ENV_VAR):\n from mlflow.server.prometheus_exporter import activate_prometheus_exporter\n\n prometheus_metrics_path = os.getenv(PROMETHEUS_EXPORTER_ENV_VAR)\n if not os.path.exists(prometheus_metrics_path):\n os.makedirs(prometheus_metrics_path)\n activate_prometheus_exporter(app)\n\n\n# Provide a health check endpoint to ensure the application is responsive\[email protected](\"/health\")\ndef health():\n return \"OK\", 200\n\n\n# Provide an endpoint to query the version of mlflow running on the server\[email protected](\"/version\")\ndef version():\n return VERSION, 200\n\n\n# Serve the \"get-artifact\" route.\[email protected](_add_static_prefix(\"/get-artifact\"))\ndef serve_artifacts():\n return get_artifact_handler()\n\n\n# Serve the \"model-versions/get-artifact\" route.\[email protected](_add_static_prefix(\"/model-versions/get-artifact\"))\ndef serve_model_version_artifact():\n return get_model_version_artifact_handler()\n\n\n# Serve the \"metrics/get-history-bulk\" route.\[email protected](_add_static_prefix(\"/ajax-api/2.0/mlflow/metrics/get-history-bulk\"))\ndef serve_get_metric_history_bulk():\n return get_metric_history_bulk_handler()\n\n\n# We expect the react app to be built assuming it is hosted at /static-files, so that requests for\n# CSS/JS resources will be made to e.g. 
/static-files/main.css and we can handle them here.\[email protected](_add_static_prefix(\"/static-files/<path:path>\"))\ndef serve_static_file(path):\n return send_from_directory(STATIC_DIR, path)\n\n\n# Serve the index.html for the React App for all other routes.\[email protected](_add_static_prefix(\"/\"))\ndef serve():\n if os.path.exists(os.path.join(STATIC_DIR, \"index.html\")):\n return send_from_directory(STATIC_DIR, \"index.html\")\n\n text = textwrap.dedent(\n \"\"\"\n Unable to display MLflow UI - landing page (index.html) not found.\n\n You are very likely running the MLflow server using a source installation of the Python MLflow\n package.\n\n If you are a developer making MLflow source code changes and intentionally running a source\n installation of MLflow, you can view the UI by running the Javascript dev server:\n https://github.com/mlflow/mlflow/blob/master/CONTRIBUTING.md#running-the-javascript-dev-server\n\n Otherwise, uninstall MLflow via 'pip uninstall mlflow', reinstall an official MLflow release\n from PyPI via 'pip install mlflow', and rerun the MLflow server.\n \"\"\"\n )\n return Response(text, mimetype=\"text/plain\")\n\n\ndef _build_waitress_command(waitress_opts, host, port):\n opts = shlex.split(waitress_opts) if waitress_opts else []\n return (\n [\"waitress-serve\"]\n + opts\n + [\"--host=%s\" % host, \"--port=%s\" % port, \"--ident=mlflow\", \"mlflow.server:app\"]\n )\n\n\ndef _build_gunicorn_command(gunicorn_opts, host, port, workers):\n bind_address = f\"{host}:{port}\"\n opts = shlex.split(gunicorn_opts) if gunicorn_opts else []\n return [\"gunicorn\"] + opts + [\"-b\", bind_address, \"-w\", \"%s\" % workers, \"mlflow.server:app\"]\n\n\ndef _run_server(\n file_store_path,\n registry_store_uri,\n default_artifact_root,\n serve_artifacts,\n artifacts_only,\n artifacts_destination,\n host,\n port,\n static_prefix=None,\n workers=None,\n gunicorn_opts=None,\n waitress_opts=None,\n expose_prometheus=None,\n):\n \"\"\"\n Run the MLflow server, wrapping it in gunicorn or waitress on windows\n :param static_prefix: If set, the index.html asset will be served from the path static_prefix.\n If left None, the index.html asset will be served from the root path.\n :return: None\n \"\"\"\n env_map = {}\n if file_store_path:\n env_map[BACKEND_STORE_URI_ENV_VAR] = file_store_path\n if registry_store_uri:\n env_map[REGISTRY_STORE_URI_ENV_VAR] = registry_store_uri\n if default_artifact_root:\n env_map[ARTIFACT_ROOT_ENV_VAR] = default_artifact_root\n if serve_artifacts:\n env_map[SERVE_ARTIFACTS_ENV_VAR] = \"true\"\n if artifacts_only:\n env_map[ARTIFACTS_ONLY_ENV_VAR] = \"true\"\n if artifacts_destination:\n env_map[ARTIFACTS_DESTINATION_ENV_VAR] = artifacts_destination\n if static_prefix:\n env_map[STATIC_PREFIX_ENV_VAR] = static_prefix\n\n if expose_prometheus:\n env_map[PROMETHEUS_EXPORTER_ENV_VAR] = expose_prometheus\n\n # TODO: eventually may want waitress on non-win32\n if sys.platform == \"win32\":\n full_command = _build_waitress_command(waitress_opts, host, port)\n else:\n full_command = _build_gunicorn_command(gunicorn_opts, host, port, workers or 4)\n _exec_cmd(full_command, extra_env=env_map, capture_output=False)\n", "path": "mlflow/server/__init__.py"}], "after_files": [{"content": "import entrypoints\nimport os\nimport shlex\nimport sys\nimport textwrap\n\nfrom flask import Flask, send_from_directory, Response\n\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.server import handlers\nfrom mlflow.server.handlers import (\n get_artifact_handler,\n 
get_metric_history_bulk_handler,\n STATIC_PREFIX_ENV_VAR,\n _add_static_prefix,\n get_model_version_artifact_handler,\n)\nfrom mlflow.utils.process import _exec_cmd\nfrom mlflow.version import VERSION\n\n# NB: These are internal environment variables used for communication between\n# the cli and the forked gunicorn processes.\nBACKEND_STORE_URI_ENV_VAR = \"_MLFLOW_SERVER_FILE_STORE\"\nREGISTRY_STORE_URI_ENV_VAR = \"_MLFLOW_SERVER_REGISTRY_STORE\"\nARTIFACT_ROOT_ENV_VAR = \"_MLFLOW_SERVER_ARTIFACT_ROOT\"\nARTIFACTS_DESTINATION_ENV_VAR = \"_MLFLOW_SERVER_ARTIFACT_DESTINATION\"\nPROMETHEUS_EXPORTER_ENV_VAR = \"prometheus_multiproc_dir\"\nSERVE_ARTIFACTS_ENV_VAR = \"_MLFLOW_SERVER_SERVE_ARTIFACTS\"\nARTIFACTS_ONLY_ENV_VAR = \"_MLFLOW_SERVER_ARTIFACTS_ONLY\"\n\nREL_STATIC_DIR = \"js/build\"\n\napp = Flask(__name__, static_folder=REL_STATIC_DIR)\nSTATIC_DIR = os.path.join(app.root_path, REL_STATIC_DIR)\n\n\nfor http_path, handler, methods in handlers.get_endpoints():\n app.add_url_rule(http_path, handler.__name__, handler, methods=methods)\n\nif os.getenv(PROMETHEUS_EXPORTER_ENV_VAR):\n from mlflow.server.prometheus_exporter import activate_prometheus_exporter\n\n prometheus_metrics_path = os.getenv(PROMETHEUS_EXPORTER_ENV_VAR)\n if not os.path.exists(prometheus_metrics_path):\n os.makedirs(prometheus_metrics_path)\n activate_prometheus_exporter(app)\n\n\n# Provide a health check endpoint to ensure the application is responsive\[email protected](\"/health\")\ndef health():\n return \"OK\", 200\n\n\n# Provide an endpoint to query the version of mlflow running on the server\[email protected](\"/version\")\ndef version():\n return VERSION, 200\n\n\n# Serve the \"get-artifact\" route.\[email protected](_add_static_prefix(\"/get-artifact\"))\ndef serve_artifacts():\n return get_artifact_handler()\n\n\n# Serve the \"model-versions/get-artifact\" route.\[email protected](_add_static_prefix(\"/model-versions/get-artifact\"))\ndef serve_model_version_artifact():\n return get_model_version_artifact_handler()\n\n\n# Serve the \"metrics/get-history-bulk\" route.\[email protected](_add_static_prefix(\"/ajax-api/2.0/mlflow/metrics/get-history-bulk\"))\ndef serve_get_metric_history_bulk():\n return get_metric_history_bulk_handler()\n\n\n# We expect the react app to be built assuming it is hosted at /static-files, so that requests for\n# CSS/JS resources will be made to e.g. 
/static-files/main.css and we can handle them here.\[email protected](_add_static_prefix(\"/static-files/<path:path>\"))\ndef serve_static_file(path):\n return send_from_directory(STATIC_DIR, path)\n\n\n# Serve the index.html for the React App for all other routes.\[email protected](_add_static_prefix(\"/\"))\ndef serve():\n if os.path.exists(os.path.join(STATIC_DIR, \"index.html\")):\n return send_from_directory(STATIC_DIR, \"index.html\")\n\n text = textwrap.dedent(\n \"\"\"\n Unable to display MLflow UI - landing page (index.html) not found.\n\n You are very likely running the MLflow server using a source installation of the Python MLflow\n package.\n\n If you are a developer making MLflow source code changes and intentionally running a source\n installation of MLflow, you can view the UI by running the Javascript dev server:\n https://github.com/mlflow/mlflow/blob/master/CONTRIBUTING.md#running-the-javascript-dev-server\n\n Otherwise, uninstall MLflow via 'pip uninstall mlflow', reinstall an official MLflow release\n from PyPI via 'pip install mlflow', and rerun the MLflow server.\n \"\"\"\n )\n return Response(text, mimetype=\"text/plain\")\n\n\ndef _get_app_name() -> str:\n \"\"\"Search for plugins for custom mlflow app, otherwise return default.\"\"\"\n apps = list(entrypoints.get_group_all(\"mlflow.app\"))\n # Default, nothing installed\n if len(apps) == 0:\n return f\"{__name__}:app\"\n # Cannot install more than one\n if len(apps) > 1:\n raise MlflowException(\n \"Multiple server plugins detected. \"\n \"Only one server plugin may be installed. \"\n f\"Detected plugins: {', '.join([f'{a.module_name}.{a.object_name}' for a in apps])}\"\n )\n # Has a plugin installed\n plugin_app = apps[0]\n return f\"{plugin_app.module_name}:{plugin_app.object_name}\"\n\n\ndef _build_waitress_command(waitress_opts, host, port, app_name):\n opts = shlex.split(waitress_opts) if waitress_opts else []\n return (\n [\"waitress-serve\"]\n + opts\n + [\"--host=%s\" % host, \"--port=%s\" % port, \"--ident=mlflow\", app_name]\n )\n\n\ndef _build_gunicorn_command(gunicorn_opts, host, port, workers, app_name):\n bind_address = f\"{host}:{port}\"\n opts = shlex.split(gunicorn_opts) if gunicorn_opts else []\n return [\"gunicorn\"] + opts + [\"-b\", bind_address, \"-w\", \"%s\" % workers, app_name]\n\n\ndef _run_server(\n file_store_path,\n registry_store_uri,\n default_artifact_root,\n serve_artifacts,\n artifacts_only,\n artifacts_destination,\n host,\n port,\n static_prefix=None,\n workers=None,\n gunicorn_opts=None,\n waitress_opts=None,\n expose_prometheus=None,\n):\n \"\"\"\n Run the MLflow server, wrapping it in gunicorn or waitress on windows\n :param static_prefix: If set, the index.html asset will be served from the path static_prefix.\n If left None, the index.html asset will be served from the root path.\n :return: None\n \"\"\"\n env_map = {}\n if file_store_path:\n env_map[BACKEND_STORE_URI_ENV_VAR] = file_store_path\n if registry_store_uri:\n env_map[REGISTRY_STORE_URI_ENV_VAR] = registry_store_uri\n if default_artifact_root:\n env_map[ARTIFACT_ROOT_ENV_VAR] = default_artifact_root\n if serve_artifacts:\n env_map[SERVE_ARTIFACTS_ENV_VAR] = \"true\"\n if artifacts_only:\n env_map[ARTIFACTS_ONLY_ENV_VAR] = \"true\"\n if artifacts_destination:\n env_map[ARTIFACTS_DESTINATION_ENV_VAR] = artifacts_destination\n if static_prefix:\n env_map[STATIC_PREFIX_ENV_VAR] = static_prefix\n\n if expose_prometheus:\n env_map[PROMETHEUS_EXPORTER_ENV_VAR] = expose_prometheus\n\n app_name = _get_app_name()\n # TODO: 
eventually may want waitress on non-win32\n if sys.platform == \"win32\":\n full_command = _build_waitress_command(waitress_opts, host, port, app_name)\n else:\n full_command = _build_gunicorn_command(gunicorn_opts, host, port, workers or 4, app_name)\n _exec_cmd(full_command, extra_env=env_map, capture_output=False)\n", "path": "mlflow/server/__init__.py"}]}
| 2,872 | 768 |
gh_patches_debug_5444
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-1726
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`dacite` library isn't compatible with strawberry-graphql after 0.69.0 (unhashable type 'StrawberryOptional' problem)
I have a regression issue with StrawberryOptional (breaking changes in 0.69.0).
When I use the dacite library to convert a dict into a strawberry.type class instance, it works in the 0.67.x versions.
```
from typing import List, Optional

import strawberry
from dacite import from_dict
from strawberry.types import Info
@strawberry.type
class Tag:
tag_id: int = None
category: str = None
images: List[str] = None
@strawberry.field()
def image(self, info: Info) -> Optional[str]:
...
from_dict(data_class=Tag, data=dict(tag_id=34, category="song", images=["a", "b"]))
# same as Tag(tag_id=34, category="song", images=["a", "b"])
```
The `dacite.from_dict` method converts `dict(**attributes)` to `Tag(tag_id=~, ...)` automatically.
It makes ORM model instance <-> Strawberry type instance conversion easy.
After upgrading, it raises an unhashable type: 'StrawberryOptional' error because of the Optional[str] type.
Can we make `StrawberryOptional` and `StrawberryList` hashable types so that dacite.from_dict can be used?
Or is there any alternative way to convert a dict to a strawberry.type instance?
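
One possible direction (sketched here only as an illustration, not necessarily the fix the maintainers would choose) is to redeclare `__hash__` on the container base class: in Python, defining `__eq__` without `__hash__` sets `__hash__` to None, which is exactly what makes `StrawberryOptional` and `StrawberryList` unhashable. A minimal, self-contained demo of the pattern, using generic names rather than strawberry's real classes:

```python
# Illustration of the Python rule behind the error and one way around it.
class Base:
    def __hash__(self) -> int:
        return id(self)  # same idea as StrawberryType.__hash__ in strawberry/type.py

class Container(Base):
    def __init__(self, of_type):
        self.of_type = of_type

    def __eq__(self, other) -> bool:
        return isinstance(other, Container) and self.of_type == other.of_type

    # Without this line, defining __eq__ above makes Container unhashable:
    __hash__ = Base.__hash__

print(hash(Container(str)))  # no "unhashable type" error
```

Applying the same pattern to `StrawberryContainer` (reusing `StrawberryType.__hash__`) would presumably let libraries like dacite hash `StrawberryOptional` again.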
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `strawberry/type.py`
Content:
```
1 from __future__ import annotations
2
3 from abc import ABC, abstractmethod
4 from typing import TYPE_CHECKING, List, Mapping, TypeVar, Union
5
6
7 if TYPE_CHECKING:
8 from .types.types import TypeDefinition
9
10
11 class StrawberryType(ABC):
12 @property
13 def type_params(self) -> List[TypeVar]:
14 return []
15
16 @abstractmethod
17 def copy_with(
18 self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]
19 ) -> Union[StrawberryType, type]:
20 raise NotImplementedError()
21
22 @property
23 @abstractmethod
24 def is_generic(self) -> bool:
25 raise NotImplementedError()
26
27 def __eq__(self, other: object) -> bool:
28 from strawberry.annotation import StrawberryAnnotation
29
30 if isinstance(other, StrawberryType):
31 return self is other
32
33 elif isinstance(other, StrawberryAnnotation):
34 return self == other.resolve()
35
36 else:
37 # This could be simplified if StrawberryAnnotation.resolve() always returned
38 # a StrawberryType
39 resolved = StrawberryAnnotation(other).resolve()
40 if isinstance(resolved, StrawberryType):
41 return self == resolved
42 else:
43 return NotImplemented
44
45 def __hash__(self) -> int:
46 # TODO: Is this a bad idea? __eq__ objects are supposed to have the same hash
47 return id(self)
48
49
50 class StrawberryContainer(StrawberryType):
51 def __init__(self, of_type: Union[StrawberryType, type]):
52 self.of_type = of_type
53
54 def __eq__(self, other: object) -> bool:
55 if isinstance(other, StrawberryType):
56 if isinstance(other, StrawberryContainer):
57 return self.of_type == other.of_type
58 else:
59 return False
60
61 return super().__eq__(other)
62
63 @property
64 def type_params(self) -> List[TypeVar]:
65 if hasattr(self.of_type, "_type_definition"):
66 parameters = getattr(self.of_type, "__parameters__", None)
67
68 return list(parameters) if parameters else []
69
70 elif isinstance(self.of_type, StrawberryType):
71 return self.of_type.type_params
72
73 else:
74 return []
75
76 def copy_with(
77 self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]
78 ) -> StrawberryType:
79 of_type_copy: Union[StrawberryType, type]
80
81 # TODO: Obsolete with StrawberryObject
82 if hasattr(self.of_type, "_type_definition"):
83 type_definition: TypeDefinition = (
84 self.of_type._type_definition # type: ignore
85 )
86
87 if type_definition.is_generic:
88 of_type_copy = type_definition.copy_with(type_var_map)
89
90 elif isinstance(self.of_type, StrawberryType) and self.of_type.is_generic:
91 of_type_copy = self.of_type.copy_with(type_var_map)
92
93 assert of_type_copy
94
95 return type(self)(of_type_copy)
96
97 @property
98 def is_generic(self) -> bool:
99 # TODO: Obsolete with StrawberryObject
100 type_ = self.of_type
101 if hasattr(self.of_type, "_type_definition"):
102 type_ = self.of_type._type_definition # type: ignore
103
104 if isinstance(type_, StrawberryType):
105 return type_.is_generic
106
107 return False
108
109
110 class StrawberryList(StrawberryContainer):
111 ...
112
113
114 class StrawberryOptional(StrawberryContainer):
115 ...
116
117
118 class StrawberryTypeVar(StrawberryType):
119 def __init__(self, type_var: TypeVar):
120 self.type_var = type_var
121
122 def copy_with(
123 self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]
124 ) -> Union[StrawberryType, type]:
125 return type_var_map[self.type_var]
126
127 @property
128 def is_generic(self) -> bool:
129 return True
130
131 @property
132 def type_params(self) -> List[TypeVar]:
133 return [self.type_var]
134
135 def __eq__(self, other) -> bool:
136 if isinstance(other, StrawberryTypeVar):
137 return self.type_var == other.type_var
138 if isinstance(other, TypeVar):
139 return self.type_var == other
140
141 return super().__eq__(other)
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/strawberry/type.py b/strawberry/type.py
--- a/strawberry/type.py
+++ b/strawberry/type.py
@@ -51,6 +51,9 @@
def __init__(self, of_type: Union[StrawberryType, type]):
self.of_type = of_type
+ def __hash__(self) -> int:
+ return hash((self.__class__, self.of_type))
+
def __eq__(self, other: object) -> bool:
if isinstance(other, StrawberryType):
if isinstance(other, StrawberryContainer):
|
{"golden_diff": "diff --git a/strawberry/type.py b/strawberry/type.py\n--- a/strawberry/type.py\n+++ b/strawberry/type.py\n@@ -51,6 +51,9 @@\n def __init__(self, of_type: Union[StrawberryType, type]):\n self.of_type = of_type\n \n+ def __hash__(self) -> int:\n+ return hash((self.__class__, self.of_type))\n+\n def __eq__(self, other: object) -> bool:\n if isinstance(other, StrawberryType):\n if isinstance(other, StrawberryContainer):\n", "issue": "`dacite` library doesn't compatible after 0.69.0 strawberry-graphql. (unhasable type 'StrawberryOptional' problem)\nI have a regression issue with StrawberryOptional (breaking changes in 0.69.0) \r\nWhen I use dacite library to convert dict to strawberry.type class instance, it works in 0.67.x version. \r\n\r\n```\r\nfrom dacite import from_dict\r\n\r\[email protected]\r\nclass Tag:\r\n tag_id: int = None\r\n category: str = None\r\n images: List[str] = None\r\n\r\n @strawberry.field()\r\n def image(self, info: Info) -> Optional[str]:\r\n ...\r\n\r\nfrom_dict(data_class=Tag, data=dict(tag_id=34, category=\"song\", images=[\"a\", \"b\"]))\r\n// same with Tag(tag_id=34, category=\"song\", images=[\"a\", \"b\"]) \r\n```\r\n\r\n`dacite.from_dict` method convert `dict(**attributes)` to `Tag(tag_id=~, ...)` automatically. \r\nIt makes ORM model instance <-> Strawberry type instance conversion easily. \r\n\r\nAfter upgrade, it occur an error unhashable type: 'StrawberryOptional' because of Optional[str] type. \r\n\r\nCan we make `StrawberryOptional`, `StrawberryList` to be hashable type to use dacite.from_dict ? \r\nOr is there any alternative way to convert dict to strawberry.type instance?\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom typing import TYPE_CHECKING, List, Mapping, TypeVar, Union\n\n\nif TYPE_CHECKING:\n from .types.types import TypeDefinition\n\n\nclass StrawberryType(ABC):\n @property\n def type_params(self) -> List[TypeVar]:\n return []\n\n @abstractmethod\n def copy_with(\n self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]\n ) -> Union[StrawberryType, type]:\n raise NotImplementedError()\n\n @property\n @abstractmethod\n def is_generic(self) -> bool:\n raise NotImplementedError()\n\n def __eq__(self, other: object) -> bool:\n from strawberry.annotation import StrawberryAnnotation\n\n if isinstance(other, StrawberryType):\n return self is other\n\n elif isinstance(other, StrawberryAnnotation):\n return self == other.resolve()\n\n else:\n # This could be simplified if StrawberryAnnotation.resolve() always returned\n # a StrawberryType\n resolved = StrawberryAnnotation(other).resolve()\n if isinstance(resolved, StrawberryType):\n return self == resolved\n else:\n return NotImplemented\n\n def __hash__(self) -> int:\n # TODO: Is this a bad idea? 
__eq__ objects are supposed to have the same hash\n return id(self)\n\n\nclass StrawberryContainer(StrawberryType):\n def __init__(self, of_type: Union[StrawberryType, type]):\n self.of_type = of_type\n\n def __eq__(self, other: object) -> bool:\n if isinstance(other, StrawberryType):\n if isinstance(other, StrawberryContainer):\n return self.of_type == other.of_type\n else:\n return False\n\n return super().__eq__(other)\n\n @property\n def type_params(self) -> List[TypeVar]:\n if hasattr(self.of_type, \"_type_definition\"):\n parameters = getattr(self.of_type, \"__parameters__\", None)\n\n return list(parameters) if parameters else []\n\n elif isinstance(self.of_type, StrawberryType):\n return self.of_type.type_params\n\n else:\n return []\n\n def copy_with(\n self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]\n ) -> StrawberryType:\n of_type_copy: Union[StrawberryType, type]\n\n # TODO: Obsolete with StrawberryObject\n if hasattr(self.of_type, \"_type_definition\"):\n type_definition: TypeDefinition = (\n self.of_type._type_definition # type: ignore\n )\n\n if type_definition.is_generic:\n of_type_copy = type_definition.copy_with(type_var_map)\n\n elif isinstance(self.of_type, StrawberryType) and self.of_type.is_generic:\n of_type_copy = self.of_type.copy_with(type_var_map)\n\n assert of_type_copy\n\n return type(self)(of_type_copy)\n\n @property\n def is_generic(self) -> bool:\n # TODO: Obsolete with StrawberryObject\n type_ = self.of_type\n if hasattr(self.of_type, \"_type_definition\"):\n type_ = self.of_type._type_definition # type: ignore\n\n if isinstance(type_, StrawberryType):\n return type_.is_generic\n\n return False\n\n\nclass StrawberryList(StrawberryContainer):\n ...\n\n\nclass StrawberryOptional(StrawberryContainer):\n ...\n\n\nclass StrawberryTypeVar(StrawberryType):\n def __init__(self, type_var: TypeVar):\n self.type_var = type_var\n\n def copy_with(\n self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]\n ) -> Union[StrawberryType, type]:\n return type_var_map[self.type_var]\n\n @property\n def is_generic(self) -> bool:\n return True\n\n @property\n def type_params(self) -> List[TypeVar]:\n return [self.type_var]\n\n def __eq__(self, other) -> bool:\n if isinstance(other, StrawberryTypeVar):\n return self.type_var == other.type_var\n if isinstance(other, TypeVar):\n return self.type_var == other\n\n return super().__eq__(other)\n", "path": "strawberry/type.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom abc import ABC, abstractmethod\nfrom typing import TYPE_CHECKING, List, Mapping, TypeVar, Union\n\n\nif TYPE_CHECKING:\n from .types.types import TypeDefinition\n\n\nclass StrawberryType(ABC):\n @property\n def type_params(self) -> List[TypeVar]:\n return []\n\n @abstractmethod\n def copy_with(\n self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]\n ) -> Union[StrawberryType, type]:\n raise NotImplementedError()\n\n @property\n @abstractmethod\n def is_generic(self) -> bool:\n raise NotImplementedError()\n\n def __eq__(self, other: object) -> bool:\n from strawberry.annotation import StrawberryAnnotation\n\n if isinstance(other, StrawberryType):\n return self is other\n\n elif isinstance(other, StrawberryAnnotation):\n return self == other.resolve()\n\n else:\n # This could be simplified if StrawberryAnnotation.resolve() always returned\n # a StrawberryType\n resolved = StrawberryAnnotation(other).resolve()\n if isinstance(resolved, StrawberryType):\n return self == resolved\n else:\n return 
NotImplemented\n\n def __hash__(self) -> int:\n # TODO: Is this a bad idea? __eq__ objects are supposed to have the same hash\n return id(self)\n\n\nclass StrawberryContainer(StrawberryType):\n def __init__(self, of_type: Union[StrawberryType, type]):\n self.of_type = of_type\n\n def __hash__(self) -> int:\n return hash((self.__class__, self.of_type))\n\n def __eq__(self, other: object) -> bool:\n if isinstance(other, StrawberryType):\n if isinstance(other, StrawberryContainer):\n return self.of_type == other.of_type\n else:\n return False\n\n return super().__eq__(other)\n\n @property\n def type_params(self) -> List[TypeVar]:\n if hasattr(self.of_type, \"_type_definition\"):\n parameters = getattr(self.of_type, \"__parameters__\", None)\n\n return list(parameters) if parameters else []\n\n elif isinstance(self.of_type, StrawberryType):\n return self.of_type.type_params\n\n else:\n return []\n\n def copy_with(\n self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]\n ) -> StrawberryType:\n of_type_copy: Union[StrawberryType, type]\n\n # TODO: Obsolete with StrawberryObject\n if hasattr(self.of_type, \"_type_definition\"):\n type_definition: TypeDefinition = (\n self.of_type._type_definition # type: ignore\n )\n\n if type_definition.is_generic:\n of_type_copy = type_definition.copy_with(type_var_map)\n\n elif isinstance(self.of_type, StrawberryType) and self.of_type.is_generic:\n of_type_copy = self.of_type.copy_with(type_var_map)\n\n assert of_type_copy\n\n return type(self)(of_type_copy)\n\n @property\n def is_generic(self) -> bool:\n # TODO: Obsolete with StrawberryObject\n type_ = self.of_type\n if hasattr(self.of_type, \"_type_definition\"):\n type_ = self.of_type._type_definition # type: ignore\n\n if isinstance(type_, StrawberryType):\n return type_.is_generic\n\n return False\n\n\nclass StrawberryList(StrawberryContainer):\n ...\n\n\nclass StrawberryOptional(StrawberryContainer):\n ...\n\n\nclass StrawberryTypeVar(StrawberryType):\n def __init__(self, type_var: TypeVar):\n self.type_var = type_var\n\n def copy_with(\n self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]\n ) -> Union[StrawberryType, type]:\n return type_var_map[self.type_var]\n\n @property\n def is_generic(self) -> bool:\n return True\n\n @property\n def type_params(self) -> List[TypeVar]:\n return [self.type_var]\n\n def __eq__(self, other) -> bool:\n if isinstance(other, StrawberryTypeVar):\n return self.type_var == other.type_var\n if isinstance(other, TypeVar):\n return self.type_var == other\n\n return super().__eq__(other)\n", "path": "strawberry/type.py"}]}
| 1,791 | 130 |
gh_patches_debug_37020
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-2642
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Convert Values to JSON for prometheus collector
* Fixes #2502
# Convert values to native JSON
Convert values to native JSON instead of string in the Prometheus Collector
* Changes in the list for `_translate_to_prometheus`
Fixes #2502
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
# Checklist:
- [x] Followed the style guidelines of this project
- [x] Changelogs have been updated
- [ ] Unit tests have been added
- [ ] Documentation has been updated
--- END ISSUE ---
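As a standalone illustration, here is a simplified, module-level version of the conversion helper this change introduces (the patch adds it as a `_check_value` method on the collector): non-string sequences are rendered as JSON, everything else falls back to `str`.
```python
from collections.abc import Sequence
from json import dumps


def check_value(value) -> str:
    """Render an attribute value as a Prometheus label string."""
    # Strings are Sequences too, so exclude them before the JSON branch.
    if not isinstance(value, str) and isinstance(value, Sequence):
        return dumps(value, default=str)
    return str(value)


print(check_value(("staging", "eu-west-1")))  # ["staging", "eu-west-1"]
print(check_value(25))                        # 25
print(check_value("staging"))                 # staging
```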
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This library allows export of metrics data to `Prometheus <https://prometheus.io/>`_.
17
18 Usage
19 -----
20
21 The **OpenTelemetry Prometheus Exporter** allows export of `OpenTelemetry`_
22 metrics to `Prometheus`_.
23
24
25 .. _Prometheus: https://prometheus.io/
26 .. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
27
28 .. code:: python
29
30 from prometheus_client import start_http_server
31
32 from opentelemetry._metrics import get_meter_provider, set_meter_provider
33 from opentelemetry.exporter.prometheus import PrometheusMetricReader
34 from opentelemetry.sdk._metrics import MeterProvider
35
36 # Start Prometheus client
37 start_http_server(port=8000, addr="localhost")
38
39 # Exporter to export metrics to Prometheus
40 prefix = "MyAppPrefix"
41 reader = PrometheusMetricReader(prefix)
42
43 # Meter is responsible for creating and recording metrics
44 set_meter_provider(MeterProvider(metric_readers=[reader]))
45 meter = get_meter_provider().get_meter("myapp", "0.1.2")
46
47 counter = meter.create_counter(
48 "requests",
49 "requests",
50 "number of requests",
51 )
52
53 # Labels are used to identify key-values that are associated with a specific
54 # metric that you want to record. These are useful for pre-aggregation and can
55 # be used to store custom dimensions pertaining to a metric
56 labels = {"environment": "staging"}
57
58 counter.add(25, labels)
59 input("Press any key to exit...")
60
61 API
62 ---
63 """
64
65 import collections
66 import logging
67 import re
68 from itertools import chain
69 from typing import Iterable, Optional, Sequence, Tuple
70
71 from prometheus_client import core
72
73 from opentelemetry.sdk._metrics.export import MetricReader
74 from opentelemetry.sdk._metrics.point import Gauge, Histogram, Metric, Sum
75
76 _logger = logging.getLogger(__name__)
77
78
79 def _convert_buckets(metric: Metric) -> Sequence[Tuple[str, int]]:
80 buckets = []
81 total_count = 0
82 for upper_bound, count in zip(
83 chain(metric.point.explicit_bounds, ["+Inf"]),
84 metric.point.bucket_counts,
85 ):
86 total_count += count
87 buckets.append((f"{upper_bound}", total_count))
88
89 return buckets
90
91
92 class PrometheusMetricReader(MetricReader):
93 """Prometheus metric exporter for OpenTelemetry.
94
95 Args:
96 prefix: single-word application prefix relevant to the domain
97 the metric belongs to.
98 """
99
100 def __init__(self, prefix: str = "") -> None:
101 super().__init__()
102 self._collector = _CustomCollector(prefix)
103 core.REGISTRY.register(self._collector)
104 self._collector._callback = self.collect
105
106 def _receive_metrics(self, metrics: Iterable[Metric]) -> None:
107 if metrics is None:
108 return
109 self._collector.add_metrics_data(metrics)
110
111 def shutdown(self) -> bool:
112 core.REGISTRY.unregister(self._collector)
113 return True
114
115
116 class _CustomCollector:
117 """_CustomCollector represents the Prometheus Collector object
118
119 See more:
120 https://github.com/prometheus/client_python#custom-collectors
121 """
122
123 def __init__(self, prefix: str = ""):
124 self._prefix = prefix
125 self._callback = None
126 self._metrics_to_export = collections.deque()
127 self._non_letters_digits_underscore_re = re.compile(
128 r"[^\w]", re.UNICODE | re.IGNORECASE
129 )
130
131 def add_metrics_data(self, export_records: Sequence[Metric]) -> None:
132 """Add metrics to Prometheus data"""
133 self._metrics_to_export.append(export_records)
134
135 def collect(self) -> None:
136 """Collect fetches the metrics from OpenTelemetry
137 and delivers them as Prometheus Metrics.
138 Collect is invoked every time a ``prometheus.Gatherer`` is run
139 for example when the HTTP endpoint is invoked by Prometheus.
140 """
141 if self._callback is not None:
142 self._callback()
143
144 while self._metrics_to_export:
145 for export_record in self._metrics_to_export.popleft():
146 prometheus_metric = self._translate_to_prometheus(
147 export_record
148 )
149 if prometheus_metric is not None:
150 yield prometheus_metric
151
152 def _translate_to_prometheus(
153 self, metric: Metric
154 ) -> Optional[core.Metric]:
155 prometheus_metric = None
156 label_values = []
157 label_keys = []
158 for key, value in metric.attributes.items():
159 label_keys.append(self._sanitize(key))
160 label_values.append(str(value))
161
162 metric_name = ""
163 if self._prefix != "":
164 metric_name = self._prefix + "_"
165 metric_name += self._sanitize(metric.name)
166
167 description = metric.description or ""
168 if isinstance(metric.point, Sum):
169 prometheus_metric = core.CounterMetricFamily(
170 name=metric_name,
171 documentation=description,
172 labels=label_keys,
173 unit=metric.unit,
174 )
175 prometheus_metric.add_metric(
176 labels=label_values, value=metric.point.value
177 )
178 elif isinstance(metric.point, Gauge):
179 prometheus_metric = core.GaugeMetricFamily(
180 name=metric_name,
181 documentation=description,
182 labels=label_keys,
183 unit=metric.unit,
184 )
185 prometheus_metric.add_metric(
186 labels=label_values, value=metric.point.value
187 )
188 elif isinstance(metric.point, Histogram):
189 value = metric.point.sum
190 prometheus_metric = core.HistogramMetricFamily(
191 name=metric_name,
192 documentation=description,
193 labels=label_keys,
194 unit=metric.unit,
195 )
196 buckets = _convert_buckets(metric)
197 prometheus_metric.add_metric(
198 labels=label_values, buckets=buckets, sum_value=value
199 )
200 else:
201 _logger.warning("Unsupported metric type. %s", type(metric.point))
202 return prometheus_metric
203
204 def _sanitize(self, key: str) -> str:
205 """sanitize the given metric name or label according to Prometheus rule.
206 Replace all characters other than [A-Za-z0-9_] with '_'.
207 """
208 return self._non_letters_digits_underscore_re.sub("_", key)
209
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py
--- a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py
+++ b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py
@@ -62,18 +62,19 @@
---
"""
-import collections
-import logging
-import re
+from collections import deque
from itertools import chain
-from typing import Iterable, Optional, Sequence, Tuple
+from json import dumps
+from logging import getLogger
+from re import IGNORECASE, UNICODE, compile
+from typing import Iterable, Optional, Sequence, Tuple, Union
from prometheus_client import core
from opentelemetry.sdk._metrics.export import MetricReader
from opentelemetry.sdk._metrics.point import Gauge, Histogram, Metric, Sum
-_logger = logging.getLogger(__name__)
+_logger = getLogger(__name__)
def _convert_buckets(metric: Metric) -> Sequence[Tuple[str, int]]:
@@ -123,9 +124,9 @@
def __init__(self, prefix: str = ""):
self._prefix = prefix
self._callback = None
- self._metrics_to_export = collections.deque()
- self._non_letters_digits_underscore_re = re.compile(
- r"[^\w]", re.UNICODE | re.IGNORECASE
+ self._metrics_to_export = deque()
+ self._non_letters_digits_underscore_re = compile(
+ r"[^\w]", UNICODE | IGNORECASE
)
def add_metrics_data(self, export_records: Sequence[Metric]) -> None:
@@ -157,7 +158,7 @@
label_keys = []
for key, value in metric.attributes.items():
label_keys.append(self._sanitize(key))
- label_values.append(str(value))
+ label_values.append(self._check_value(value))
metric_name = ""
if self._prefix != "":
@@ -206,3 +207,10 @@
Replace all characters other than [A-Za-z0-9_] with '_'.
"""
return self._non_letters_digits_underscore_re.sub("_", key)
+
+ # pylint: disable=no-self-use
+ def _check_value(self, value: Union[int, float, str, Sequence]) -> str:
+ """Check the label value and return is appropriate representation"""
+ if not isinstance(value, str) and isinstance(value, Sequence):
+ return dumps(value, default=str)
+ return str(value)
|
{"golden_diff": "diff --git a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py\n--- a/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py\n+++ b/exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py\n@@ -62,18 +62,19 @@\n ---\n \"\"\"\n \n-import collections\n-import logging\n-import re\n+from collections import deque\n from itertools import chain\n-from typing import Iterable, Optional, Sequence, Tuple\n+from json import dumps\n+from logging import getLogger\n+from re import IGNORECASE, UNICODE, compile\n+from typing import Iterable, Optional, Sequence, Tuple, Union\n \n from prometheus_client import core\n \n from opentelemetry.sdk._metrics.export import MetricReader\n from opentelemetry.sdk._metrics.point import Gauge, Histogram, Metric, Sum\n \n-_logger = logging.getLogger(__name__)\n+_logger = getLogger(__name__)\n \n \n def _convert_buckets(metric: Metric) -> Sequence[Tuple[str, int]]:\n@@ -123,9 +124,9 @@\n def __init__(self, prefix: str = \"\"):\n self._prefix = prefix\n self._callback = None\n- self._metrics_to_export = collections.deque()\n- self._non_letters_digits_underscore_re = re.compile(\n- r\"[^\\w]\", re.UNICODE | re.IGNORECASE\n+ self._metrics_to_export = deque()\n+ self._non_letters_digits_underscore_re = compile(\n+ r\"[^\\w]\", UNICODE | IGNORECASE\n )\n \n def add_metrics_data(self, export_records: Sequence[Metric]) -> None:\n@@ -157,7 +158,7 @@\n label_keys = []\n for key, value in metric.attributes.items():\n label_keys.append(self._sanitize(key))\n- label_values.append(str(value))\n+ label_values.append(self._check_value(value))\n \n metric_name = \"\"\n if self._prefix != \"\":\n@@ -206,3 +207,10 @@\n Replace all characters other than [A-Za-z0-9_] with '_'.\n \"\"\"\n return self._non_letters_digits_underscore_re.sub(\"_\", key)\n+\n+ # pylint: disable=no-self-use\n+ def _check_value(self, value: Union[int, float, str, Sequence]) -> str:\n+ \"\"\"Check the label value and return is appropriate representation\"\"\"\n+ if not isinstance(value, str) and isinstance(value, Sequence):\n+ return dumps(value, default=str)\n+ return str(value)\n", "issue": "Convert Values to JSON for prometheus collector\n* Fixes #2502\r\n\r\n# Convert values to native JSON \r\n\r\nConvert values to native JSON instead of string in the Prometheus Collector\r\n* Changes in the list for `_translate_to_prometheus`\r\n\r\nFixes #2502\r\n\r\n- [ ] Bug fix (non-breaking change which fixes an issue)\r\n- [x] New feature (non-breaking change which adds functionality)\r\n- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)\r\n- [ ] This change requires a documentation update\r\n\r\n# Checklist:\r\n\r\n- [x] Followed the style guidelines of this project\r\n- [x] Changelogs have been updated\r\n- [ ] Unit tests have been added\r\n- [ ] Documentation has been updated\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR 
CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis library allows export of metrics data to `Prometheus <https://prometheus.io/>`_.\n\nUsage\n-----\n\nThe **OpenTelemetry Prometheus Exporter** allows export of `OpenTelemetry`_\nmetrics to `Prometheus`_.\n\n\n.. _Prometheus: https://prometheus.io/\n.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/\n\n.. code:: python\n\n from prometheus_client import start_http_server\n\n from opentelemetry._metrics import get_meter_provider, set_meter_provider\n from opentelemetry.exporter.prometheus import PrometheusMetricReader\n from opentelemetry.sdk._metrics import MeterProvider\n\n # Start Prometheus client\n start_http_server(port=8000, addr=\"localhost\")\n\n # Exporter to export metrics to Prometheus\n prefix = \"MyAppPrefix\"\n reader = PrometheusMetricReader(prefix)\n\n # Meter is responsible for creating and recording metrics\n set_meter_provider(MeterProvider(metric_readers=[reader]))\n meter = get_meter_provider().get_meter(\"myapp\", \"0.1.2\")\n\n counter = meter.create_counter(\n \"requests\",\n \"requests\",\n \"number of requests\",\n )\n\n # Labels are used to identify key-values that are associated with a specific\n # metric that you want to record. These are useful for pre-aggregation and can\n # be used to store custom dimensions pertaining to a metric\n labels = {\"environment\": \"staging\"}\n\n counter.add(25, labels)\n input(\"Press any key to exit...\")\n\nAPI\n---\n\"\"\"\n\nimport collections\nimport logging\nimport re\nfrom itertools import chain\nfrom typing import Iterable, Optional, Sequence, Tuple\n\nfrom prometheus_client import core\n\nfrom opentelemetry.sdk._metrics.export import MetricReader\nfrom opentelemetry.sdk._metrics.point import Gauge, Histogram, Metric, Sum\n\n_logger = logging.getLogger(__name__)\n\n\ndef _convert_buckets(metric: Metric) -> Sequence[Tuple[str, int]]:\n buckets = []\n total_count = 0\n for upper_bound, count in zip(\n chain(metric.point.explicit_bounds, [\"+Inf\"]),\n metric.point.bucket_counts,\n ):\n total_count += count\n buckets.append((f\"{upper_bound}\", total_count))\n\n return buckets\n\n\nclass PrometheusMetricReader(MetricReader):\n \"\"\"Prometheus metric exporter for OpenTelemetry.\n\n Args:\n prefix: single-word application prefix relevant to the domain\n the metric belongs to.\n \"\"\"\n\n def __init__(self, prefix: str = \"\") -> None:\n super().__init__()\n self._collector = _CustomCollector(prefix)\n core.REGISTRY.register(self._collector)\n self._collector._callback = self.collect\n\n def _receive_metrics(self, metrics: Iterable[Metric]) -> None:\n if metrics is None:\n return\n self._collector.add_metrics_data(metrics)\n\n def shutdown(self) -> bool:\n core.REGISTRY.unregister(self._collector)\n return True\n\n\nclass _CustomCollector:\n \"\"\"_CustomCollector represents the Prometheus Collector object\n\n See more:\n https://github.com/prometheus/client_python#custom-collectors\n \"\"\"\n\n def __init__(self, prefix: str = \"\"):\n self._prefix = prefix\n self._callback = None\n self._metrics_to_export = collections.deque()\n self._non_letters_digits_underscore_re = re.compile(\n r\"[^\\w]\", re.UNICODE | re.IGNORECASE\n )\n\n def add_metrics_data(self, export_records: Sequence[Metric]) -> None:\n \"\"\"Add metrics to Prometheus data\"\"\"\n self._metrics_to_export.append(export_records)\n\n def collect(self) -> None:\n 
\"\"\"Collect fetches the metrics from OpenTelemetry\n and delivers them as Prometheus Metrics.\n Collect is invoked every time a ``prometheus.Gatherer`` is run\n for example when the HTTP endpoint is invoked by Prometheus.\n \"\"\"\n if self._callback is not None:\n self._callback()\n\n while self._metrics_to_export:\n for export_record in self._metrics_to_export.popleft():\n prometheus_metric = self._translate_to_prometheus(\n export_record\n )\n if prometheus_metric is not None:\n yield prometheus_metric\n\n def _translate_to_prometheus(\n self, metric: Metric\n ) -> Optional[core.Metric]:\n prometheus_metric = None\n label_values = []\n label_keys = []\n for key, value in metric.attributes.items():\n label_keys.append(self._sanitize(key))\n label_values.append(str(value))\n\n metric_name = \"\"\n if self._prefix != \"\":\n metric_name = self._prefix + \"_\"\n metric_name += self._sanitize(metric.name)\n\n description = metric.description or \"\"\n if isinstance(metric.point, Sum):\n prometheus_metric = core.CounterMetricFamily(\n name=metric_name,\n documentation=description,\n labels=label_keys,\n unit=metric.unit,\n )\n prometheus_metric.add_metric(\n labels=label_values, value=metric.point.value\n )\n elif isinstance(metric.point, Gauge):\n prometheus_metric = core.GaugeMetricFamily(\n name=metric_name,\n documentation=description,\n labels=label_keys,\n unit=metric.unit,\n )\n prometheus_metric.add_metric(\n labels=label_values, value=metric.point.value\n )\n elif isinstance(metric.point, Histogram):\n value = metric.point.sum\n prometheus_metric = core.HistogramMetricFamily(\n name=metric_name,\n documentation=description,\n labels=label_keys,\n unit=metric.unit,\n )\n buckets = _convert_buckets(metric)\n prometheus_metric.add_metric(\n labels=label_values, buckets=buckets, sum_value=value\n )\n else:\n _logger.warning(\"Unsupported metric type. %s\", type(metric.point))\n return prometheus_metric\n\n def _sanitize(self, key: str) -> str:\n \"\"\"sanitize the given metric name or label according to Prometheus rule.\n Replace all characters other than [A-Za-z0-9_] with '_'.\n \"\"\"\n return self._non_letters_digits_underscore_re.sub(\"_\", key)\n", "path": "exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis library allows export of metrics data to `Prometheus <https://prometheus.io/>`_.\n\nUsage\n-----\n\nThe **OpenTelemetry Prometheus Exporter** allows export of `OpenTelemetry`_\nmetrics to `Prometheus`_.\n\n\n.. _Prometheus: https://prometheus.io/\n.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/\n\n.. 
code:: python\n\n from prometheus_client import start_http_server\n\n from opentelemetry._metrics import get_meter_provider, set_meter_provider\n from opentelemetry.exporter.prometheus import PrometheusMetricReader\n from opentelemetry.sdk._metrics import MeterProvider\n\n # Start Prometheus client\n start_http_server(port=8000, addr=\"localhost\")\n\n # Exporter to export metrics to Prometheus\n prefix = \"MyAppPrefix\"\n reader = PrometheusMetricReader(prefix)\n\n # Meter is responsible for creating and recording metrics\n set_meter_provider(MeterProvider(metric_readers=[reader]))\n meter = get_meter_provider().get_meter(\"myapp\", \"0.1.2\")\n\n counter = meter.create_counter(\n \"requests\",\n \"requests\",\n \"number of requests\",\n )\n\n # Labels are used to identify key-values that are associated with a specific\n # metric that you want to record. These are useful for pre-aggregation and can\n # be used to store custom dimensions pertaining to a metric\n labels = {\"environment\": \"staging\"}\n\n counter.add(25, labels)\n input(\"Press any key to exit...\")\n\nAPI\n---\n\"\"\"\n\nfrom collections import deque\nfrom itertools import chain\nfrom json import dumps\nfrom logging import getLogger\nfrom re import IGNORECASE, UNICODE, compile\nfrom typing import Iterable, Optional, Sequence, Tuple, Union\n\nfrom prometheus_client import core\n\nfrom opentelemetry.sdk._metrics.export import MetricReader\nfrom opentelemetry.sdk._metrics.point import Gauge, Histogram, Metric, Sum\n\n_logger = getLogger(__name__)\n\n\ndef _convert_buckets(metric: Metric) -> Sequence[Tuple[str, int]]:\n buckets = []\n total_count = 0\n for upper_bound, count in zip(\n chain(metric.point.explicit_bounds, [\"+Inf\"]),\n metric.point.bucket_counts,\n ):\n total_count += count\n buckets.append((f\"{upper_bound}\", total_count))\n\n return buckets\n\n\nclass PrometheusMetricReader(MetricReader):\n \"\"\"Prometheus metric exporter for OpenTelemetry.\n\n Args:\n prefix: single-word application prefix relevant to the domain\n the metric belongs to.\n \"\"\"\n\n def __init__(self, prefix: str = \"\") -> None:\n super().__init__()\n self._collector = _CustomCollector(prefix)\n core.REGISTRY.register(self._collector)\n self._collector._callback = self.collect\n\n def _receive_metrics(self, metrics: Iterable[Metric]) -> None:\n if metrics is None:\n return\n self._collector.add_metrics_data(metrics)\n\n def shutdown(self) -> bool:\n core.REGISTRY.unregister(self._collector)\n return True\n\n\nclass _CustomCollector:\n \"\"\"_CustomCollector represents the Prometheus Collector object\n\n See more:\n https://github.com/prometheus/client_python#custom-collectors\n \"\"\"\n\n def __init__(self, prefix: str = \"\"):\n self._prefix = prefix\n self._callback = None\n self._metrics_to_export = deque()\n self._non_letters_digits_underscore_re = compile(\n r\"[^\\w]\", UNICODE | IGNORECASE\n )\n\n def add_metrics_data(self, export_records: Sequence[Metric]) -> None:\n \"\"\"Add metrics to Prometheus data\"\"\"\n self._metrics_to_export.append(export_records)\n\n def collect(self) -> None:\n \"\"\"Collect fetches the metrics from OpenTelemetry\n and delivers them as Prometheus Metrics.\n Collect is invoked every time a ``prometheus.Gatherer`` is run\n for example when the HTTP endpoint is invoked by Prometheus.\n \"\"\"\n if self._callback is not None:\n self._callback()\n\n while self._metrics_to_export:\n for export_record in self._metrics_to_export.popleft():\n prometheus_metric = self._translate_to_prometheus(\n export_record\n 
)\n if prometheus_metric is not None:\n yield prometheus_metric\n\n def _translate_to_prometheus(\n self, metric: Metric\n ) -> Optional[core.Metric]:\n prometheus_metric = None\n label_values = []\n label_keys = []\n for key, value in metric.attributes.items():\n label_keys.append(self._sanitize(key))\n label_values.append(self._check_value(value))\n\n metric_name = \"\"\n if self._prefix != \"\":\n metric_name = self._prefix + \"_\"\n metric_name += self._sanitize(metric.name)\n\n description = metric.description or \"\"\n if isinstance(metric.point, Sum):\n prometheus_metric = core.CounterMetricFamily(\n name=metric_name,\n documentation=description,\n labels=label_keys,\n unit=metric.unit,\n )\n prometheus_metric.add_metric(\n labels=label_values, value=metric.point.value\n )\n elif isinstance(metric.point, Gauge):\n prometheus_metric = core.GaugeMetricFamily(\n name=metric_name,\n documentation=description,\n labels=label_keys,\n unit=metric.unit,\n )\n prometheus_metric.add_metric(\n labels=label_values, value=metric.point.value\n )\n elif isinstance(metric.point, Histogram):\n value = metric.point.sum\n prometheus_metric = core.HistogramMetricFamily(\n name=metric_name,\n documentation=description,\n labels=label_keys,\n unit=metric.unit,\n )\n buckets = _convert_buckets(metric)\n prometheus_metric.add_metric(\n labels=label_values, buckets=buckets, sum_value=value\n )\n else:\n _logger.warning(\"Unsupported metric type. %s\", type(metric.point))\n return prometheus_metric\n\n def _sanitize(self, key: str) -> str:\n \"\"\"sanitize the given metric name or label according to Prometheus rule.\n Replace all characters other than [A-Za-z0-9_] with '_'.\n \"\"\"\n return self._non_letters_digits_underscore_re.sub(\"_\", key)\n\n # pylint: disable=no-self-use\n def _check_value(self, value: Union[int, float, str, Sequence]) -> str:\n \"\"\"Check the label value and return is appropriate representation\"\"\"\n if not isinstance(value, str) and isinstance(value, Sequence):\n return dumps(value, default=str)\n return str(value)\n", "path": "exporter/opentelemetry-exporter-prometheus/src/opentelemetry/exporter/prometheus/__init__.py"}]}
| 2,410 | 581 |
gh_patches_debug_20332
|
rasdani/github-patches
|
git_diff
|
TheAlgorithms__Python-2467
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Python type hints and doctests to other/two_sum.py
@tonydelanuez Would you be willing to add Python type hints and doctests to [`other/two_sum.py`](../blob/master/other/two_sum.py)? The function comments say that it will return a `list[int]` but instead it either returns a `list[list[int]]` or `False`.
--- END ISSUE ---
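For reference, a minimal sketch of the hash-map approach with the type hints and doctests the issue asks for; the variable names here are illustrative, and the patch later in this record shows the exact wording adopted.
```python
from __future__ import annotations


def two_sum(nums: list[int], target: int) -> list[int]:
    """
    Return indices of the two numbers adding up to target, or [] if none exist.

    >>> two_sum([2, 7, 11, 15], 9)
    [0, 1]
    >>> two_sum([2, 7, 11, 15], 8)
    []
    """
    seen: dict[int, int] = {}
    for index, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return [seen[complement], index]
        seen[value] = index
    return []


if __name__ == "__main__":
    import doctest

    doctest.testmod()
```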
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `other/two_sum.py`
Content:
```
1 """
2 Given an array of integers, return indices of the two numbers such that they add up to
3 a specific target.
4
5 You may assume that each input would have exactly one solution, and you may not use the
6 same element twice.
7
8 Example:
9 Given nums = [2, 7, 11, 15], target = 9,
10
11 Because nums[0] + nums[1] = 2 + 7 = 9,
12 return [0, 1].
13 """
14
15
16 def twoSum(nums, target):
17 """
18 :type nums: List[int]
19 :type target: int
20 :rtype: List[int]
21 """
22 chk_map = {}
23 for index, val in enumerate(nums):
24 compl = target - val
25 if compl in chk_map:
26 indices = [chk_map[compl], index]
27 print(indices)
28 return [indices]
29 else:
30 chk_map[val] = index
31 return False
32
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/other/two_sum.py b/other/two_sum.py
--- a/other/two_sum.py
+++ b/other/two_sum.py
@@ -11,21 +11,37 @@
Because nums[0] + nums[1] = 2 + 7 = 9,
return [0, 1].
"""
+from __future__ import annotations
-def twoSum(nums, target):
+def two_sum(nums: list[int], target: int) -> list[int]:
"""
- :type nums: List[int]
- :type target: int
- :rtype: List[int]
+ >>> two_sum([2, 7, 11, 15], 9)
+ [0, 1]
+ >>> two_sum([15, 2, 11, 7], 13)
+ [1, 2]
+ >>> two_sum([2, 7, 11, 15], 17)
+ [0, 3]
+ >>> two_sum([7, 15, 11, 2], 18)
+ [0, 2]
+ >>> two_sum([2, 7, 11, 15], 26)
+ [2, 3]
+ >>> two_sum([2, 7, 11, 15], 8)
+ []
+ >>> two_sum([3 * i for i in range(10)], 19)
+ []
"""
chk_map = {}
for index, val in enumerate(nums):
compl = target - val
if compl in chk_map:
- indices = [chk_map[compl], index]
- print(indices)
- return [indices]
- else:
- chk_map[val] = index
- return False
+ return [chk_map[compl], index]
+ chk_map[val] = index
+ return []
+
+
+if __name__ == "__main__":
+ import doctest
+
+ doctest.testmod()
+ print(f"{two_sum([2, 7, 11, 15], 9) = }")
|
{"golden_diff": "diff --git a/other/two_sum.py b/other/two_sum.py\n--- a/other/two_sum.py\n+++ b/other/two_sum.py\n@@ -11,21 +11,37 @@\n Because nums[0] + nums[1] = 2 + 7 = 9,\n return [0, 1].\n \"\"\"\n+from __future__ import annotations\n \n \n-def twoSum(nums, target):\n+def two_sum(nums: list[int], target: int) -> list[int]:\n \"\"\"\n- :type nums: List[int]\n- :type target: int\n- :rtype: List[int]\n+ >>> two_sum([2, 7, 11, 15], 9)\n+ [0, 1]\n+ >>> two_sum([15, 2, 11, 7], 13)\n+ [1, 2]\n+ >>> two_sum([2, 7, 11, 15], 17)\n+ [0, 3]\n+ >>> two_sum([7, 15, 11, 2], 18)\n+ [0, 2]\n+ >>> two_sum([2, 7, 11, 15], 26)\n+ [2, 3]\n+ >>> two_sum([2, 7, 11, 15], 8)\n+ []\n+ >>> two_sum([3 * i for i in range(10)], 19)\n+ []\n \"\"\"\n chk_map = {}\n for index, val in enumerate(nums):\n compl = target - val\n if compl in chk_map:\n- indices = [chk_map[compl], index]\n- print(indices)\n- return [indices]\n- else:\n- chk_map[val] = index\n- return False\n+ return [chk_map[compl], index]\n+ chk_map[val] = index\n+ return []\n+\n+\n+if __name__ == \"__main__\":\n+ import doctest\n+\n+ doctest.testmod()\n+ print(f\"{two_sum([2, 7, 11, 15], 9) = }\")\n", "issue": "Add Python type hints and doctests to other/two_sum.py\n@tonydelanuez Would you be willing to add Python type hints and doctests to [`other/two_sum.py`](../blob/master/other/two_sum.py)? The function comments say that it will return a `list[int]` but instead it either returns a `list[list[int]]` or `False`.\n", "before_files": [{"content": "\"\"\"\nGiven an array of integers, return indices of the two numbers such that they add up to\na specific target.\n\nYou may assume that each input would have exactly one solution, and you may not use the\nsame element twice.\n\nExample:\nGiven nums = [2, 7, 11, 15], target = 9,\n\nBecause nums[0] + nums[1] = 2 + 7 = 9,\nreturn [0, 1].\n\"\"\"\n\n\ndef twoSum(nums, target):\n \"\"\"\n :type nums: List[int]\n :type target: int\n :rtype: List[int]\n \"\"\"\n chk_map = {}\n for index, val in enumerate(nums):\n compl = target - val\n if compl in chk_map:\n indices = [chk_map[compl], index]\n print(indices)\n return [indices]\n else:\n chk_map[val] = index\n return False\n", "path": "other/two_sum.py"}], "after_files": [{"content": "\"\"\"\nGiven an array of integers, return indices of the two numbers such that they add up to\na specific target.\n\nYou may assume that each input would have exactly one solution, and you may not use the\nsame element twice.\n\nExample:\nGiven nums = [2, 7, 11, 15], target = 9,\n\nBecause nums[0] + nums[1] = 2 + 7 = 9,\nreturn [0, 1].\n\"\"\"\nfrom __future__ import annotations\n\n\ndef two_sum(nums: list[int], target: int) -> list[int]:\n \"\"\"\n >>> two_sum([2, 7, 11, 15], 9)\n [0, 1]\n >>> two_sum([15, 2, 11, 7], 13)\n [1, 2]\n >>> two_sum([2, 7, 11, 15], 17)\n [0, 3]\n >>> two_sum([7, 15, 11, 2], 18)\n [0, 2]\n >>> two_sum([2, 7, 11, 15], 26)\n [2, 3]\n >>> two_sum([2, 7, 11, 15], 8)\n []\n >>> two_sum([3 * i for i in range(10)], 19)\n []\n \"\"\"\n chk_map = {}\n for index, val in enumerate(nums):\n compl = target - val\n if compl in chk_map:\n return [chk_map[compl], index]\n chk_map[val] = index\n return []\n\n\nif __name__ == \"__main__\":\n import doctest\n\n doctest.testmod()\n print(f\"{two_sum([2, 7, 11, 15], 9) = }\")\n", "path": "other/two_sum.py"}]}
| 598 | 490 |
gh_patches_debug_34623
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-505
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add a function for summation along axes
The `sum` function reduces all elements, while there are many cases where I want to take a summation over only some axes. This is analogous to `numpy.sum` with the `axis` argument.
The implementation should contain a general GPU function to compute a reduction over the specified axes.
--- END ISSUE ---
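For intuition, a small NumPy-only sketch (shapes chosen arbitrarily) of the two pieces an axis-aware sum needs: the forward reduction over the chosen axis, and the backward pass that re-expands the upstream gradient so it broadcasts back to the input shape.
```python
import numpy as np

x = np.arange(12, dtype=np.float32).reshape(3, 4)

# Forward: reduce over axis 1, analogous to numpy.sum(x, axis=1).
y = x.sum(axis=1)                    # shape (3,)

# Backward: an upstream gradient of shape (3,) must be broadcast back to (3, 4).
gy = np.ones_like(y)
gx = np.empty_like(x)
gx[:] = np.expand_dims(gy, axis=1)   # every element that fed the sum receives the gradient

print(y.shape, gx.shape)             # (3,) (3, 4)
```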
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/functions/math/sum.py`
Content:
```
1 import numpy
2
3 from chainer import cuda
4 from chainer import function
5 from chainer.utils import type_check
6
7
8 class Sum(function.Function):
9 """Sum of array elements over a given axis."""
10
11 def __init__(self, axis=None):
12 self.axis = axis
13
14 def check_type_forward(self, in_types):
15 type_check.expect(
16 in_types.size() == 1,
17 in_types[0].dtype == numpy.float32
18 )
19
20 if self.axis is not None:
21 type_check.expect(
22 self.axis < in_types[0].ndim,
23 )
24
25 def forward(self, x):
26 xp = cuda.get_array_module(*x)
27 return xp.asarray(x[0].sum(axis=self.axis)),
28
29 def backward(self, x, gy):
30 xp = cuda.get_array_module(*x)
31
32 gx = xp.empty_like(x[0])
33 if self.axis is None:
34 gx[:] = gy[0]
35 else:
36 gx[:] = xp.expand_dims(gy[0], axis=self.axis)
37
38 return gx,
39
40
41 def sum(x, axis=None):
42 """Sum of array elements over a given axis.
43
44 Args:
45 x (~chainer.Variable): Elements to sum.
46 axis (None or int): Axis which a sum is performed.
47 The default (axis = None) is perform a sum over all the dimensions
48 of the input array.
49
50 Returns:
51 ~chainer.Variable: Output variable.
52
53 """
54 return Sum(axis)(x)
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/chainer/functions/math/sum.py b/chainer/functions/math/sum.py
--- a/chainer/functions/math/sum.py
+++ b/chainer/functions/math/sum.py
@@ -9,7 +9,17 @@
"""Sum of array elements over a given axis."""
def __init__(self, axis=None):
- self.axis = axis
+ if axis is None:
+ self.axis = None
+ elif isinstance(axis, int):
+ self.axis = (axis,)
+ elif isinstance(axis, tuple) and all(isinstance(a, int) for a in axis):
+ if len(set(axis)) != len(axis):
+ raise ValueError('duplicate value in axis: ({})'.format(
+ ', '.join(map(str, axis))))
+ self.axis = axis
+ else:
+ raise TypeError('None, int or tuple of int are required')
def check_type_forward(self, in_types):
type_check.expect(
@@ -18,9 +28,15 @@
)
if self.axis is not None:
- type_check.expect(
- self.axis < in_types[0].ndim,
- )
+ for axis in self.axis:
+ if axis >= 0:
+ type_check.expect(
+ axis < in_types[0].ndim,
+ )
+ else:
+ type_check.expect(
+ -axis - 1 < in_types[0].ndim,
+ )
def forward(self, x):
xp = cuda.get_array_module(*x)
@@ -33,7 +49,15 @@
if self.axis is None:
gx[:] = gy[0]
else:
- gx[:] = xp.expand_dims(gy[0], axis=self.axis)
+ gy = gy[0]
+ actual_axis = []
+ for axis in self.axis:
+ if axis < 0:
+ axis = len(gx.shape) + axis
+ actual_axis.append(axis)
+ for axis in sorted(actual_axis):
+ gy = xp.expand_dims(gy, axis=axis)
+ gx[:] = gy
return gx,
@@ -43,7 +67,7 @@
Args:
x (~chainer.Variable): Elements to sum.
- axis (None or int): Axis which a sum is performed.
+ axis (None, int, or tuple of int): Axis which a sum is performed.
The default (axis = None) is perform a sum over all the dimensions
of the input array.
|
{"golden_diff": "diff --git a/chainer/functions/math/sum.py b/chainer/functions/math/sum.py\n--- a/chainer/functions/math/sum.py\n+++ b/chainer/functions/math/sum.py\n@@ -9,7 +9,17 @@\n \"\"\"Sum of array elements over a given axis.\"\"\"\n \n def __init__(self, axis=None):\n- self.axis = axis\n+ if axis is None:\n+ self.axis = None\n+ elif isinstance(axis, int):\n+ self.axis = (axis,)\n+ elif isinstance(axis, tuple) and all(isinstance(a, int) for a in axis):\n+ if len(set(axis)) != len(axis):\n+ raise ValueError('duplicate value in axis: ({})'.format(\n+ ', '.join(map(str, axis))))\n+ self.axis = axis\n+ else:\n+ raise TypeError('None, int or tuple of int are required')\n \n def check_type_forward(self, in_types):\n type_check.expect(\n@@ -18,9 +28,15 @@\n )\n \n if self.axis is not None:\n- type_check.expect(\n- self.axis < in_types[0].ndim,\n- )\n+ for axis in self.axis:\n+ if axis >= 0:\n+ type_check.expect(\n+ axis < in_types[0].ndim,\n+ )\n+ else:\n+ type_check.expect(\n+ -axis - 1 < in_types[0].ndim,\n+ )\n \n def forward(self, x):\n xp = cuda.get_array_module(*x)\n@@ -33,7 +49,15 @@\n if self.axis is None:\n gx[:] = gy[0]\n else:\n- gx[:] = xp.expand_dims(gy[0], axis=self.axis)\n+ gy = gy[0]\n+ actual_axis = []\n+ for axis in self.axis:\n+ if axis < 0:\n+ axis = len(gx.shape) + axis\n+ actual_axis.append(axis)\n+ for axis in sorted(actual_axis):\n+ gy = xp.expand_dims(gy, axis=axis)\n+ gx[:] = gy\n \n return gx,\n \n@@ -43,7 +67,7 @@\n \n Args:\n x (~chainer.Variable): Elements to sum.\n- axis (None or int): Axis which a sum is performed.\n+ axis (None, int, or tuple of int): Axis which a sum is performed.\n The default (axis = None) is perform a sum over all the dimensions\n of the input array.\n", "issue": "Add a function for summation along axes\n`sum` function reduces all elements, while there are many cases that I want to take a summation over some axes. 
This is analogous to `numpy.sum` with the `axis` argument.\n\nThe implementation should contain a general function for GPU to compute a reduction over specified axes.\n\n", "before_files": [{"content": "import numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\nclass Sum(function.Function):\n \"\"\"Sum of array elements over a given axis.\"\"\"\n\n def __init__(self, axis=None):\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(\n in_types.size() == 1,\n in_types[0].dtype == numpy.float32\n )\n\n if self.axis is not None:\n type_check.expect(\n self.axis < in_types[0].ndim,\n )\n\n def forward(self, x):\n xp = cuda.get_array_module(*x)\n return xp.asarray(x[0].sum(axis=self.axis)),\n\n def backward(self, x, gy):\n xp = cuda.get_array_module(*x)\n\n gx = xp.empty_like(x[0])\n if self.axis is None:\n gx[:] = gy[0]\n else:\n gx[:] = xp.expand_dims(gy[0], axis=self.axis)\n\n return gx,\n\n\ndef sum(x, axis=None):\n \"\"\"Sum of array elements over a given axis.\n\n Args:\n x (~chainer.Variable): Elements to sum.\n axis (None or int): Axis which a sum is performed.\n The default (axis = None) is perform a sum over all the dimensions\n of the input array.\n\n Returns:\n ~chainer.Variable: Output variable.\n\n \"\"\"\n return Sum(axis)(x)\n", "path": "chainer/functions/math/sum.py"}], "after_files": [{"content": "import numpy\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\nclass Sum(function.Function):\n \"\"\"Sum of array elements over a given axis.\"\"\"\n\n def __init__(self, axis=None):\n if axis is None:\n self.axis = None\n elif isinstance(axis, int):\n self.axis = (axis,)\n elif isinstance(axis, tuple) and all(isinstance(a, int) for a in axis):\n if len(set(axis)) != len(axis):\n raise ValueError('duplicate value in axis: ({})'.format(\n ', '.join(map(str, axis))))\n self.axis = axis\n else:\n raise TypeError('None, int or tuple of int are required')\n\n def check_type_forward(self, in_types):\n type_check.expect(\n in_types.size() == 1,\n in_types[0].dtype == numpy.float32\n )\n\n if self.axis is not None:\n for axis in self.axis:\n if axis >= 0:\n type_check.expect(\n axis < in_types[0].ndim,\n )\n else:\n type_check.expect(\n -axis - 1 < in_types[0].ndim,\n )\n\n def forward(self, x):\n xp = cuda.get_array_module(*x)\n return xp.asarray(x[0].sum(axis=self.axis)),\n\n def backward(self, x, gy):\n xp = cuda.get_array_module(*x)\n\n gx = xp.empty_like(x[0])\n if self.axis is None:\n gx[:] = gy[0]\n else:\n gy = gy[0]\n actual_axis = []\n for axis in self.axis:\n if axis < 0:\n axis = len(gx.shape) + axis\n actual_axis.append(axis)\n for axis in sorted(actual_axis):\n gy = xp.expand_dims(gy, axis=axis)\n gx[:] = gy\n\n return gx,\n\n\ndef sum(x, axis=None):\n \"\"\"Sum of array elements over a given axis.\n\n Args:\n x (~chainer.Variable): Elements to sum.\n axis (None, int, or tuple of int): Axis which a sum is performed.\n The default (axis = None) is perform a sum over all the dimensions\n of the input array.\n\n Returns:\n ~chainer.Variable: Output variable.\n\n \"\"\"\n return Sum(axis)(x)\n", "path": "chainer/functions/math/sum.py"}]}
| 751 | 555 |
gh_patches_debug_28665
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-14821
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Importing IKE proposal without authentication method is not possible
### Deployment Type
Self-hosted
### NetBox Version
v3.7.0
### Python Version
3.11
### Steps to Reproduce
Go to **VPN - IKE Proposals - Add**
1. Name = GUITEST
2. Authentication method = PSK
3. Encryption algorithm = 256-bit AES (GCM)
4. Group = Group 20
5. Create
(= is successful)
Go to **VPN - IKE Proposals - Import**, use the same fields as above:
```
name,authentication_method,encryption_algorithm,group
IMPORTTEST,preshared-keys,aes-256-gcm,20
```
### Expected Behavior
IKE proposal is imported successfully.
### Observed Behavior
Error is shown:
**Record 1 authentication_algorithm: This field is required.**
In the import field options the `authentication_algorithm` field is shown as required even though it is not required in GUI.
Also, in the `authentication_algorithm` value list there is no "empty" option to use, thus it is not possible to import IKE proposals that use GCM, unless you fake the algorithm during the import.
Also note that if authentication algorithm is nevertheless set and you want to remove it, you have to **single-edit** the proposal: it is not possible to select all proposals in the list and use **Edit Selected**, as it is not possible to clear the field there.
Related to #14499 where GUI requirement for the authentication method was relaxed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `netbox/vpn/forms/bulk_import.py`
Content:
```
1 from django.core.exceptions import ValidationError
2 from django.utils.translation import gettext_lazy as _
3
4 from dcim.models import Device, Interface
5 from ipam.models import IPAddress, VLAN
6 from netbox.forms import NetBoxModelImportForm
7 from tenancy.models import Tenant
8 from utilities.forms.fields import CSVChoiceField, CSVModelChoiceField, CSVModelMultipleChoiceField, SlugField
9 from virtualization.models import VirtualMachine, VMInterface
10 from vpn.choices import *
11 from vpn.models import *
12
13 __all__ = (
14 'IKEPolicyImportForm',
15 'IKEProposalImportForm',
16 'IPSecPolicyImportForm',
17 'IPSecProfileImportForm',
18 'IPSecProposalImportForm',
19 'L2VPNImportForm',
20 'L2VPNTerminationImportForm',
21 'TunnelImportForm',
22 'TunnelGroupImportForm',
23 'TunnelTerminationImportForm',
24 )
25
26
27 class TunnelGroupImportForm(NetBoxModelImportForm):
28 slug = SlugField()
29
30 class Meta:
31 model = TunnelGroup
32 fields = ('name', 'slug', 'description', 'tags')
33
34
35 class TunnelImportForm(NetBoxModelImportForm):
36 status = CSVChoiceField(
37 label=_('Status'),
38 choices=TunnelStatusChoices,
39 help_text=_('Operational status')
40 )
41 group = CSVModelChoiceField(
42 label=_('Tunnel group'),
43 queryset=TunnelGroup.objects.all(),
44 required=False,
45 to_field_name='name'
46 )
47 encapsulation = CSVChoiceField(
48 label=_('Encapsulation'),
49 choices=TunnelEncapsulationChoices,
50 help_text=_('Tunnel encapsulation')
51 )
52 ipsec_profile = CSVModelChoiceField(
53 label=_('IPSec profile'),
54 queryset=IPSecProfile.objects.all(),
55 required=False,
56 to_field_name='name'
57 )
58 tenant = CSVModelChoiceField(
59 label=_('Tenant'),
60 queryset=Tenant.objects.all(),
61 required=False,
62 to_field_name='name',
63 help_text=_('Assigned tenant')
64 )
65
66 class Meta:
67 model = Tunnel
68 fields = (
69 'name', 'status', 'group', 'encapsulation', 'ipsec_profile', 'tenant', 'tunnel_id', 'description',
70 'comments', 'tags',
71 )
72
73
74 class TunnelTerminationImportForm(NetBoxModelImportForm):
75 tunnel = CSVModelChoiceField(
76 label=_('Tunnel'),
77 queryset=Tunnel.objects.all(),
78 to_field_name='name'
79 )
80 role = CSVChoiceField(
81 label=_('Role'),
82 choices=TunnelTerminationRoleChoices,
83 help_text=_('Operational role')
84 )
85 device = CSVModelChoiceField(
86 label=_('Device'),
87 queryset=Device.objects.all(),
88 required=False,
89 to_field_name='name',
90 help_text=_('Parent device of assigned interface')
91 )
92 virtual_machine = CSVModelChoiceField(
93 label=_('Virtual machine'),
94 queryset=VirtualMachine.objects.all(),
95 required=False,
96 to_field_name='name',
97 help_text=_('Parent VM of assigned interface')
98 )
99 termination = CSVModelChoiceField(
100 label=_('Termination'),
101 queryset=Interface.objects.none(), # Can also refer to VMInterface
102 required=False,
103 to_field_name='name',
104 help_text=_('Device or virtual machine interface')
105 )
106 outside_ip = CSVModelChoiceField(
107 label=_('Outside IP'),
108 queryset=IPAddress.objects.all(),
109 required=False,
110 to_field_name='name'
111 )
112
113 class Meta:
114 model = TunnelTermination
115 fields = (
116 'tunnel', 'role', 'outside_ip', 'tags',
117 )
118
119 def __init__(self, data=None, *args, **kwargs):
120 super().__init__(data, *args, **kwargs)
121
122 if data:
123
124 # Limit termination queryset by assigned device/VM
125 if data.get('device'):
126 self.fields['termination'].queryset = Interface.objects.filter(
127 **{f"device__{self.fields['device'].to_field_name}": data['device']}
128 )
129 elif data.get('virtual_machine'):
130 self.fields['termination'].queryset = VMInterface.objects.filter(
131 **{f"virtual_machine__{self.fields['virtual_machine'].to_field_name}": data['virtual_machine']}
132 )
133
134 def save(self, *args, **kwargs):
135
136 # Assign termination object
137 if self.cleaned_data.get('termination'):
138 self.instance.termination = self.cleaned_data['termination']
139
140 return super().save(*args, **kwargs)
141
142
143 class IKEProposalImportForm(NetBoxModelImportForm):
144 authentication_method = CSVChoiceField(
145 label=_('Authentication method'),
146 choices=AuthenticationMethodChoices
147 )
148 encryption_algorithm = CSVChoiceField(
149 label=_('Encryption algorithm'),
150 choices=EncryptionAlgorithmChoices
151 )
152 authentication_algorithm = CSVChoiceField(
153 label=_('Authentication algorithm'),
154 choices=AuthenticationAlgorithmChoices
155 )
156 group = CSVChoiceField(
157 label=_('Group'),
158 choices=DHGroupChoices
159 )
160
161 class Meta:
162 model = IKEProposal
163 fields = (
164 'name', 'description', 'authentication_method', 'encryption_algorithm', 'authentication_algorithm',
165 'group', 'sa_lifetime', 'comments', 'tags',
166 )
167
168
169 class IKEPolicyImportForm(NetBoxModelImportForm):
170 version = CSVChoiceField(
171 label=_('Version'),
172 choices=IKEVersionChoices
173 )
174 mode = CSVChoiceField(
175 label=_('Mode'),
176 choices=IKEModeChoices
177 )
178 proposals = CSVModelMultipleChoiceField(
179 queryset=IKEProposal.objects.all(),
180 to_field_name='name',
181 help_text=_('IKE proposal(s)'),
182 )
183
184 class Meta:
185 model = IKEPolicy
186 fields = (
187 'name', 'description', 'version', 'mode', 'proposals', 'preshared_key', 'comments', 'tags',
188 )
189
190
191 class IPSecProposalImportForm(NetBoxModelImportForm):
192 encryption_algorithm = CSVChoiceField(
193 label=_('Encryption algorithm'),
194 choices=EncryptionAlgorithmChoices
195 )
196 authentication_algorithm = CSVChoiceField(
197 label=_('Authentication algorithm'),
198 choices=AuthenticationAlgorithmChoices
199 )
200
201 class Meta:
202 model = IPSecProposal
203 fields = (
204 'name', 'description', 'encryption_algorithm', 'authentication_algorithm', 'sa_lifetime_seconds',
205 'sa_lifetime_data', 'comments', 'tags',
206 )
207
208
209 class IPSecPolicyImportForm(NetBoxModelImportForm):
210 pfs_group = CSVChoiceField(
211 label=_('Diffie-Hellman group for Perfect Forward Secrecy'),
212 choices=DHGroupChoices
213 )
214 proposals = CSVModelMultipleChoiceField(
215 queryset=IPSecProposal.objects.all(),
216 to_field_name='name',
217 help_text=_('IPSec proposal(s)'),
218 )
219
220 class Meta:
221 model = IPSecPolicy
222 fields = (
223 'name', 'description', 'proposals', 'pfs_group', 'comments', 'tags',
224 )
225
226
227 class IPSecProfileImportForm(NetBoxModelImportForm):
228 mode = CSVChoiceField(
229 label=_('Mode'),
230 choices=IPSecModeChoices,
231 help_text=_('IPSec protocol')
232 )
233 ike_policy = CSVModelChoiceField(
234 label=_('IKE policy'),
235 queryset=IKEPolicy.objects.all(),
236 to_field_name='name'
237 )
238 ipsec_policy = CSVModelChoiceField(
239 label=_('IPSec policy'),
240 queryset=IPSecPolicy.objects.all(),
241 to_field_name='name'
242 )
243
244 class Meta:
245 model = IPSecProfile
246 fields = (
247 'name', 'mode', 'ike_policy', 'ipsec_policy', 'description', 'comments', 'tags',
248 )
249
250
251 class L2VPNImportForm(NetBoxModelImportForm):
252 tenant = CSVModelChoiceField(
253 label=_('Tenant'),
254 queryset=Tenant.objects.all(),
255 required=False,
256 to_field_name='name',
257 )
258 type = CSVChoiceField(
259 label=_('Type'),
260 choices=L2VPNTypeChoices,
261 help_text=_('L2VPN type')
262 )
263
264 class Meta:
265 model = L2VPN
266 fields = ('identifier', 'name', 'slug', 'tenant', 'type', 'description',
267 'comments', 'tags')
268
269
270 class L2VPNTerminationImportForm(NetBoxModelImportForm):
271 l2vpn = CSVModelChoiceField(
272 queryset=L2VPN.objects.all(),
273 required=True,
274 to_field_name='name',
275 label=_('L2VPN'),
276 )
277 device = CSVModelChoiceField(
278 label=_('Device'),
279 queryset=Device.objects.all(),
280 required=False,
281 to_field_name='name',
282 help_text=_('Parent device (for interface)')
283 )
284 virtual_machine = CSVModelChoiceField(
285 label=_('Virtual machine'),
286 queryset=VirtualMachine.objects.all(),
287 required=False,
288 to_field_name='name',
289 help_text=_('Parent virtual machine (for interface)')
290 )
291 interface = CSVModelChoiceField(
292 label=_('Interface'),
293 queryset=Interface.objects.none(), # Can also refer to VMInterface
294 required=False,
295 to_field_name='name',
296 help_text=_('Assigned interface (device or VM)')
297 )
298 vlan = CSVModelChoiceField(
299 label=_('VLAN'),
300 queryset=VLAN.objects.all(),
301 required=False,
302 to_field_name='name',
303 help_text=_('Assigned VLAN')
304 )
305
306 class Meta:
307 model = L2VPNTermination
308 fields = ('l2vpn', 'device', 'virtual_machine', 'interface', 'vlan', 'tags')
309
310 def __init__(self, data=None, *args, **kwargs):
311 super().__init__(data, *args, **kwargs)
312
313 if data:
314
315 # Limit interface queryset by device or VM
316 if data.get('device'):
317 self.fields['interface'].queryset = Interface.objects.filter(
318 **{f"device__{self.fields['device'].to_field_name}": data['device']}
319 )
320 elif data.get('virtual_machine'):
321 self.fields['interface'].queryset = VMInterface.objects.filter(
322 **{f"virtual_machine__{self.fields['virtual_machine'].to_field_name}": data['virtual_machine']}
323 )
324
325 def clean(self):
326 super().clean()
327
328 if self.cleaned_data.get('device') and self.cleaned_data.get('virtual_machine'):
329 raise ValidationError(_('Cannot import device and VM interface terminations simultaneously.'))
330 if not self.instance and not (self.cleaned_data.get('interface') or self.cleaned_data.get('vlan')):
331 raise ValidationError(_('Each termination must specify either an interface or a VLAN.'))
332 if self.cleaned_data.get('interface') and self.cleaned_data.get('vlan'):
333 raise ValidationError(_('Cannot assign both an interface and a VLAN.'))
334
335 # if this is an update we might not have interface or vlan in the form data
336 if self.cleaned_data.get('interface') or self.cleaned_data.get('vlan'):
337 self.instance.assigned_object = self.cleaned_data.get('interface') or self.cleaned_data.get('vlan')
338
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/netbox/vpn/forms/bulk_import.py b/netbox/vpn/forms/bulk_import.py
--- a/netbox/vpn/forms/bulk_import.py
+++ b/netbox/vpn/forms/bulk_import.py
@@ -151,7 +151,8 @@
)
authentication_algorithm = CSVChoiceField(
label=_('Authentication algorithm'),
- choices=AuthenticationAlgorithmChoices
+ choices=AuthenticationAlgorithmChoices,
+ required=False
)
group = CSVChoiceField(
label=_('Group'),
@@ -191,11 +192,13 @@
class IPSecProposalImportForm(NetBoxModelImportForm):
encryption_algorithm = CSVChoiceField(
label=_('Encryption algorithm'),
- choices=EncryptionAlgorithmChoices
+ choices=EncryptionAlgorithmChoices,
+ required=False
)
authentication_algorithm = CSVChoiceField(
label=_('Authentication algorithm'),
- choices=AuthenticationAlgorithmChoices
+ choices=AuthenticationAlgorithmChoices,
+ required=False
)
class Meta:
@@ -209,7 +212,8 @@
class IPSecPolicyImportForm(NetBoxModelImportForm):
pfs_group = CSVChoiceField(
label=_('Diffie-Hellman group for Perfect Forward Secrecy'),
- choices=DHGroupChoices
+ choices=DHGroupChoices,
+ required=False
)
proposals = CSVModelMultipleChoiceField(
queryset=IPSecProposal.objects.all(),
|
{"golden_diff": "diff --git a/netbox/vpn/forms/bulk_import.py b/netbox/vpn/forms/bulk_import.py\n--- a/netbox/vpn/forms/bulk_import.py\n+++ b/netbox/vpn/forms/bulk_import.py\n@@ -151,7 +151,8 @@\n )\n authentication_algorithm = CSVChoiceField(\n label=_('Authentication algorithm'),\n- choices=AuthenticationAlgorithmChoices\n+ choices=AuthenticationAlgorithmChoices,\n+ required=False\n )\n group = CSVChoiceField(\n label=_('Group'),\n@@ -191,11 +192,13 @@\n class IPSecProposalImportForm(NetBoxModelImportForm):\n encryption_algorithm = CSVChoiceField(\n label=_('Encryption algorithm'),\n- choices=EncryptionAlgorithmChoices\n+ choices=EncryptionAlgorithmChoices,\n+ required=False\n )\n authentication_algorithm = CSVChoiceField(\n label=_('Authentication algorithm'),\n- choices=AuthenticationAlgorithmChoices\n+ choices=AuthenticationAlgorithmChoices,\n+ required=False\n )\n \n class Meta:\n@@ -209,7 +212,8 @@\n class IPSecPolicyImportForm(NetBoxModelImportForm):\n pfs_group = CSVChoiceField(\n label=_('Diffie-Hellman group for Perfect Forward Secrecy'),\n- choices=DHGroupChoices\n+ choices=DHGroupChoices,\n+ required=False\n )\n proposals = CSVModelMultipleChoiceField(\n queryset=IPSecProposal.objects.all(),\n", "issue": "Importing IKE proposal without authentication method is not possible\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv3.7.0\n\n### Python Version\n\n3.11\n\n### Steps to Reproduce\n\nGo to **VPN - IKE Proposals - Add**\r\n1. Name = GUITEST\r\n2. Authentication method = PSK\r\n3. Encryption algorithm = 256-bit AES (GCM)\r\n4. Group = Group 20\r\n5. Create\r\n\r\n(= is successful)\r\n\r\nGo to **VPN - IKE Proposals - Import**, use the same fields as above:\r\n\r\n```\r\nname,authentication_method,encryption_algorithm,group\r\nIMPORTTEST,preshared-keys,aes-256-gcm,20\r\n```\r\n\n\n### Expected Behavior\n\nIKE proposal is imported successfully.\n\n### Observed Behavior\n\nError is shown:\r\n\r\n**Record 1 authentication_algorithm: This field is required.** \r\n\r\nIn the import field options the `authentication_algorithm `field is shown as required even though it is not required in GUI.\r\n\r\nAlso, in the `authentication_algorithm `value list there is no \"empty\" option to use, thus it is not possible to import IKE proposals that use GCM, unless you fake the algorithm during the import.\r\n\r\nAlso note that if authentication algorithm is nevertheless set and you want to remove it, you have to **single-edit** the proposal: it is not possible to select all proposals in the list and use **Edit Selected**, as it is not possible to clear the field there.\r\n\r\nRelated to #14499 where GUI requirement for the authentication method was relaxed.\n", "before_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.utils.translation import gettext_lazy as _\n\nfrom dcim.models import Device, Interface\nfrom ipam.models import IPAddress, VLAN\nfrom netbox.forms import NetBoxModelImportForm\nfrom tenancy.models import Tenant\nfrom utilities.forms.fields import CSVChoiceField, CSVModelChoiceField, CSVModelMultipleChoiceField, SlugField\nfrom virtualization.models import VirtualMachine, VMInterface\nfrom vpn.choices import *\nfrom vpn.models import *\n\n__all__ = (\n 'IKEPolicyImportForm',\n 'IKEProposalImportForm',\n 'IPSecPolicyImportForm',\n 'IPSecProfileImportForm',\n 'IPSecProposalImportForm',\n 'L2VPNImportForm',\n 'L2VPNTerminationImportForm',\n 'TunnelImportForm',\n 'TunnelGroupImportForm',\n 'TunnelTerminationImportForm',\n)\n\n\nclass 
TunnelGroupImportForm(NetBoxModelImportForm):\n slug = SlugField()\n\n class Meta:\n model = TunnelGroup\n fields = ('name', 'slug', 'description', 'tags')\n\n\nclass TunnelImportForm(NetBoxModelImportForm):\n status = CSVChoiceField(\n label=_('Status'),\n choices=TunnelStatusChoices,\n help_text=_('Operational status')\n )\n group = CSVModelChoiceField(\n label=_('Tunnel group'),\n queryset=TunnelGroup.objects.all(),\n required=False,\n to_field_name='name'\n )\n encapsulation = CSVChoiceField(\n label=_('Encapsulation'),\n choices=TunnelEncapsulationChoices,\n help_text=_('Tunnel encapsulation')\n )\n ipsec_profile = CSVModelChoiceField(\n label=_('IPSec profile'),\n queryset=IPSecProfile.objects.all(),\n required=False,\n to_field_name='name'\n )\n tenant = CSVModelChoiceField(\n label=_('Tenant'),\n queryset=Tenant.objects.all(),\n required=False,\n to_field_name='name',\n help_text=_('Assigned tenant')\n )\n\n class Meta:\n model = Tunnel\n fields = (\n 'name', 'status', 'group', 'encapsulation', 'ipsec_profile', 'tenant', 'tunnel_id', 'description',\n 'comments', 'tags',\n )\n\n\nclass TunnelTerminationImportForm(NetBoxModelImportForm):\n tunnel = CSVModelChoiceField(\n label=_('Tunnel'),\n queryset=Tunnel.objects.all(),\n to_field_name='name'\n )\n role = CSVChoiceField(\n label=_('Role'),\n choices=TunnelTerminationRoleChoices,\n help_text=_('Operational role')\n )\n device = CSVModelChoiceField(\n label=_('Device'),\n queryset=Device.objects.all(),\n required=False,\n to_field_name='name',\n help_text=_('Parent device of assigned interface')\n )\n virtual_machine = CSVModelChoiceField(\n label=_('Virtual machine'),\n queryset=VirtualMachine.objects.all(),\n required=False,\n to_field_name='name',\n help_text=_('Parent VM of assigned interface')\n )\n termination = CSVModelChoiceField(\n label=_('Termination'),\n queryset=Interface.objects.none(), # Can also refer to VMInterface\n required=False,\n to_field_name='name',\n help_text=_('Device or virtual machine interface')\n )\n outside_ip = CSVModelChoiceField(\n label=_('Outside IP'),\n queryset=IPAddress.objects.all(),\n required=False,\n to_field_name='name'\n )\n\n class Meta:\n model = TunnelTermination\n fields = (\n 'tunnel', 'role', 'outside_ip', 'tags',\n )\n\n def __init__(self, data=None, *args, **kwargs):\n super().__init__(data, *args, **kwargs)\n\n if data:\n\n # Limit termination queryset by assigned device/VM\n if data.get('device'):\n self.fields['termination'].queryset = Interface.objects.filter(\n **{f\"device__{self.fields['device'].to_field_name}\": data['device']}\n )\n elif data.get('virtual_machine'):\n self.fields['termination'].queryset = VMInterface.objects.filter(\n **{f\"virtual_machine__{self.fields['virtual_machine'].to_field_name}\": data['virtual_machine']}\n )\n\n def save(self, *args, **kwargs):\n\n # Assign termination object\n if self.cleaned_data.get('termination'):\n self.instance.termination = self.cleaned_data['termination']\n\n return super().save(*args, **kwargs)\n\n\nclass IKEProposalImportForm(NetBoxModelImportForm):\n authentication_method = CSVChoiceField(\n label=_('Authentication method'),\n choices=AuthenticationMethodChoices\n )\n encryption_algorithm = CSVChoiceField(\n label=_('Encryption algorithm'),\n choices=EncryptionAlgorithmChoices\n )\n authentication_algorithm = CSVChoiceField(\n label=_('Authentication algorithm'),\n choices=AuthenticationAlgorithmChoices\n )\n group = CSVChoiceField(\n label=_('Group'),\n choices=DHGroupChoices\n )\n\n class Meta:\n model = 
IKEProposal\n fields = (\n 'name', 'description', 'authentication_method', 'encryption_algorithm', 'authentication_algorithm',\n 'group', 'sa_lifetime', 'comments', 'tags',\n )\n\n\nclass IKEPolicyImportForm(NetBoxModelImportForm):\n version = CSVChoiceField(\n label=_('Version'),\n choices=IKEVersionChoices\n )\n mode = CSVChoiceField(\n label=_('Mode'),\n choices=IKEModeChoices\n )\n proposals = CSVModelMultipleChoiceField(\n queryset=IKEProposal.objects.all(),\n to_field_name='name',\n help_text=_('IKE proposal(s)'),\n )\n\n class Meta:\n model = IKEPolicy\n fields = (\n 'name', 'description', 'version', 'mode', 'proposals', 'preshared_key', 'comments', 'tags',\n )\n\n\nclass IPSecProposalImportForm(NetBoxModelImportForm):\n encryption_algorithm = CSVChoiceField(\n label=_('Encryption algorithm'),\n choices=EncryptionAlgorithmChoices\n )\n authentication_algorithm = CSVChoiceField(\n label=_('Authentication algorithm'),\n choices=AuthenticationAlgorithmChoices\n )\n\n class Meta:\n model = IPSecProposal\n fields = (\n 'name', 'description', 'encryption_algorithm', 'authentication_algorithm', 'sa_lifetime_seconds',\n 'sa_lifetime_data', 'comments', 'tags',\n )\n\n\nclass IPSecPolicyImportForm(NetBoxModelImportForm):\n pfs_group = CSVChoiceField(\n label=_('Diffie-Hellman group for Perfect Forward Secrecy'),\n choices=DHGroupChoices\n )\n proposals = CSVModelMultipleChoiceField(\n queryset=IPSecProposal.objects.all(),\n to_field_name='name',\n help_text=_('IPSec proposal(s)'),\n )\n\n class Meta:\n model = IPSecPolicy\n fields = (\n 'name', 'description', 'proposals', 'pfs_group', 'comments', 'tags',\n )\n\n\nclass IPSecProfileImportForm(NetBoxModelImportForm):\n mode = CSVChoiceField(\n label=_('Mode'),\n choices=IPSecModeChoices,\n help_text=_('IPSec protocol')\n )\n ike_policy = CSVModelChoiceField(\n label=_('IKE policy'),\n queryset=IKEPolicy.objects.all(),\n to_field_name='name'\n )\n ipsec_policy = CSVModelChoiceField(\n label=_('IPSec policy'),\n queryset=IPSecPolicy.objects.all(),\n to_field_name='name'\n )\n\n class Meta:\n model = IPSecProfile\n fields = (\n 'name', 'mode', 'ike_policy', 'ipsec_policy', 'description', 'comments', 'tags',\n )\n\n\nclass L2VPNImportForm(NetBoxModelImportForm):\n tenant = CSVModelChoiceField(\n label=_('Tenant'),\n queryset=Tenant.objects.all(),\n required=False,\n to_field_name='name',\n )\n type = CSVChoiceField(\n label=_('Type'),\n choices=L2VPNTypeChoices,\n help_text=_('L2VPN type')\n )\n\n class Meta:\n model = L2VPN\n fields = ('identifier', 'name', 'slug', 'tenant', 'type', 'description',\n 'comments', 'tags')\n\n\nclass L2VPNTerminationImportForm(NetBoxModelImportForm):\n l2vpn = CSVModelChoiceField(\n queryset=L2VPN.objects.all(),\n required=True,\n to_field_name='name',\n label=_('L2VPN'),\n )\n device = CSVModelChoiceField(\n label=_('Device'),\n queryset=Device.objects.all(),\n required=False,\n to_field_name='name',\n help_text=_('Parent device (for interface)')\n )\n virtual_machine = CSVModelChoiceField(\n label=_('Virtual machine'),\n queryset=VirtualMachine.objects.all(),\n required=False,\n to_field_name='name',\n help_text=_('Parent virtual machine (for interface)')\n )\n interface = CSVModelChoiceField(\n label=_('Interface'),\n queryset=Interface.objects.none(), # Can also refer to VMInterface\n required=False,\n to_field_name='name',\n help_text=_('Assigned interface (device or VM)')\n )\n vlan = CSVModelChoiceField(\n label=_('VLAN'),\n queryset=VLAN.objects.all(),\n required=False,\n to_field_name='name',\n 
help_text=_('Assigned VLAN')\n )\n\n class Meta:\n model = L2VPNTermination\n fields = ('l2vpn', 'device', 'virtual_machine', 'interface', 'vlan', 'tags')\n\n def __init__(self, data=None, *args, **kwargs):\n super().__init__(data, *args, **kwargs)\n\n if data:\n\n # Limit interface queryset by device or VM\n if data.get('device'):\n self.fields['interface'].queryset = Interface.objects.filter(\n **{f\"device__{self.fields['device'].to_field_name}\": data['device']}\n )\n elif data.get('virtual_machine'):\n self.fields['interface'].queryset = VMInterface.objects.filter(\n **{f\"virtual_machine__{self.fields['virtual_machine'].to_field_name}\": data['virtual_machine']}\n )\n\n def clean(self):\n super().clean()\n\n if self.cleaned_data.get('device') and self.cleaned_data.get('virtual_machine'):\n raise ValidationError(_('Cannot import device and VM interface terminations simultaneously.'))\n if not self.instance and not (self.cleaned_data.get('interface') or self.cleaned_data.get('vlan')):\n raise ValidationError(_('Each termination must specify either an interface or a VLAN.'))\n if self.cleaned_data.get('interface') and self.cleaned_data.get('vlan'):\n raise ValidationError(_('Cannot assign both an interface and a VLAN.'))\n\n # if this is an update we might not have interface or vlan in the form data\n if self.cleaned_data.get('interface') or self.cleaned_data.get('vlan'):\n self.instance.assigned_object = self.cleaned_data.get('interface') or self.cleaned_data.get('vlan')\n", "path": "netbox/vpn/forms/bulk_import.py"}], "after_files": [{"content": "from django.core.exceptions import ValidationError\nfrom django.utils.translation import gettext_lazy as _\n\nfrom dcim.models import Device, Interface\nfrom ipam.models import IPAddress, VLAN\nfrom netbox.forms import NetBoxModelImportForm\nfrom tenancy.models import Tenant\nfrom utilities.forms.fields import CSVChoiceField, CSVModelChoiceField, CSVModelMultipleChoiceField, SlugField\nfrom virtualization.models import VirtualMachine, VMInterface\nfrom vpn.choices import *\nfrom vpn.models import *\n\n__all__ = (\n 'IKEPolicyImportForm',\n 'IKEProposalImportForm',\n 'IPSecPolicyImportForm',\n 'IPSecProfileImportForm',\n 'IPSecProposalImportForm',\n 'L2VPNImportForm',\n 'L2VPNTerminationImportForm',\n 'TunnelImportForm',\n 'TunnelGroupImportForm',\n 'TunnelTerminationImportForm',\n)\n\n\nclass TunnelGroupImportForm(NetBoxModelImportForm):\n slug = SlugField()\n\n class Meta:\n model = TunnelGroup\n fields = ('name', 'slug', 'description', 'tags')\n\n\nclass TunnelImportForm(NetBoxModelImportForm):\n status = CSVChoiceField(\n label=_('Status'),\n choices=TunnelStatusChoices,\n help_text=_('Operational status')\n )\n group = CSVModelChoiceField(\n label=_('Tunnel group'),\n queryset=TunnelGroup.objects.all(),\n required=False,\n to_field_name='name'\n )\n encapsulation = CSVChoiceField(\n label=_('Encapsulation'),\n choices=TunnelEncapsulationChoices,\n help_text=_('Tunnel encapsulation')\n )\n ipsec_profile = CSVModelChoiceField(\n label=_('IPSec profile'),\n queryset=IPSecProfile.objects.all(),\n required=False,\n to_field_name='name'\n )\n tenant = CSVModelChoiceField(\n label=_('Tenant'),\n queryset=Tenant.objects.all(),\n required=False,\n to_field_name='name',\n help_text=_('Assigned tenant')\n )\n\n class Meta:\n model = Tunnel\n fields = (\n 'name', 'status', 'group', 'encapsulation', 'ipsec_profile', 'tenant', 'tunnel_id', 'description',\n 'comments', 'tags',\n )\n\n\nclass TunnelTerminationImportForm(NetBoxModelImportForm):\n tunnel = 
CSVModelChoiceField(\n label=_('Tunnel'),\n queryset=Tunnel.objects.all(),\n to_field_name='name'\n )\n role = CSVChoiceField(\n label=_('Role'),\n choices=TunnelTerminationRoleChoices,\n help_text=_('Operational role')\n )\n device = CSVModelChoiceField(\n label=_('Device'),\n queryset=Device.objects.all(),\n required=False,\n to_field_name='name',\n help_text=_('Parent device of assigned interface')\n )\n virtual_machine = CSVModelChoiceField(\n label=_('Virtual machine'),\n queryset=VirtualMachine.objects.all(),\n required=False,\n to_field_name='name',\n help_text=_('Parent VM of assigned interface')\n )\n termination = CSVModelChoiceField(\n label=_('Termination'),\n queryset=Interface.objects.none(), # Can also refer to VMInterface\n required=False,\n to_field_name='name',\n help_text=_('Device or virtual machine interface')\n )\n outside_ip = CSVModelChoiceField(\n label=_('Outside IP'),\n queryset=IPAddress.objects.all(),\n required=False,\n to_field_name='name'\n )\n\n class Meta:\n model = TunnelTermination\n fields = (\n 'tunnel', 'role', 'outside_ip', 'tags',\n )\n\n def __init__(self, data=None, *args, **kwargs):\n super().__init__(data, *args, **kwargs)\n\n if data:\n\n # Limit termination queryset by assigned device/VM\n if data.get('device'):\n self.fields['termination'].queryset = Interface.objects.filter(\n **{f\"device__{self.fields['device'].to_field_name}\": data['device']}\n )\n elif data.get('virtual_machine'):\n self.fields['termination'].queryset = VMInterface.objects.filter(\n **{f\"virtual_machine__{self.fields['virtual_machine'].to_field_name}\": data['virtual_machine']}\n )\n\n def save(self, *args, **kwargs):\n\n # Assign termination object\n if self.cleaned_data.get('termination'):\n self.instance.termination = self.cleaned_data['termination']\n\n return super().save(*args, **kwargs)\n\n\nclass IKEProposalImportForm(NetBoxModelImportForm):\n authentication_method = CSVChoiceField(\n label=_('Authentication method'),\n choices=AuthenticationMethodChoices\n )\n encryption_algorithm = CSVChoiceField(\n label=_('Encryption algorithm'),\n choices=EncryptionAlgorithmChoices\n )\n authentication_algorithm = CSVChoiceField(\n label=_('Authentication algorithm'),\n choices=AuthenticationAlgorithmChoices,\n required=False\n )\n group = CSVChoiceField(\n label=_('Group'),\n choices=DHGroupChoices\n )\n\n class Meta:\n model = IKEProposal\n fields = (\n 'name', 'description', 'authentication_method', 'encryption_algorithm', 'authentication_algorithm',\n 'group', 'sa_lifetime', 'comments', 'tags',\n )\n\n\nclass IKEPolicyImportForm(NetBoxModelImportForm):\n version = CSVChoiceField(\n label=_('Version'),\n choices=IKEVersionChoices\n )\n mode = CSVChoiceField(\n label=_('Mode'),\n choices=IKEModeChoices\n )\n proposals = CSVModelMultipleChoiceField(\n queryset=IKEProposal.objects.all(),\n to_field_name='name',\n help_text=_('IKE proposal(s)'),\n )\n\n class Meta:\n model = IKEPolicy\n fields = (\n 'name', 'description', 'version', 'mode', 'proposals', 'preshared_key', 'comments', 'tags',\n )\n\n\nclass IPSecProposalImportForm(NetBoxModelImportForm):\n encryption_algorithm = CSVChoiceField(\n label=_('Encryption algorithm'),\n choices=EncryptionAlgorithmChoices,\n required=False\n )\n authentication_algorithm = CSVChoiceField(\n label=_('Authentication algorithm'),\n choices=AuthenticationAlgorithmChoices,\n required=False\n )\n\n class Meta:\n model = IPSecProposal\n fields = (\n 'name', 'description', 'encryption_algorithm', 'authentication_algorithm', 
'sa_lifetime_seconds',\n 'sa_lifetime_data', 'comments', 'tags',\n )\n\n\nclass IPSecPolicyImportForm(NetBoxModelImportForm):\n pfs_group = CSVChoiceField(\n label=_('Diffie-Hellman group for Perfect Forward Secrecy'),\n choices=DHGroupChoices,\n required=False\n )\n proposals = CSVModelMultipleChoiceField(\n queryset=IPSecProposal.objects.all(),\n to_field_name='name',\n help_text=_('IPSec proposal(s)'),\n )\n\n class Meta:\n model = IPSecPolicy\n fields = (\n 'name', 'description', 'proposals', 'pfs_group', 'comments', 'tags',\n )\n\n\nclass IPSecProfileImportForm(NetBoxModelImportForm):\n mode = CSVChoiceField(\n label=_('Mode'),\n choices=IPSecModeChoices,\n help_text=_('IPSec protocol')\n )\n ike_policy = CSVModelChoiceField(\n label=_('IKE policy'),\n queryset=IKEPolicy.objects.all(),\n to_field_name='name'\n )\n ipsec_policy = CSVModelChoiceField(\n label=_('IPSec policy'),\n queryset=IPSecPolicy.objects.all(),\n to_field_name='name'\n )\n\n class Meta:\n model = IPSecProfile\n fields = (\n 'name', 'mode', 'ike_policy', 'ipsec_policy', 'description', 'comments', 'tags',\n )\n\n\nclass L2VPNImportForm(NetBoxModelImportForm):\n tenant = CSVModelChoiceField(\n label=_('Tenant'),\n queryset=Tenant.objects.all(),\n required=False,\n to_field_name='name',\n )\n type = CSVChoiceField(\n label=_('Type'),\n choices=L2VPNTypeChoices,\n help_text=_('L2VPN type')\n )\n\n class Meta:\n model = L2VPN\n fields = ('identifier', 'name', 'slug', 'tenant', 'type', 'description',\n 'comments', 'tags')\n\n\nclass L2VPNTerminationImportForm(NetBoxModelImportForm):\n l2vpn = CSVModelChoiceField(\n queryset=L2VPN.objects.all(),\n required=True,\n to_field_name='name',\n label=_('L2VPN'),\n )\n device = CSVModelChoiceField(\n label=_('Device'),\n queryset=Device.objects.all(),\n required=False,\n to_field_name='name',\n help_text=_('Parent device (for interface)')\n )\n virtual_machine = CSVModelChoiceField(\n label=_('Virtual machine'),\n queryset=VirtualMachine.objects.all(),\n required=False,\n to_field_name='name',\n help_text=_('Parent virtual machine (for interface)')\n )\n interface = CSVModelChoiceField(\n label=_('Interface'),\n queryset=Interface.objects.none(), # Can also refer to VMInterface\n required=False,\n to_field_name='name',\n help_text=_('Assigned interface (device or VM)')\n )\n vlan = CSVModelChoiceField(\n label=_('VLAN'),\n queryset=VLAN.objects.all(),\n required=False,\n to_field_name='name',\n help_text=_('Assigned VLAN')\n )\n\n class Meta:\n model = L2VPNTermination\n fields = ('l2vpn', 'device', 'virtual_machine', 'interface', 'vlan', 'tags')\n\n def __init__(self, data=None, *args, **kwargs):\n super().__init__(data, *args, **kwargs)\n\n if data:\n\n # Limit interface queryset by device or VM\n if data.get('device'):\n self.fields['interface'].queryset = Interface.objects.filter(\n **{f\"device__{self.fields['device'].to_field_name}\": data['device']}\n )\n elif data.get('virtual_machine'):\n self.fields['interface'].queryset = VMInterface.objects.filter(\n **{f\"virtual_machine__{self.fields['virtual_machine'].to_field_name}\": data['virtual_machine']}\n )\n\n def clean(self):\n super().clean()\n\n if self.cleaned_data.get('device') and self.cleaned_data.get('virtual_machine'):\n raise ValidationError(_('Cannot import device and VM interface terminations simultaneously.'))\n if not self.instance and not (self.cleaned_data.get('interface') or self.cleaned_data.get('vlan')):\n raise ValidationError(_('Each termination must specify either an interface or a VLAN.'))\n if 
self.cleaned_data.get('interface') and self.cleaned_data.get('vlan'):\n raise ValidationError(_('Cannot assign both an interface and a VLAN.'))\n\n # if this is an update we might not have interface or vlan in the form data\n if self.cleaned_data.get('interface') or self.cleaned_data.get('vlan'):\n self.instance.assigned_object = self.cleaned_data.get('interface') or self.cleaned_data.get('vlan')\n", "path": "netbox/vpn/forms/bulk_import.py"}]}
| 3,847 | 314 |
gh_patches_debug_37512
|
rasdani/github-patches
|
git_diff
|
pytorch__vision-2642
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Different types of IoU
## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
Implementation of 3 other types of IoU metric in torchvision.ops.boxes:
1. Generalized IoU
2. Distance IoU
3. Complete IoU
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->
Since these metrics can lead to better results in some cases (object detection, etc.), it could be useful to have them natively implemented.
## Pitch
<!-- A clear and concise description of what you want to happen. -->
Users who need IoU computation can simply choose from 4 different implementations using the same inputs (e.g. two lists of boxes).
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
Mathematical details can be found here:
1. Generalized IoU: [Generalized Intersection over Union](https://giou.stanford.edu/)
2. Distance/Complete IoU: [Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression](https://arxiv.org/abs/1911.08287)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchvision/ops/__init__.py`
Content:
```
1 from .boxes import nms, batched_nms, remove_small_boxes, clip_boxes_to_image, box_area, box_iou
2 from .new_empty_tensor import _new_empty_tensor
3 from .deform_conv import deform_conv2d, DeformConv2d
4 from .roi_align import roi_align, RoIAlign
5 from .roi_pool import roi_pool, RoIPool
6 from .ps_roi_align import ps_roi_align, PSRoIAlign
7 from .ps_roi_pool import ps_roi_pool, PSRoIPool
8 from .poolers import MultiScaleRoIAlign
9 from .feature_pyramid_network import FeaturePyramidNetwork
10
11 from ._register_onnx_ops import _register_custom_op
12
13 _register_custom_op()
14
15
16 __all__ = [
17 'deform_conv2d', 'DeformConv2d', 'nms', 'batched_nms', 'remove_small_boxes',
18 'clip_boxes_to_image', 'box_area', 'box_iou', 'roi_align', 'RoIAlign', 'roi_pool',
19 'RoIPool', '_new_empty_tensor', 'ps_roi_align', 'PSRoIAlign', 'ps_roi_pool',
20 'PSRoIPool', 'MultiScaleRoIAlign', 'FeaturePyramidNetwork'
21 ]
22
```
Path: `torchvision/ops/boxes.py`
Content:
```
1 import torch
2 from torch.jit.annotations import Tuple
3 from torch import Tensor
4 import torchvision
5
6
7 def nms(boxes: Tensor, scores: Tensor, iou_threshold: float) -> Tensor:
8 """
9 Performs non-maximum suppression (NMS) on the boxes according
10 to their intersection-over-union (IoU).
11
12 NMS iteratively removes lower scoring boxes which have an
13 IoU greater than iou_threshold with another (higher scoring)
14 box.
15
16 If multiple boxes have the exact same score and satisfy the IoU
17 criterion with respect to a reference box, the selected box is
18 not guaranteed to be the same between CPU and GPU. This is similar
19 to the behavior of argsort in PyTorch when repeated values are present.
20
21 Parameters
22 ----------
23 boxes : Tensor[N, 4])
24 boxes to perform NMS on. They
25 are expected to be in (x1, y1, x2, y2) format
26 scores : Tensor[N]
27 scores for each one of the boxes
28 iou_threshold : float
29 discards all overlapping
30 boxes with IoU > iou_threshold
31
32 Returns
33 -------
34 keep : Tensor
35 int64 tensor with the indices
36 of the elements that have been kept
37 by NMS, sorted in decreasing order of scores
38 """
39 return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
40
41
42 @torch.jit._script_if_tracing
43 def batched_nms(
44 boxes: Tensor,
45 scores: Tensor,
46 idxs: Tensor,
47 iou_threshold: float,
48 ) -> Tensor:
49 """
50 Performs non-maximum suppression in a batched fashion.
51
52 Each index value correspond to a category, and NMS
53 will not be applied between elements of different categories.
54
55 Parameters
56 ----------
57 boxes : Tensor[N, 4]
58 boxes where NMS will be performed. They
59 are expected to be in (x1, y1, x2, y2) format
60 scores : Tensor[N]
61 scores for each one of the boxes
62 idxs : Tensor[N]
63 indices of the categories for each one of the boxes.
64 iou_threshold : float
65 discards all overlapping boxes
66 with IoU > iou_threshold
67
68 Returns
69 -------
70 keep : Tensor
71 int64 tensor with the indices of
72 the elements that have been kept by NMS, sorted
73 in decreasing order of scores
74 """
75 if boxes.numel() == 0:
76 return torch.empty((0,), dtype=torch.int64, device=boxes.device)
77 # strategy: in order to perform NMS independently per class.
78 # we add an offset to all the boxes. The offset is dependent
79 # only on the class idx, and is large enough so that boxes
80 # from different classes do not overlap
81 else:
82 max_coordinate = boxes.max()
83 offsets = idxs.to(boxes) * (max_coordinate + torch.tensor(1).to(boxes))
84 boxes_for_nms = boxes + offsets[:, None]
85 keep = nms(boxes_for_nms, scores, iou_threshold)
86 return keep
87
88
89 def remove_small_boxes(boxes: Tensor, min_size: float) -> Tensor:
90 """
91 Remove boxes which contains at least one side smaller than min_size.
92
93 Arguments:
94 boxes (Tensor[N, 4]): boxes in (x1, y1, x2, y2) format
95 min_size (float): minimum size
96
97 Returns:
98 keep (Tensor[K]): indices of the boxes that have both sides
99 larger than min_size
100 """
101 ws, hs = boxes[:, 2] - boxes[:, 0], boxes[:, 3] - boxes[:, 1]
102 keep = (ws >= min_size) & (hs >= min_size)
103 keep = torch.where(keep)[0]
104 return keep
105
106
107 def clip_boxes_to_image(boxes: Tensor, size: Tuple[int, int]) -> Tensor:
108 """
109 Clip boxes so that they lie inside an image of size `size`.
110
111 Arguments:
112 boxes (Tensor[N, 4]): boxes in (x1, y1, x2, y2) format
113 size (Tuple[height, width]): size of the image
114
115 Returns:
116 clipped_boxes (Tensor[N, 4])
117 """
118 dim = boxes.dim()
119 boxes_x = boxes[..., 0::2]
120 boxes_y = boxes[..., 1::2]
121 height, width = size
122
123 if torchvision._is_tracing():
124 boxes_x = torch.max(boxes_x, torch.tensor(0, dtype=boxes.dtype, device=boxes.device))
125 boxes_x = torch.min(boxes_x, torch.tensor(width, dtype=boxes.dtype, device=boxes.device))
126 boxes_y = torch.max(boxes_y, torch.tensor(0, dtype=boxes.dtype, device=boxes.device))
127 boxes_y = torch.min(boxes_y, torch.tensor(height, dtype=boxes.dtype, device=boxes.device))
128 else:
129 boxes_x = boxes_x.clamp(min=0, max=width)
130 boxes_y = boxes_y.clamp(min=0, max=height)
131
132 clipped_boxes = torch.stack((boxes_x, boxes_y), dim=dim)
133 return clipped_boxes.reshape(boxes.shape)
134
135
136 def box_area(boxes: Tensor) -> Tensor:
137 """
138 Computes the area of a set of bounding boxes, which are specified by its
139 (x1, y1, x2, y2) coordinates.
140
141 Arguments:
142 boxes (Tensor[N, 4]): boxes for which the area will be computed. They
143 are expected to be in (x1, y1, x2, y2) format
144
145 Returns:
146 area (Tensor[N]): area for each box
147 """
148 return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
149
150
151 # implementation from https://github.com/kuangliu/torchcv/blob/master/torchcv/utils/box.py
152 # with slight modifications
153 def box_iou(boxes1: Tensor, boxes2: Tensor) -> Tensor:
154 """
155 Return intersection-over-union (Jaccard index) of boxes.
156
157 Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
158
159 Arguments:
160 boxes1 (Tensor[N, 4])
161 boxes2 (Tensor[M, 4])
162
163 Returns:
164 iou (Tensor[N, M]): the NxM matrix containing the pairwise
165 IoU values for every element in boxes1 and boxes2
166 """
167 area1 = box_area(boxes1)
168 area2 = box_area(boxes2)
169
170 lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2]
171 rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2]
172
173 wh = (rb - lt).clamp(min=0) # [N,M,2]
174 inter = wh[:, :, 0] * wh[:, :, 1] # [N,M]
175
176 iou = inter / (area1[:, None] + area2 - inter)
177 return iou
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torchvision/ops/__init__.py b/torchvision/ops/__init__.py
--- a/torchvision/ops/__init__.py
+++ b/torchvision/ops/__init__.py
@@ -1,4 +1,4 @@
-from .boxes import nms, batched_nms, remove_small_boxes, clip_boxes_to_image, box_area, box_iou
+from .boxes import nms, batched_nms, remove_small_boxes, clip_boxes_to_image, box_area, box_iou, generalized_box_iou
from .new_empty_tensor import _new_empty_tensor
from .deform_conv import deform_conv2d, DeformConv2d
from .roi_align import roi_align, RoIAlign
@@ -15,7 +15,7 @@
__all__ = [
'deform_conv2d', 'DeformConv2d', 'nms', 'batched_nms', 'remove_small_boxes',
- 'clip_boxes_to_image', 'box_area', 'box_iou', 'roi_align', 'RoIAlign', 'roi_pool',
+ 'clip_boxes_to_image', 'box_area', 'box_iou', 'generalized_box_iou', 'roi_align', 'RoIAlign', 'roi_pool',
'RoIPool', '_new_empty_tensor', 'ps_roi_align', 'PSRoIAlign', 'ps_roi_pool',
'PSRoIPool', 'MultiScaleRoIAlign', 'FeaturePyramidNetwork'
]
diff --git a/torchvision/ops/boxes.py b/torchvision/ops/boxes.py
--- a/torchvision/ops/boxes.py
+++ b/torchvision/ops/boxes.py
@@ -161,8 +161,7 @@
boxes2 (Tensor[M, 4])
Returns:
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
- IoU values for every element in boxes1 and boxes2
+ iou (Tensor[N, M]): the NxM matrix containing the pairwise IoU values for every element in boxes1 and boxes2
"""
area1 = box_area(boxes1)
area2 = box_area(boxes2)
@@ -175,3 +174,46 @@
iou = inter / (area1[:, None] + area2 - inter)
return iou
+
+
+# Implementation adapted from https://github.com/facebookresearch/detr/blob/master/util/box_ops.py
+def generalized_box_iou(boxes1: Tensor, boxes2: Tensor) -> Tensor:
+ """
+ Return generalized intersection-over-union (Jaccard index) of boxes.
+
+ Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
+
+ Arguments:
+ boxes1 (Tensor[N, 4])
+ boxes2 (Tensor[M, 4])
+
+ Returns:
+ generalized_iou (Tensor[N, M]): the NxM matrix containing the pairwise generalized_IoU values
+ for every element in boxes1 and boxes2
+ """
+
+ # degenerate boxes gives inf / nan results
+ # so do an early check
+ assert (boxes1[:, 2:] >= boxes1[:, :2]).all()
+ assert (boxes2[:, 2:] >= boxes2[:, :2]).all()
+
+ area1 = box_area(boxes1)
+ area2 = box_area(boxes2)
+
+ lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2]
+ rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2]
+
+ wh = (rb - lt).clamp(min=0) # [N,M,2]
+ inter = wh[:, :, 0] * wh[:, :, 1] # [N,M]
+
+ union = area1[:, None] + area2 - inter
+
+ iou = inter / union
+
+ lti = torch.min(boxes1[:, None, :2], boxes2[:, :2])
+ rbi = torch.max(boxes1[:, None, 2:], boxes2[:, 2:])
+
+ whi = (rbi - lti).clamp(min=0) # [N,M,2]
+ areai = whi[:, :, 0] * whi[:, :, 1]
+
+ return iou - (areai - union) / areai
|
{"golden_diff": "diff --git a/torchvision/ops/__init__.py b/torchvision/ops/__init__.py\n--- a/torchvision/ops/__init__.py\n+++ b/torchvision/ops/__init__.py\n@@ -1,4 +1,4 @@\n-from .boxes import nms, batched_nms, remove_small_boxes, clip_boxes_to_image, box_area, box_iou\n+from .boxes import nms, batched_nms, remove_small_boxes, clip_boxes_to_image, box_area, box_iou, generalized_box_iou\n from .new_empty_tensor import _new_empty_tensor\n from .deform_conv import deform_conv2d, DeformConv2d\n from .roi_align import roi_align, RoIAlign\n@@ -15,7 +15,7 @@\n \n __all__ = [\n 'deform_conv2d', 'DeformConv2d', 'nms', 'batched_nms', 'remove_small_boxes',\n- 'clip_boxes_to_image', 'box_area', 'box_iou', 'roi_align', 'RoIAlign', 'roi_pool',\n+ 'clip_boxes_to_image', 'box_area', 'box_iou', 'generalized_box_iou', 'roi_align', 'RoIAlign', 'roi_pool',\n 'RoIPool', '_new_empty_tensor', 'ps_roi_align', 'PSRoIAlign', 'ps_roi_pool',\n 'PSRoIPool', 'MultiScaleRoIAlign', 'FeaturePyramidNetwork'\n ]\ndiff --git a/torchvision/ops/boxes.py b/torchvision/ops/boxes.py\n--- a/torchvision/ops/boxes.py\n+++ b/torchvision/ops/boxes.py\n@@ -161,8 +161,7 @@\n boxes2 (Tensor[M, 4])\n \n Returns:\n- iou (Tensor[N, M]): the NxM matrix containing the pairwise\n- IoU values for every element in boxes1 and boxes2\n+ iou (Tensor[N, M]): the NxM matrix containing the pairwise IoU values for every element in boxes1 and boxes2\n \"\"\"\n area1 = box_area(boxes1)\n area2 = box_area(boxes2)\n@@ -175,3 +174,46 @@\n \n iou = inter / (area1[:, None] + area2 - inter)\n return iou\n+\n+\n+# Implementation adapted from https://github.com/facebookresearch/detr/blob/master/util/box_ops.py\n+def generalized_box_iou(boxes1: Tensor, boxes2: Tensor) -> Tensor:\n+ \"\"\"\n+ Return generalized intersection-over-union (Jaccard index) of boxes.\n+\n+ Both sets of boxes are expected to be in (x1, y1, x2, y2) format.\n+\n+ Arguments:\n+ boxes1 (Tensor[N, 4])\n+ boxes2 (Tensor[M, 4])\n+\n+ Returns:\n+ generalized_iou (Tensor[N, M]): the NxM matrix containing the pairwise generalized_IoU values\n+ for every element in boxes1 and boxes2\n+ \"\"\"\n+\n+ # degenerate boxes gives inf / nan results\n+ # so do an early check\n+ assert (boxes1[:, 2:] >= boxes1[:, :2]).all()\n+ assert (boxes2[:, 2:] >= boxes2[:, :2]).all()\n+\n+ area1 = box_area(boxes1)\n+ area2 = box_area(boxes2)\n+\n+ lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2]\n+ rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2]\n+\n+ wh = (rb - lt).clamp(min=0) # [N,M,2]\n+ inter = wh[:, :, 0] * wh[:, :, 1] # [N,M]\n+\n+ union = area1[:, None] + area2 - inter\n+\n+ iou = inter / union\n+\n+ lti = torch.min(boxes1[:, None, :2], boxes2[:, :2])\n+ rbi = torch.max(boxes1[:, None, 2:], boxes2[:, 2:])\n+\n+ whi = (rbi - lti).clamp(min=0) # [N,M,2]\n+ areai = whi[:, :, 0] * whi[:, :, 1]\n+\n+ return iou - (areai - union) / areai\n", "issue": "Different types of IoU\n## \ud83d\ude80 Feature\r\n<!-- A clear and concise description of the feature proposal -->\r\nImplementation of 3 other types of IoU metric in torchvision.ops.boxes:\r\n1. Generalized IoU\r\n2. Distance IoU\r\n3. Complete IoU\r\n\r\n## Motivation\r\n\r\n<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too -->\r\nSince these metrics can lead to better results in some cases (object detection, etc.) 
it could be useful to have them natively implemented.\r\n\r\n## Pitch\r\n\r\n<!-- A clear and concise description of what you want to happen. -->\r\nUsers who need IoU computation can simply choose from 4 different implementation using the same inputs (e.g. two lists of boxes).\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context or screenshots about the feature request here. -->\r\nMathematical details can be found here:\r\n1. Generalized IoU: [Generalized Intersection over Union](https://giou.stanford.edu/)\r\n2. Distance/Complete IoU: [Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression](https://arxiv.org/abs/1911.08287)\n", "before_files": [{"content": "from .boxes import nms, batched_nms, remove_small_boxes, clip_boxes_to_image, box_area, box_iou\nfrom .new_empty_tensor import _new_empty_tensor\nfrom .deform_conv import deform_conv2d, DeformConv2d\nfrom .roi_align import roi_align, RoIAlign\nfrom .roi_pool import roi_pool, RoIPool\nfrom .ps_roi_align import ps_roi_align, PSRoIAlign\nfrom .ps_roi_pool import ps_roi_pool, PSRoIPool\nfrom .poolers import MultiScaleRoIAlign\nfrom .feature_pyramid_network import FeaturePyramidNetwork\n\nfrom ._register_onnx_ops import _register_custom_op\n\n_register_custom_op()\n\n\n__all__ = [\n 'deform_conv2d', 'DeformConv2d', 'nms', 'batched_nms', 'remove_small_boxes',\n 'clip_boxes_to_image', 'box_area', 'box_iou', 'roi_align', 'RoIAlign', 'roi_pool',\n 'RoIPool', '_new_empty_tensor', 'ps_roi_align', 'PSRoIAlign', 'ps_roi_pool',\n 'PSRoIPool', 'MultiScaleRoIAlign', 'FeaturePyramidNetwork'\n]\n", "path": "torchvision/ops/__init__.py"}, {"content": "import torch\nfrom torch.jit.annotations import Tuple\nfrom torch import Tensor\nimport torchvision\n\n\ndef nms(boxes: Tensor, scores: Tensor, iou_threshold: float) -> Tensor:\n \"\"\"\n Performs non-maximum suppression (NMS) on the boxes according\n to their intersection-over-union (IoU).\n\n NMS iteratively removes lower scoring boxes which have an\n IoU greater than iou_threshold with another (higher scoring)\n box.\n\n If multiple boxes have the exact same score and satisfy the IoU\n criterion with respect to a reference box, the selected box is\n not guaranteed to be the same between CPU and GPU. This is similar\n to the behavior of argsort in PyTorch when repeated values are present.\n\n Parameters\n ----------\n boxes : Tensor[N, 4])\n boxes to perform NMS on. They\n are expected to be in (x1, y1, x2, y2) format\n scores : Tensor[N]\n scores for each one of the boxes\n iou_threshold : float\n discards all overlapping\n boxes with IoU > iou_threshold\n\n Returns\n -------\n keep : Tensor\n int64 tensor with the indices\n of the elements that have been kept\n by NMS, sorted in decreasing order of scores\n \"\"\"\n return torch.ops.torchvision.nms(boxes, scores, iou_threshold)\n\n\[email protected]._script_if_tracing\ndef batched_nms(\n boxes: Tensor,\n scores: Tensor,\n idxs: Tensor,\n iou_threshold: float,\n) -> Tensor:\n \"\"\"\n Performs non-maximum suppression in a batched fashion.\n\n Each index value correspond to a category, and NMS\n will not be applied between elements of different categories.\n\n Parameters\n ----------\n boxes : Tensor[N, 4]\n boxes where NMS will be performed. 
They\n are expected to be in (x1, y1, x2, y2) format\n scores : Tensor[N]\n scores for each one of the boxes\n idxs : Tensor[N]\n indices of the categories for each one of the boxes.\n iou_threshold : float\n discards all overlapping boxes\n with IoU > iou_threshold\n\n Returns\n -------\n keep : Tensor\n int64 tensor with the indices of\n the elements that have been kept by NMS, sorted\n in decreasing order of scores\n \"\"\"\n if boxes.numel() == 0:\n return torch.empty((0,), dtype=torch.int64, device=boxes.device)\n # strategy: in order to perform NMS independently per class.\n # we add an offset to all the boxes. The offset is dependent\n # only on the class idx, and is large enough so that boxes\n # from different classes do not overlap\n else:\n max_coordinate = boxes.max()\n offsets = idxs.to(boxes) * (max_coordinate + torch.tensor(1).to(boxes))\n boxes_for_nms = boxes + offsets[:, None]\n keep = nms(boxes_for_nms, scores, iou_threshold)\n return keep\n\n\ndef remove_small_boxes(boxes: Tensor, min_size: float) -> Tensor:\n \"\"\"\n Remove boxes which contains at least one side smaller than min_size.\n\n Arguments:\n boxes (Tensor[N, 4]): boxes in (x1, y1, x2, y2) format\n min_size (float): minimum size\n\n Returns:\n keep (Tensor[K]): indices of the boxes that have both sides\n larger than min_size\n \"\"\"\n ws, hs = boxes[:, 2] - boxes[:, 0], boxes[:, 3] - boxes[:, 1]\n keep = (ws >= min_size) & (hs >= min_size)\n keep = torch.where(keep)[0]\n return keep\n\n\ndef clip_boxes_to_image(boxes: Tensor, size: Tuple[int, int]) -> Tensor:\n \"\"\"\n Clip boxes so that they lie inside an image of size `size`.\n\n Arguments:\n boxes (Tensor[N, 4]): boxes in (x1, y1, x2, y2) format\n size (Tuple[height, width]): size of the image\n\n Returns:\n clipped_boxes (Tensor[N, 4])\n \"\"\"\n dim = boxes.dim()\n boxes_x = boxes[..., 0::2]\n boxes_y = boxes[..., 1::2]\n height, width = size\n\n if torchvision._is_tracing():\n boxes_x = torch.max(boxes_x, torch.tensor(0, dtype=boxes.dtype, device=boxes.device))\n boxes_x = torch.min(boxes_x, torch.tensor(width, dtype=boxes.dtype, device=boxes.device))\n boxes_y = torch.max(boxes_y, torch.tensor(0, dtype=boxes.dtype, device=boxes.device))\n boxes_y = torch.min(boxes_y, torch.tensor(height, dtype=boxes.dtype, device=boxes.device))\n else:\n boxes_x = boxes_x.clamp(min=0, max=width)\n boxes_y = boxes_y.clamp(min=0, max=height)\n\n clipped_boxes = torch.stack((boxes_x, boxes_y), dim=dim)\n return clipped_boxes.reshape(boxes.shape)\n\n\ndef box_area(boxes: Tensor) -> Tensor:\n \"\"\"\n Computes the area of a set of bounding boxes, which are specified by its\n (x1, y1, x2, y2) coordinates.\n\n Arguments:\n boxes (Tensor[N, 4]): boxes for which the area will be computed. 
They\n are expected to be in (x1, y1, x2, y2) format\n\n Returns:\n area (Tensor[N]): area for each box\n \"\"\"\n return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])\n\n\n# implementation from https://github.com/kuangliu/torchcv/blob/master/torchcv/utils/box.py\n# with slight modifications\ndef box_iou(boxes1: Tensor, boxes2: Tensor) -> Tensor:\n \"\"\"\n Return intersection-over-union (Jaccard index) of boxes.\n\n Both sets of boxes are expected to be in (x1, y1, x2, y2) format.\n\n Arguments:\n boxes1 (Tensor[N, 4])\n boxes2 (Tensor[M, 4])\n\n Returns:\n iou (Tensor[N, M]): the NxM matrix containing the pairwise\n IoU values for every element in boxes1 and boxes2\n \"\"\"\n area1 = box_area(boxes1)\n area2 = box_area(boxes2)\n\n lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2]\n rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2]\n\n wh = (rb - lt).clamp(min=0) # [N,M,2]\n inter = wh[:, :, 0] * wh[:, :, 1] # [N,M]\n\n iou = inter / (area1[:, None] + area2 - inter)\n return iou\n", "path": "torchvision/ops/boxes.py"}], "after_files": [{"content": "from .boxes import nms, batched_nms, remove_small_boxes, clip_boxes_to_image, box_area, box_iou, generalized_box_iou\nfrom .new_empty_tensor import _new_empty_tensor\nfrom .deform_conv import deform_conv2d, DeformConv2d\nfrom .roi_align import roi_align, RoIAlign\nfrom .roi_pool import roi_pool, RoIPool\nfrom .ps_roi_align import ps_roi_align, PSRoIAlign\nfrom .ps_roi_pool import ps_roi_pool, PSRoIPool\nfrom .poolers import MultiScaleRoIAlign\nfrom .feature_pyramid_network import FeaturePyramidNetwork\n\nfrom ._register_onnx_ops import _register_custom_op\n\n_register_custom_op()\n\n\n__all__ = [\n 'deform_conv2d', 'DeformConv2d', 'nms', 'batched_nms', 'remove_small_boxes',\n 'clip_boxes_to_image', 'box_area', 'box_iou', 'generalized_box_iou', 'roi_align', 'RoIAlign', 'roi_pool',\n 'RoIPool', '_new_empty_tensor', 'ps_roi_align', 'PSRoIAlign', 'ps_roi_pool',\n 'PSRoIPool', 'MultiScaleRoIAlign', 'FeaturePyramidNetwork'\n]\n", "path": "torchvision/ops/__init__.py"}, {"content": "import torch\nfrom torch.jit.annotations import Tuple\nfrom torch import Tensor\nimport torchvision\n\n\ndef nms(boxes: Tensor, scores: Tensor, iou_threshold: float) -> Tensor:\n \"\"\"\n Performs non-maximum suppression (NMS) on the boxes according\n to their intersection-over-union (IoU).\n\n NMS iteratively removes lower scoring boxes which have an\n IoU greater than iou_threshold with another (higher scoring)\n box.\n\n If multiple boxes have the exact same score and satisfy the IoU\n criterion with respect to a reference box, the selected box is\n not guaranteed to be the same between CPU and GPU. This is similar\n to the behavior of argsort in PyTorch when repeated values are present.\n\n Parameters\n ----------\n boxes : Tensor[N, 4])\n boxes to perform NMS on. 
They\n are expected to be in (x1, y1, x2, y2) format\n scores : Tensor[N]\n scores for each one of the boxes\n iou_threshold : float\n discards all overlapping\n boxes with IoU > iou_threshold\n\n Returns\n -------\n keep : Tensor\n int64 tensor with the indices\n of the elements that have been kept\n by NMS, sorted in decreasing order of scores\n \"\"\"\n return torch.ops.torchvision.nms(boxes, scores, iou_threshold)\n\n\[email protected]._script_if_tracing\ndef batched_nms(\n boxes: Tensor,\n scores: Tensor,\n idxs: Tensor,\n iou_threshold: float,\n) -> Tensor:\n \"\"\"\n Performs non-maximum suppression in a batched fashion.\n\n Each index value correspond to a category, and NMS\n will not be applied between elements of different categories.\n\n Parameters\n ----------\n boxes : Tensor[N, 4]\n boxes where NMS will be performed. They\n are expected to be in (x1, y1, x2, y2) format\n scores : Tensor[N]\n scores for each one of the boxes\n idxs : Tensor[N]\n indices of the categories for each one of the boxes.\n iou_threshold : float\n discards all overlapping boxes\n with IoU > iou_threshold\n\n Returns\n -------\n keep : Tensor\n int64 tensor with the indices of\n the elements that have been kept by NMS, sorted\n in decreasing order of scores\n \"\"\"\n if boxes.numel() == 0:\n return torch.empty((0,), dtype=torch.int64, device=boxes.device)\n # strategy: in order to perform NMS independently per class.\n # we add an offset to all the boxes. The offset is dependent\n # only on the class idx, and is large enough so that boxes\n # from different classes do not overlap\n else:\n max_coordinate = boxes.max()\n offsets = idxs.to(boxes) * (max_coordinate + torch.tensor(1).to(boxes))\n boxes_for_nms = boxes + offsets[:, None]\n keep = nms(boxes_for_nms, scores, iou_threshold)\n return keep\n\n\ndef remove_small_boxes(boxes: Tensor, min_size: float) -> Tensor:\n \"\"\"\n Remove boxes which contains at least one side smaller than min_size.\n\n Arguments:\n boxes (Tensor[N, 4]): boxes in (x1, y1, x2, y2) format\n min_size (float): minimum size\n\n Returns:\n keep (Tensor[K]): indices of the boxes that have both sides\n larger than min_size\n \"\"\"\n ws, hs = boxes[:, 2] - boxes[:, 0], boxes[:, 3] - boxes[:, 1]\n keep = (ws >= min_size) & (hs >= min_size)\n keep = torch.where(keep)[0]\n return keep\n\n\ndef clip_boxes_to_image(boxes: Tensor, size: Tuple[int, int]) -> Tensor:\n \"\"\"\n Clip boxes so that they lie inside an image of size `size`.\n\n Arguments:\n boxes (Tensor[N, 4]): boxes in (x1, y1, x2, y2) format\n size (Tuple[height, width]): size of the image\n\n Returns:\n clipped_boxes (Tensor[N, 4])\n \"\"\"\n dim = boxes.dim()\n boxes_x = boxes[..., 0::2]\n boxes_y = boxes[..., 1::2]\n height, width = size\n\n if torchvision._is_tracing():\n boxes_x = torch.max(boxes_x, torch.tensor(0, dtype=boxes.dtype, device=boxes.device))\n boxes_x = torch.min(boxes_x, torch.tensor(width, dtype=boxes.dtype, device=boxes.device))\n boxes_y = torch.max(boxes_y, torch.tensor(0, dtype=boxes.dtype, device=boxes.device))\n boxes_y = torch.min(boxes_y, torch.tensor(height, dtype=boxes.dtype, device=boxes.device))\n else:\n boxes_x = boxes_x.clamp(min=0, max=width)\n boxes_y = boxes_y.clamp(min=0, max=height)\n\n clipped_boxes = torch.stack((boxes_x, boxes_y), dim=dim)\n return clipped_boxes.reshape(boxes.shape)\n\n\ndef box_area(boxes: Tensor) -> Tensor:\n \"\"\"\n Computes the area of a set of bounding boxes, which are specified by its\n (x1, y1, x2, y2) coordinates.\n\n Arguments:\n boxes (Tensor[N, 4]): 
boxes for which the area will be computed. They\n are expected to be in (x1, y1, x2, y2) format\n\n Returns:\n area (Tensor[N]): area for each box\n \"\"\"\n return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])\n\n\n# implementation from https://github.com/kuangliu/torchcv/blob/master/torchcv/utils/box.py\n# with slight modifications\ndef box_iou(boxes1: Tensor, boxes2: Tensor) -> Tensor:\n \"\"\"\n Return intersection-over-union (Jaccard index) of boxes.\n\n Both sets of boxes are expected to be in (x1, y1, x2, y2) format.\n\n Arguments:\n boxes1 (Tensor[N, 4])\n boxes2 (Tensor[M, 4])\n\n Returns:\n iou (Tensor[N, M]): the NxM matrix containing the pairwise IoU values for every element in boxes1 and boxes2\n \"\"\"\n area1 = box_area(boxes1)\n area2 = box_area(boxes2)\n\n lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2]\n rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2]\n\n wh = (rb - lt).clamp(min=0) # [N,M,2]\n inter = wh[:, :, 0] * wh[:, :, 1] # [N,M]\n\n iou = inter / (area1[:, None] + area2 - inter)\n return iou\n\n\n# Implementation adapted from https://github.com/facebookresearch/detr/blob/master/util/box_ops.py\ndef generalized_box_iou(boxes1: Tensor, boxes2: Tensor) -> Tensor:\n \"\"\"\n Return generalized intersection-over-union (Jaccard index) of boxes.\n\n Both sets of boxes are expected to be in (x1, y1, x2, y2) format.\n\n Arguments:\n boxes1 (Tensor[N, 4])\n boxes2 (Tensor[M, 4])\n\n Returns:\n generalized_iou (Tensor[N, M]): the NxM matrix containing the pairwise generalized_IoU values\n for every element in boxes1 and boxes2\n \"\"\"\n\n # degenerate boxes gives inf / nan results\n # so do an early check\n assert (boxes1[:, 2:] >= boxes1[:, :2]).all()\n assert (boxes2[:, 2:] >= boxes2[:, :2]).all()\n\n area1 = box_area(boxes1)\n area2 = box_area(boxes2)\n\n lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2]\n rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2]\n\n wh = (rb - lt).clamp(min=0) # [N,M,2]\n inter = wh[:, :, 0] * wh[:, :, 1] # [N,M]\n\n union = area1[:, None] + area2 - inter\n\n iou = inter / union\n\n lti = torch.min(boxes1[:, None, :2], boxes2[:, :2])\n rbi = torch.max(boxes1[:, None, 2:], boxes2[:, 2:])\n\n whi = (rbi - lti).clamp(min=0) # [N,M,2]\n areai = whi[:, :, 0] * whi[:, :, 1]\n\n return iou - (areai - union) / areai\n", "path": "torchvision/ops/boxes.py"}]}
| 2,864 | 1,001 |
gh_patches_debug_43314
|
rasdani/github-patches
|
git_diff
|
pymedusa__Medusa-1988
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error while searching on T411
I was searching for ChäoS;Child and saw this error in my debug log because of the ";".
The search string should be URL-encoded, but it doesn't seem to be.
(screenshot of the debug log error omitted)
Thanks a lot,
--- END ISSUE ---
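As a quick illustration of the encoding problem described above, a minimal sketch follows. The URL template and the `433` subcategory id come from the provider code quoted below; the search term is the one from the report, and the snippet itself is illustrative rather than part of Medusa.

```python
from requests.compat import quote  # urllib.parse.quote re-exported by requests

term = "ChäoS;Child"

# What the provider currently builds: the raw term is interpolated straight into
# the URL path, so the ";" and the non-ASCII "ä" reach the request line unescaped.
naive_url = 'https://api.t411.li/torrents/search/%s*?cid=%s&limit=100' % (term, 433)

# Percent-encoding the term first keeps reserved characters such as ";" out of the path.
encoded_url = 'https://api.t411.li/torrents/search/{search}?cid=433&limit=100'.format(
    search=quote(term.encode('utf-8'))
)

print(naive_url)    # ...torrents/search/ChäoS;Child*?cid=433&limit=100
print(encoded_url)  # ...torrents/search/Ch%C3%A4oS%3BChild?cid=433&limit=100
```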
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `medusa/providers/torrent/json/t411.py`
Content:
```
1 # coding=utf-8
2 # Author: djoole <[email protected]>
3 #
4 # This file is part of Medusa.
5 #
6 # Medusa is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # Medusa is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with Medusa. If not, see <http://www.gnu.org/licenses/>.
18 """Provider code for T411."""
19 from __future__ import unicode_literals
20
21 import time
22 import traceback
23
24 from operator import itemgetter
25
26 from requests.auth import AuthBase
27 from requests.compat import urljoin
28
29 from ..torrent_provider import TorrentProvider
30 from .... import logger, tv_cache
31 from ....common import USER_AGENT
32 from ....helper.common import convert_size, try_int
33
34
35 class T411Provider(TorrentProvider):
36 """T411 Torrent provider."""
37
38 def __init__(self):
39 """Initialize the class."""
40 super(self.__class__, self).__init__("T411")
41
42 # Credentials
43 self.username = None
44 self.password = None
45 self.token = None
46 self.tokenLastUpdate = None
47
48 # URLs
49 self.url = 'https://api.t411.li'
50 self.urls = {
51 'search': urljoin(self.url, 'torrents/search/%s*?cid=%s&limit=100'),
52 'rss': urljoin(self.url, 'torrents/top/today'),
53 'login_page': urljoin(self.url, 'auth'),
54 'download': urljoin(self.url, 'torrents/download/%s'),
55 }
56
57 # Proper Strings
58
59 # Miscellaneous Options
60 self.headers.update({'User-Agent': USER_AGENT})
61 self.subcategories = [433, 637, 455, 639]
62 self.confirmed = False
63
64 # Torrent Stats
65 self.minseed = 0
66 self.minleech = 0
67
68 # Cache
69 self.cache = tv_cache.TVCache(self, min_time=10) # Only poll T411 every 10 minutes max
70
71 def search(self, search_strings, age=0, ep_obj=None):
72 """Search a provider and parse the results.
73
74 :param search_strings: A dict with mode (key) and the search value (value)
75 :param age: Not used
76 :param ep_obj: Not used
77 :returns: A list of search results (structure)
78 """
79 results = []
80 if not self.login():
81 return results
82
83 for mode in search_strings:
84 logger.log('Search mode: {0}'.format(mode), logger.DEBUG)
85
86 for search_string in search_strings[mode]:
87 if mode != 'RSS':
88 logger.log('Search string: {search}'.format
89 (search=search_string), logger.DEBUG)
90 if self.confirmed:
91 logger.log('Searching only confirmed torrents', logger.DEBUG)
92
93 search_urls = ([self.urls['search'] % (search_string, u)
94 for u in self.subcategories], [self.urls['rss']])[mode == 'RSS']
95 for search_url in search_urls:
96 response = self.get_url(search_url, returns='response')
97
98 if not response or not response.content:
99 logger.log('No data returned from provider', logger.DEBUG)
100 continue
101
102 try:
103 jdata = response.json()
104 except ValueError: # also catches JSONDecodeError if simplejson is installed
105 logger.log('No data returned from provider', logger.DEBUG)
106 continue
107
108 results += self.parse(jdata, mode)
109
110 return results
111
112 def parse(self, data, mode):
113 """Parse search results for items.
114
115 :param data: The raw response from a search
116 :param mode: The current mode used to search, e.g. RSS
117
118 :return: A list of items found
119 """
120 items = []
121
122 unsorted_torrent_rows = data.get('torrents') if mode != 'RSS' else data
123
124 if not unsorted_torrent_rows or not isinstance(unsorted_torrent_rows, dict):
125 logger.log('Data returned from provider does not contain any {0}torrents'.format(
126 'confirmed ' if self.confirmed else ''), logger.DEBUG)
127 return items
128
129 torrent_rows = sorted(unsorted_torrent_rows, key=itemgetter('added'), reverse=True)
130
131 for row in torrent_rows:
132 if not isinstance(row, dict):
133 logger.log('Invalid data returned from provider', logger.WARNING)
134 continue
135
136 if mode == 'RSS' and 'category' in row and try_int(row['category'], 0) not in self.subcategories:
137 continue
138
139 try:
140 title = row['name']
141 torrent_id = row['id']
142 download_url = (self.urls['download'] % torrent_id)
143 if not all([title, download_url]):
144 continue
145
146 seeders = try_int(row['seeders'])
147 leechers = try_int(row['leechers'])
148 verified = bool(row['isVerified'])
149
150 # Filter unseeded torrent
151 if seeders < min(self.minseed, 1):
152 if mode != 'RSS':
153 logger.log("Discarding torrent because it doesn't meet the "
154 "minimum seeders: {0}. Seeders: {1}".format
155 (title, seeders), logger.DEBUG)
156 continue
157
158 if self.confirmed and not verified and mode != 'RSS':
159 logger.log("Found result {0} but that doesn't seem like a verified"
160 " result so I'm ignoring it".format(title), logger.DEBUG)
161 continue
162
163 torrent_size = row['size']
164 size = convert_size(torrent_size) or -1
165
166 item = {
167 'title': title,
168 'link': download_url,
169 'size': size,
170 'seeders': seeders,
171 'leechers': leechers,
172 'pubdate': None,
173 }
174 if mode != 'RSS':
175 logger.log('Found result: {0} with {1} seeders and {2} leechers'.format
176 (title, seeders, leechers), logger.DEBUG)
177
178 items.append(item)
179 except (AttributeError, TypeError, KeyError, ValueError, IndexError):
180 logger.log('Failed parsing provider. Traceback: {0!r}'.format
181 (traceback.format_exc()), logger.ERROR)
182
183 return items
184
185 def login(self):
186 """Log into provider."""
187 if self.token is not None:
188 if time.time() < (self.tokenLastUpdate + 30 * 60):
189 return True
190
191 login_params = {
192 'username': self.username,
193 'password': self.password,
194 }
195
196 response = self.get_url(self.urls['login_page'], post_data=login_params, returns='json')
197 if not response:
198 logger.log('Unable to connect to provider', logger.WARNING)
199 return False
200
201 if response and 'token' in response:
202 self.token = response['token']
203 self.tokenLastUpdate = time.time()
204 # self.uid = response['uid'].encode('ascii', 'ignore')
205 self.session.auth = T411Auth(self.token)
206 return True
207 else:
208 logger.log('Token not found in authentication response', logger.WARNING)
209 return False
210
211
212 class T411Auth(AuthBase):
213 """Attach HTTP Authentication to the given Request object."""
214
215 def __init__(self, token):
216 """Init object."""
217 self.token = token
218
219 def __call__(self, r):
220 """Add token to request header."""
221 r.headers['Authorization'] = self.token
222 return r
223
224
225 provider = T411Provider()
226
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/medusa/providers/torrent/json/t411.py b/medusa/providers/torrent/json/t411.py
--- a/medusa/providers/torrent/json/t411.py
+++ b/medusa/providers/torrent/json/t411.py
@@ -24,7 +24,7 @@
from operator import itemgetter
from requests.auth import AuthBase
-from requests.compat import urljoin
+from requests.compat import quote, urljoin
from ..torrent_provider import TorrentProvider
from .... import logger, tv_cache
@@ -48,10 +48,10 @@
# URLs
self.url = 'https://api.t411.li'
self.urls = {
- 'search': urljoin(self.url, 'torrents/search/%s*?cid=%s&limit=100'),
+ 'search': urljoin(self.url, 'torrents/search/{search}'),
'rss': urljoin(self.url, 'torrents/top/today'),
'login_page': urljoin(self.url, 'auth'),
- 'download': urljoin(self.url, 'torrents/download/%s'),
+ 'download': urljoin(self.url, 'torrents/download/{id}'),
}
# Proper Strings
@@ -80,6 +80,8 @@
if not self.login():
return results
+ search_params = {}
+
for mode in search_strings:
logger.log('Search mode: {0}'.format(mode), logger.DEBUG)
@@ -90,10 +92,25 @@
if self.confirmed:
logger.log('Searching only confirmed torrents', logger.DEBUG)
- search_urls = ([self.urls['search'] % (search_string, u)
- for u in self.subcategories], [self.urls['rss']])[mode == 'RSS']
- for search_url in search_urls:
- response = self.get_url(search_url, returns='response')
+ # use string formatting to safely coerce the search term
+ # to unicode then utf-8 encode the unicode string
+ term = '{term}'.format(term=search_string).encode('utf-8')
+ # build the search URL
+ search_url = self.urls['search'].format(
+ search=quote(term) # URL encode the search term
+ )
+ categories = self.subcategories
+ search_params.update({'limit': 100})
+ else:
+ search_url = self.urls['rss']
+ # Using None as a category removes it as a search param
+ categories = [None] # Must be a list for iteration
+
+ for category in categories:
+ search_params.update({'cid': category})
+ response = self.get_url(
+ search_url, params=search_params, returns='response'
+ )
if not response or not response.content:
logger.log('No data returned from provider', logger.DEBUG)
@@ -121,9 +138,12 @@
unsorted_torrent_rows = data.get('torrents') if mode != 'RSS' else data
- if not unsorted_torrent_rows or not isinstance(unsorted_torrent_rows, dict):
- logger.log('Data returned from provider does not contain any {0}torrents'.format(
- 'confirmed ' if self.confirmed else ''), logger.DEBUG)
+ if not unsorted_torrent_rows:
+ logger.log(
+ 'Data returned from provider does not contain any {torrents}'.format(
+ torrents='confirmed torrents' if self.confirmed else 'torrents'
+ ), logger.DEBUG
+ )
return items
torrent_rows = sorted(unsorted_torrent_rows, key=itemgetter('added'), reverse=True)
@@ -139,7 +159,7 @@
try:
title = row['name']
torrent_id = row['id']
- download_url = (self.urls['download'] % torrent_id)
+ download_url = self.urls['download'].format(id=torrent_id)
if not all([title, download_url]):
continue
|
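A short usage sketch of the pattern the patch moves to: the search term is percent-encoded into the path, while `cid` and `limit` travel as query parameters so the HTTP client encodes them. Plain `requests` is used here for illustration; the provider itself goes through `self.get_url`, and in the RSS branch a `None` category is simply dropped because `requests` omits parameters whose value is `None`.

```python
import requests
from requests.compat import quote

term = 'ChäoS;Child'
search_url = 'https://api.t411.li/torrents/search/{search}'.format(
    search=quote(term.encode('utf-8'))
)

# Prepare (without sending) to see exactly what URL would go on the wire;
# cid/limit are query params, so requests percent-encodes them as needed.
prepared = requests.Request('GET', search_url, params={'cid': 433, 'limit': 100}).prepare()
print(prepared.url)

# RSS branch: the category is None, and requests drops None-valued params entirely.
rss = requests.Request('GET', 'https://api.t411.li/torrents/top/today',
                       params={'cid': None}).prepare()
print(rss.url)  # no cid parameter is present
```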
{"golden_diff": "diff --git a/medusa/providers/torrent/json/t411.py b/medusa/providers/torrent/json/t411.py\n--- a/medusa/providers/torrent/json/t411.py\n+++ b/medusa/providers/torrent/json/t411.py\n@@ -24,7 +24,7 @@\n from operator import itemgetter\n \n from requests.auth import AuthBase\n-from requests.compat import urljoin\n+from requests.compat import quote, urljoin\n \n from ..torrent_provider import TorrentProvider\n from .... import logger, tv_cache\n@@ -48,10 +48,10 @@\n # URLs\n self.url = 'https://api.t411.li'\n self.urls = {\n- 'search': urljoin(self.url, 'torrents/search/%s*?cid=%s&limit=100'),\n+ 'search': urljoin(self.url, 'torrents/search/{search}'),\n 'rss': urljoin(self.url, 'torrents/top/today'),\n 'login_page': urljoin(self.url, 'auth'),\n- 'download': urljoin(self.url, 'torrents/download/%s'),\n+ 'download': urljoin(self.url, 'torrents/download/{id}'),\n }\n \n # Proper Strings\n@@ -80,6 +80,8 @@\n if not self.login():\n return results\n \n+ search_params = {}\n+\n for mode in search_strings:\n logger.log('Search mode: {0}'.format(mode), logger.DEBUG)\n \n@@ -90,10 +92,25 @@\n if self.confirmed:\n logger.log('Searching only confirmed torrents', logger.DEBUG)\n \n- search_urls = ([self.urls['search'] % (search_string, u)\n- for u in self.subcategories], [self.urls['rss']])[mode == 'RSS']\n- for search_url in search_urls:\n- response = self.get_url(search_url, returns='response')\n+ # use string formatting to safely coerce the search term\n+ # to unicode then utf-8 encode the unicode string\n+ term = '{term}'.format(term=search_string).encode('utf-8')\n+ # build the search URL\n+ search_url = self.urls['search'].format(\n+ search=quote(term) # URL encode the search term\n+ )\n+ categories = self.subcategories\n+ search_params.update({'limit': 100})\n+ else:\n+ search_url = self.urls['rss']\n+ # Using None as a category removes it as a search param\n+ categories = [None] # Must be a list for iteration\n+\n+ for category in categories:\n+ search_params.update({'cid': category})\n+ response = self.get_url(\n+ search_url, params=search_params, returns='response'\n+ )\n \n if not response or not response.content:\n logger.log('No data returned from provider', logger.DEBUG)\n@@ -121,9 +138,12 @@\n \n unsorted_torrent_rows = data.get('torrents') if mode != 'RSS' else data\n \n- if not unsorted_torrent_rows or not isinstance(unsorted_torrent_rows, dict):\n- logger.log('Data returned from provider does not contain any {0}torrents'.format(\n- 'confirmed ' if self.confirmed else ''), logger.DEBUG)\n+ if not unsorted_torrent_rows:\n+ logger.log(\n+ 'Data returned from provider does not contain any {torrents}'.format(\n+ torrents='confirmed torrents' if self.confirmed else 'torrents'\n+ ), logger.DEBUG\n+ )\n return items\n \n torrent_rows = sorted(unsorted_torrent_rows, key=itemgetter('added'), reverse=True)\n@@ -139,7 +159,7 @@\n try:\n title = row['name']\n torrent_id = row['id']\n- download_url = (self.urls['download'] % torrent_id)\n+ download_url = self.urls['download'].format(id=torrent_id)\n if not all([title, download_url]):\n continue\n", "issue": "Error while searching on T411\nI was searching Ch\u00e4oS;Child and I saw on my debug log this error because of the \";\"\r\n\r\nThis should url-encoded, but it doesn't seems so..\r\n\r\n\r\n\r\nThanks a lot,\r\n\n", "before_files": [{"content": "# coding=utf-8\n# Author: djoole <[email protected]>\n#\n# This file is part of Medusa.\n#\n# Medusa is free software: you can redistribute it and/or modify\n# it under the terms of the 
GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Medusa is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Medusa. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"Provider code for T411.\"\"\"\nfrom __future__ import unicode_literals\n\nimport time\nimport traceback\n\nfrom operator import itemgetter\n\nfrom requests.auth import AuthBase\nfrom requests.compat import urljoin\n\nfrom ..torrent_provider import TorrentProvider\nfrom .... import logger, tv_cache\nfrom ....common import USER_AGENT\nfrom ....helper.common import convert_size, try_int\n\n\nclass T411Provider(TorrentProvider):\n \"\"\"T411 Torrent provider.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize the class.\"\"\"\n super(self.__class__, self).__init__(\"T411\")\n\n # Credentials\n self.username = None\n self.password = None\n self.token = None\n self.tokenLastUpdate = None\n\n # URLs\n self.url = 'https://api.t411.li'\n self.urls = {\n 'search': urljoin(self.url, 'torrents/search/%s*?cid=%s&limit=100'),\n 'rss': urljoin(self.url, 'torrents/top/today'),\n 'login_page': urljoin(self.url, 'auth'),\n 'download': urljoin(self.url, 'torrents/download/%s'),\n }\n\n # Proper Strings\n\n # Miscellaneous Options\n self.headers.update({'User-Agent': USER_AGENT})\n self.subcategories = [433, 637, 455, 639]\n self.confirmed = False\n\n # Torrent Stats\n self.minseed = 0\n self.minleech = 0\n\n # Cache\n self.cache = tv_cache.TVCache(self, min_time=10) # Only poll T411 every 10 minutes max\n\n def search(self, search_strings, age=0, ep_obj=None):\n \"\"\"Search a provider and parse the results.\n\n :param search_strings: A dict with mode (key) and the search value (value)\n :param age: Not used\n :param ep_obj: Not used\n :returns: A list of search results (structure)\n \"\"\"\n results = []\n if not self.login():\n return results\n\n for mode in search_strings:\n logger.log('Search mode: {0}'.format(mode), logger.DEBUG)\n\n for search_string in search_strings[mode]:\n if mode != 'RSS':\n logger.log('Search string: {search}'.format\n (search=search_string), logger.DEBUG)\n if self.confirmed:\n logger.log('Searching only confirmed torrents', logger.DEBUG)\n\n search_urls = ([self.urls['search'] % (search_string, u)\n for u in self.subcategories], [self.urls['rss']])[mode == 'RSS']\n for search_url in search_urls:\n response = self.get_url(search_url, returns='response')\n\n if not response or not response.content:\n logger.log('No data returned from provider', logger.DEBUG)\n continue\n\n try:\n jdata = response.json()\n except ValueError: # also catches JSONDecodeError if simplejson is installed\n logger.log('No data returned from provider', logger.DEBUG)\n continue\n\n results += self.parse(jdata, mode)\n\n return results\n\n def parse(self, data, mode):\n \"\"\"Parse search results for items.\n\n :param data: The raw response from a search\n :param mode: The current mode used to search, e.g. 
RSS\n\n :return: A list of items found\n \"\"\"\n items = []\n\n unsorted_torrent_rows = data.get('torrents') if mode != 'RSS' else data\n\n if not unsorted_torrent_rows or not isinstance(unsorted_torrent_rows, dict):\n logger.log('Data returned from provider does not contain any {0}torrents'.format(\n 'confirmed ' if self.confirmed else ''), logger.DEBUG)\n return items\n\n torrent_rows = sorted(unsorted_torrent_rows, key=itemgetter('added'), reverse=True)\n\n for row in torrent_rows:\n if not isinstance(row, dict):\n logger.log('Invalid data returned from provider', logger.WARNING)\n continue\n\n if mode == 'RSS' and 'category' in row and try_int(row['category'], 0) not in self.subcategories:\n continue\n\n try:\n title = row['name']\n torrent_id = row['id']\n download_url = (self.urls['download'] % torrent_id)\n if not all([title, download_url]):\n continue\n\n seeders = try_int(row['seeders'])\n leechers = try_int(row['leechers'])\n verified = bool(row['isVerified'])\n\n # Filter unseeded torrent\n if seeders < min(self.minseed, 1):\n if mode != 'RSS':\n logger.log(\"Discarding torrent because it doesn't meet the \"\n \"minimum seeders: {0}. Seeders: {1}\".format\n (title, seeders), logger.DEBUG)\n continue\n\n if self.confirmed and not verified and mode != 'RSS':\n logger.log(\"Found result {0} but that doesn't seem like a verified\"\n \" result so I'm ignoring it\".format(title), logger.DEBUG)\n continue\n\n torrent_size = row['size']\n size = convert_size(torrent_size) or -1\n\n item = {\n 'title': title,\n 'link': download_url,\n 'size': size,\n 'seeders': seeders,\n 'leechers': leechers,\n 'pubdate': None,\n }\n if mode != 'RSS':\n logger.log('Found result: {0} with {1} seeders and {2} leechers'.format\n (title, seeders, leechers), logger.DEBUG)\n\n items.append(item)\n except (AttributeError, TypeError, KeyError, ValueError, IndexError):\n logger.log('Failed parsing provider. 
Traceback: {0!r}'.format\n (traceback.format_exc()), logger.ERROR)\n\n return items\n\n def login(self):\n \"\"\"Log into provider.\"\"\"\n if self.token is not None:\n if time.time() < (self.tokenLastUpdate + 30 * 60):\n return True\n\n login_params = {\n 'username': self.username,\n 'password': self.password,\n }\n\n response = self.get_url(self.urls['login_page'], post_data=login_params, returns='json')\n if not response:\n logger.log('Unable to connect to provider', logger.WARNING)\n return False\n\n if response and 'token' in response:\n self.token = response['token']\n self.tokenLastUpdate = time.time()\n # self.uid = response['uid'].encode('ascii', 'ignore')\n self.session.auth = T411Auth(self.token)\n return True\n else:\n logger.log('Token not found in authentication response', logger.WARNING)\n return False\n\n\nclass T411Auth(AuthBase):\n \"\"\"Attach HTTP Authentication to the given Request object.\"\"\"\n\n def __init__(self, token):\n \"\"\"Init object.\"\"\"\n self.token = token\n\n def __call__(self, r):\n \"\"\"Add token to request header.\"\"\"\n r.headers['Authorization'] = self.token\n return r\n\n\nprovider = T411Provider()\n", "path": "medusa/providers/torrent/json/t411.py"}], "after_files": [{"content": "# coding=utf-8\n# Author: djoole <[email protected]>\n#\n# This file is part of Medusa.\n#\n# Medusa is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Medusa is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Medusa. If not, see <http://www.gnu.org/licenses/>.\n\"\"\"Provider code for T411.\"\"\"\nfrom __future__ import unicode_literals\n\nimport time\nimport traceback\n\nfrom operator import itemgetter\n\nfrom requests.auth import AuthBase\nfrom requests.compat import quote, urljoin\n\nfrom ..torrent_provider import TorrentProvider\nfrom .... 
import logger, tv_cache\nfrom ....common import USER_AGENT\nfrom ....helper.common import convert_size, try_int\n\n\nclass T411Provider(TorrentProvider):\n \"\"\"T411 Torrent provider.\"\"\"\n\n def __init__(self):\n \"\"\"Initialize the class.\"\"\"\n super(self.__class__, self).__init__(\"T411\")\n\n # Credentials\n self.username = None\n self.password = None\n self.token = None\n self.tokenLastUpdate = None\n\n # URLs\n self.url = 'https://api.t411.li'\n self.urls = {\n 'search': urljoin(self.url, 'torrents/search/{search}'),\n 'rss': urljoin(self.url, 'torrents/top/today'),\n 'login_page': urljoin(self.url, 'auth'),\n 'download': urljoin(self.url, 'torrents/download/{id}'),\n }\n\n # Proper Strings\n\n # Miscellaneous Options\n self.headers.update({'User-Agent': USER_AGENT})\n self.subcategories = [433, 637, 455, 639]\n self.confirmed = False\n\n # Torrent Stats\n self.minseed = 0\n self.minleech = 0\n\n # Cache\n self.cache = tv_cache.TVCache(self, min_time=10) # Only poll T411 every 10 minutes max\n\n def search(self, search_strings, age=0, ep_obj=None):\n \"\"\"Search a provider and parse the results.\n\n :param search_strings: A dict with mode (key) and the search value (value)\n :param age: Not used\n :param ep_obj: Not used\n :returns: A list of search results (structure)\n \"\"\"\n results = []\n if not self.login():\n return results\n\n search_params = {}\n\n for mode in search_strings:\n logger.log('Search mode: {0}'.format(mode), logger.DEBUG)\n\n for search_string in search_strings[mode]:\n if mode != 'RSS':\n logger.log('Search string: {search}'.format\n (search=search_string), logger.DEBUG)\n if self.confirmed:\n logger.log('Searching only confirmed torrents', logger.DEBUG)\n\n # use string formatting to safely coerce the search term\n # to unicode then utf-8 encode the unicode string\n term = '{term}'.format(term=search_string).encode('utf-8')\n # build the search URL\n search_url = self.urls['search'].format(\n search=quote(term) # URL encode the search term\n )\n categories = self.subcategories\n search_params.update({'limit': 100})\n else:\n search_url = self.urls['rss']\n # Using None as a category removes it as a search param\n categories = [None] # Must be a list for iteration\n\n for category in categories:\n search_params.update({'cid': category})\n response = self.get_url(\n search_url, params=search_params, returns='response'\n )\n\n if not response or not response.content:\n logger.log('No data returned from provider', logger.DEBUG)\n continue\n\n try:\n jdata = response.json()\n except ValueError: # also catches JSONDecodeError if simplejson is installed\n logger.log('No data returned from provider', logger.DEBUG)\n continue\n\n results += self.parse(jdata, mode)\n\n return results\n\n def parse(self, data, mode):\n \"\"\"Parse search results for items.\n\n :param data: The raw response from a search\n :param mode: The current mode used to search, e.g. 
RSS\n\n :return: A list of items found\n \"\"\"\n items = []\n\n unsorted_torrent_rows = data.get('torrents') if mode != 'RSS' else data\n\n if not unsorted_torrent_rows:\n logger.log(\n 'Data returned from provider does not contain any {torrents}'.format(\n torrents='confirmed torrents' if self.confirmed else 'torrents'\n ), logger.DEBUG\n )\n return items\n\n torrent_rows = sorted(unsorted_torrent_rows, key=itemgetter('added'), reverse=True)\n\n for row in torrent_rows:\n if not isinstance(row, dict):\n logger.log('Invalid data returned from provider', logger.WARNING)\n continue\n\n if mode == 'RSS' and 'category' in row and try_int(row['category'], 0) not in self.subcategories:\n continue\n\n try:\n title = row['name']\n torrent_id = row['id']\n download_url = self.urls['download'].format(id=torrent_id)\n if not all([title, download_url]):\n continue\n\n seeders = try_int(row['seeders'])\n leechers = try_int(row['leechers'])\n verified = bool(row['isVerified'])\n\n # Filter unseeded torrent\n if seeders < min(self.minseed, 1):\n if mode != 'RSS':\n logger.log(\"Discarding torrent because it doesn't meet the \"\n \"minimum seeders: {0}. Seeders: {1}\".format\n (title, seeders), logger.DEBUG)\n continue\n\n if self.confirmed and not verified and mode != 'RSS':\n logger.log(\"Found result {0} but that doesn't seem like a verified\"\n \" result so I'm ignoring it\".format(title), logger.DEBUG)\n continue\n\n torrent_size = row['size']\n size = convert_size(torrent_size) or -1\n\n item = {\n 'title': title,\n 'link': download_url,\n 'size': size,\n 'seeders': seeders,\n 'leechers': leechers,\n 'pubdate': None,\n }\n if mode != 'RSS':\n logger.log('Found result: {0} with {1} seeders and {2} leechers'.format\n (title, seeders, leechers), logger.DEBUG)\n\n items.append(item)\n except (AttributeError, TypeError, KeyError, ValueError, IndexError):\n logger.log('Failed parsing provider. Traceback: {0!r}'.format\n (traceback.format_exc()), logger.ERROR)\n\n return items\n\n def login(self):\n \"\"\"Log into provider.\"\"\"\n if self.token is not None:\n if time.time() < (self.tokenLastUpdate + 30 * 60):\n return True\n\n login_params = {\n 'username': self.username,\n 'password': self.password,\n }\n\n response = self.get_url(self.urls['login_page'], post_data=login_params, returns='json')\n if not response:\n logger.log('Unable to connect to provider', logger.WARNING)\n return False\n\n if response and 'token' in response:\n self.token = response['token']\n self.tokenLastUpdate = time.time()\n # self.uid = response['uid'].encode('ascii', 'ignore')\n self.session.auth = T411Auth(self.token)\n return True\n else:\n logger.log('Token not found in authentication response', logger.WARNING)\n return False\n\n\nclass T411Auth(AuthBase):\n \"\"\"Attach HTTP Authentication to the given Request object.\"\"\"\n\n def __init__(self, token):\n \"\"\"Init object.\"\"\"\n self.token = token\n\n def __call__(self, r):\n \"\"\"Add token to request header.\"\"\"\n r.headers['Authorization'] = self.token\n return r\n\n\nprovider = T411Provider()\n", "path": "medusa/providers/torrent/json/t411.py"}]}
| 2,651 | 885 |
gh_patches_debug_41954
|
rasdani/github-patches
|
git_diff
|
litestar-org__litestar-2124
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is declared as ``DirectoryPath``, pydantic checks that the given directories exist on the local filesystem.
That is not generally true, especially with any kind of virtual filesystem (e.g. a zipped package), so I think this check should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
--- END ISSUE ---
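As a hedged sketch of a workaround that fits the constraint described above (not the proposed fix itself): the packaged static files can be materialised to a real on-disk path with the same `importlib_resources` backport the report links to, so the resulting directory passes an existence check. The package name `my_package`, the `static/` folder, and the assumption that a recent `importlib_resources` release lets `as_file()` handle directory traversables are all illustrative.

```python
from importlib_resources import as_file, files

# files() returns a Traversable that may live inside a zip or wheel;
# as_file() extracts it to a temporary real location while the context is open
# (directory support is assumed here and depends on the backport version).
static_resource = files('my_package') / 'static'

with as_file(static_resource) as static_dir:
    print(static_dir)  # a concrete pathlib.Path that exists on the local filesystem
    # static_dir can now be handed to anything that validates a real directory path
```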
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `litestar/contrib/sqlalchemy/dto.py`
Content:
```
1 from __future__ import annotations
2
3 from dataclasses import replace
4 from functools import singledispatchmethod
5 from typing import TYPE_CHECKING, Collection, Generic, Optional, TypeVar
6
7 from sqlalchemy import Column, inspect, orm, sql
8 from sqlalchemy.ext.associationproxy import AssociationProxy, AssociationProxyExtensionType
9 from sqlalchemy.ext.hybrid import HybridExtensionType, hybrid_property
10 from sqlalchemy.orm import (
11 ColumnProperty,
12 DeclarativeBase,
13 InspectionAttr,
14 Mapped,
15 NotExtension,
16 QueryableAttribute,
17 RelationshipDirection,
18 RelationshipProperty,
19 )
20
21 from litestar.dto.base_dto import AbstractDTO
22 from litestar.dto.data_structures import DTOFieldDefinition
23 from litestar.dto.field import DTO_FIELD_META_KEY, DTOField, Mark
24 from litestar.exceptions import ImproperlyConfiguredException
25 from litestar.types.empty import Empty
26 from litestar.typing import FieldDefinition
27 from litestar.utils.signature import ParsedSignature
28
29 if TYPE_CHECKING:
30 from typing import Any, Generator
31
32 from typing_extensions import TypeAlias
33
34 __all__ = ("SQLAlchemyDTO",)
35
36 T = TypeVar("T", bound="DeclarativeBase | Collection[DeclarativeBase]")
37
38 ElementType: TypeAlias = "Column | RelationshipProperty"
39 SQLA_NS = {**vars(orm), **vars(sql)}
40
41
42 class SQLAlchemyDTO(AbstractDTO[T], Generic[T]):
43 """Support for domain modelling with SQLAlchemy."""
44
45 @singledispatchmethod
46 @classmethod
47 def handle_orm_descriptor(
48 cls,
49 extension_type: NotExtension | AssociationProxyExtensionType | HybridExtensionType,
50 orm_descriptor: InspectionAttr,
51 key: str,
52 model_type_hints: dict[str, FieldDefinition],
53 model_name: str,
54 ) -> list[DTOFieldDefinition]:
55 raise NotImplementedError(f"Unsupported extension type: {extension_type}")
56
57 @handle_orm_descriptor.register(NotExtension)
58 @classmethod
59 def _(
60 cls,
61 extension_type: NotExtension,
62 key: str,
63 orm_descriptor: InspectionAttr,
64 model_type_hints: dict[str, FieldDefinition],
65 model_name: str,
66 ) -> list[DTOFieldDefinition]:
67 if not isinstance(orm_descriptor, QueryableAttribute):
68 raise NotImplementedError(f"Unexpected descriptor type for '{extension_type}': '{orm_descriptor}'")
69
70 elem: ElementType
71 if isinstance(orm_descriptor.property, ColumnProperty):
72 if not isinstance(orm_descriptor.property.expression, Column):
73 raise NotImplementedError(f"Expected 'Column', got: '{orm_descriptor.property.expression}'")
74 elem = orm_descriptor.property.expression
75 elif isinstance(orm_descriptor.property, RelationshipProperty):
76 elem = orm_descriptor.property
77 else:
78 raise NotImplementedError(f"Unhandled property type: '{orm_descriptor.property}'")
79
80 default, default_factory = _detect_defaults(elem)
81
82 try:
83 if (field_definition := model_type_hints[key]).origin is Mapped:
84 (field_definition,) = field_definition.inner_types
85 else:
86 raise NotImplementedError(f"Expected 'Mapped' origin, got: '{field_definition.origin}'")
87 except KeyError:
88 field_definition = parse_type_from_element(elem)
89
90 return [
91 DTOFieldDefinition.from_field_definition(
92 field_definition=replace(
93 field_definition,
94 name=key,
95 default=default,
96 ),
97 default_factory=default_factory,
98 dto_field=elem.info.get(DTO_FIELD_META_KEY, DTOField()),
99 model_name=model_name,
100 )
101 ]
102
103 @handle_orm_descriptor.register(AssociationProxyExtensionType)
104 @classmethod
105 def _(
106 cls,
107 extension_type: AssociationProxyExtensionType,
108 key: str,
109 orm_descriptor: InspectionAttr,
110 model_type_hints: dict[str, FieldDefinition],
111 model_name: str,
112 ) -> list[DTOFieldDefinition]:
113 if not isinstance(orm_descriptor, AssociationProxy):
114 raise NotImplementedError(f"Unexpected descriptor type '{orm_descriptor}' for '{extension_type}'")
115
116 if (field_definition := model_type_hints[key]).origin is AssociationProxy:
117 (field_definition,) = field_definition.inner_types
118 else:
119 raise NotImplementedError(f"Expected 'AssociationProxy' origin, got: '{field_definition.origin}'")
120
121 return [
122 DTOFieldDefinition.from_field_definition(
123 field_definition=replace(
124 field_definition,
125 name=key,
126 default=Empty,
127 ),
128 default_factory=None,
129 dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.READ_ONLY)),
130 model_name=model_name,
131 )
132 ]
133
134 @handle_orm_descriptor.register(HybridExtensionType)
135 @classmethod
136 def _(
137 cls,
138 extension_type: HybridExtensionType,
139 key: str,
140 orm_descriptor: InspectionAttr,
141 model_type_hints: dict[str, FieldDefinition],
142 model_name: str,
143 ) -> list[DTOFieldDefinition]:
144 if not isinstance(orm_descriptor, hybrid_property):
145 raise NotImplementedError(f"Unexpected descriptor type '{orm_descriptor}' for '{extension_type}'")
146
147 getter_sig = ParsedSignature.from_fn(orm_descriptor.fget, {})
148
149 field_defs = [
150 DTOFieldDefinition.from_field_definition(
151 field_definition=replace(
152 getter_sig.return_type,
153 name=orm_descriptor.__name__,
154 default=Empty,
155 ),
156 default_factory=None,
157 dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.READ_ONLY)),
158 model_name=model_name,
159 )
160 ]
161
162 if orm_descriptor.fset is not None:
163 setter_sig = ParsedSignature.from_fn(orm_descriptor.fset, {})
164 field_defs.append(
165 DTOFieldDefinition.from_field_definition(
166 field_definition=replace(
167 next(iter(setter_sig.parameters.values())),
168 name=orm_descriptor.__name__,
169 default=Empty,
170 ),
171 default_factory=None,
172 dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.WRITE_ONLY)),
173 model_name=model_name,
174 )
175 )
176
177 return field_defs
178
179 @classmethod
180 def generate_field_definitions(cls, model_type: type[DeclarativeBase]) -> Generator[DTOFieldDefinition, None, None]:
181 if (mapper := inspect(model_type)) is None: # pragma: no cover
182 raise RuntimeError("Unexpected `None` value for mapper.")
183
184 # includes SQLAlchemy names and other mapped class names in the forward reference resolution namespace
185 namespace = {**SQLA_NS, **{m.class_.__name__: m.class_ for m in mapper.registry.mappers if m is not mapper}}
186 model_type_hints = cls.get_model_type_hints(model_type, namespace=namespace)
187 model_name = model_type.__name__
188
189 # the same hybrid property descriptor can be included in `all_orm_descriptors` multiple times, once
190 # for each method name it is bound to. We only need to see it once, so track views of it here.
191 seen_hybrid_descriptors: set[hybrid_property] = set()
192 for key, orm_descriptor in mapper.all_orm_descriptors.items():
193 if isinstance(orm_descriptor, hybrid_property):
194 if orm_descriptor in seen_hybrid_descriptors:
195 continue
196
197 seen_hybrid_descriptors.add(orm_descriptor)
198
199 yield from cls.handle_orm_descriptor(
200 orm_descriptor.extension_type, key, orm_descriptor, model_type_hints, model_name
201 )
202
203 @classmethod
204 def detect_nested_field(cls, field_definition: FieldDefinition) -> bool:
205 return field_definition.is_subclass_of(DeclarativeBase)
206
207
208 def _detect_defaults(elem: ElementType) -> tuple[Any, Any]:
209 default: Any = Empty
210 default_factory: Any = None # pyright:ignore
211 if sqla_default := getattr(elem, "default", None):
212 if sqla_default.is_scalar:
213 default = sqla_default.arg
214 elif sqla_default.is_callable:
215
216 def default_factory(d: Any = sqla_default) -> Any:
217 return d.arg({})
218
219 elif sqla_default.is_sequence or sqla_default.is_sentinel:
220 # SQLAlchemy sequences represent server side defaults
221 # so we cannot infer a reasonable default value for
222 # them on the client side
223 pass
224 else:
225 raise ValueError("Unexpected default type")
226 else:
227 if (
228 isinstance(elem, RelationshipProperty)
229 and detect_nullable_relationship(elem)
230 or getattr(elem, "nullable", False)
231 ):
232 default = None
233
234 return default, default_factory
235
236
237 def parse_type_from_element(elem: ElementType) -> FieldDefinition:
238 """Parses a type from a SQLAlchemy element.
239
240 Args:
241 elem: The SQLAlchemy element to parse.
242
243 Returns:
244 FieldDefinition: The parsed type.
245
246 Raises:
247 ImproperlyConfiguredException: If the type cannot be parsed.
248 """
249
250 if isinstance(elem, Column):
251 if elem.nullable:
252 return FieldDefinition.from_annotation(Optional[elem.type.python_type])
253 return FieldDefinition.from_annotation(elem.type.python_type)
254
255 if isinstance(elem, RelationshipProperty):
256 if elem.direction in (RelationshipDirection.ONETOMANY, RelationshipDirection.MANYTOMANY):
257 collection_type = FieldDefinition.from_annotation(elem.collection_class or list)
258 return FieldDefinition.from_annotation(collection_type.safe_generic_origin[elem.mapper.class_])
259
260 if detect_nullable_relationship(elem):
261 return FieldDefinition.from_annotation(Optional[elem.mapper.class_])
262
263 return FieldDefinition.from_annotation(elem.mapper.class_)
264
265 raise ImproperlyConfiguredException(
266 f"Unable to parse type from element '{elem}'. Consider adding a type hint.",
267 )
268
269
270 def detect_nullable_relationship(elem: RelationshipProperty) -> bool:
271 """Detects if a relationship is nullable.
272
273 This attempts to decide if we should allow a ``None`` default value for a relationship by looking at the
274 foreign key fields. If all foreign key fields are nullable, then we allow a ``None`` default value.
275
276 Args:
277 elem: The relationship to check.
278
279 Returns:
280 bool: ``True`` if the relationship is nullable, ``False`` otherwise.
281 """
282 return elem.direction == RelationshipDirection.MANYTOONE and all(c.nullable for c in elem.local_columns)
283
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/litestar/contrib/sqlalchemy/dto.py b/litestar/contrib/sqlalchemy/dto.py
--- a/litestar/contrib/sqlalchemy/dto.py
+++ b/litestar/contrib/sqlalchemy/dto.py
@@ -9,9 +9,11 @@
from sqlalchemy.ext.hybrid import HybridExtensionType, hybrid_property
from sqlalchemy.orm import (
ColumnProperty,
+ CompositeProperty,
DeclarativeBase,
InspectionAttr,
Mapped,
+ MappedColumn,
NotExtension,
QueryableAttribute,
RelationshipDirection,
@@ -35,7 +37,7 @@
T = TypeVar("T", bound="DeclarativeBase | Collection[DeclarativeBase]")
-ElementType: TypeAlias = "Column | RelationshipProperty"
+ElementType: TypeAlias = "Column | RelationshipProperty | CompositeProperty"
SQLA_NS = {**vars(orm), **vars(sql)}
@@ -72,7 +74,7 @@
if not isinstance(orm_descriptor.property.expression, Column):
raise NotImplementedError(f"Expected 'Column', got: '{orm_descriptor.property.expression}'")
elem = orm_descriptor.property.expression
- elif isinstance(orm_descriptor.property, RelationshipProperty):
+ elif isinstance(orm_descriptor.property, (RelationshipProperty, CompositeProperty)):
elem = orm_descriptor.property
else:
raise NotImplementedError(f"Unhandled property type: '{orm_descriptor.property}'")
@@ -189,6 +191,13 @@
# the same hybrid property descriptor can be included in `all_orm_descriptors` multiple times, once
# for each method name it is bound to. We only need to see it once, so track views of it here.
seen_hybrid_descriptors: set[hybrid_property] = set()
+ skipped_columns: set[str] = set()
+ for composite_property in mapper.composites:
+ for attr in composite_property.attrs:
+ if isinstance(attr, (MappedColumn, Column)):
+ skipped_columns.add(attr.name)
+ elif isinstance(attr, str):
+ skipped_columns.add(attr)
for key, orm_descriptor in mapper.all_orm_descriptors.items():
if isinstance(orm_descriptor, hybrid_property):
if orm_descriptor in seen_hybrid_descriptors:
@@ -196,6 +205,9 @@
seen_hybrid_descriptors.add(orm_descriptor)
+ if key in skipped_columns:
+ continue
+
yield from cls.handle_orm_descriptor(
orm_descriptor.extension_type, key, orm_descriptor, model_type_hints, model_name
)
@@ -262,6 +274,9 @@
return FieldDefinition.from_annotation(elem.mapper.class_)
+ if isinstance(elem, CompositeProperty):
+ return FieldDefinition.from_annotation(elem.composite_class)
+
raise ImproperlyConfiguredException(
f"Unable to parse type from element '{elem}'. Consider adding a type hint.",
)
|
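For context on what the patch above starts to handle, here is a minimal composite-attribute model in the style of the SQLAlchemy 2.0 documentation; the `Point`/`Vertex` names are illustrative and not taken from the Litestar test suite. With the change, `SQLAlchemyDTO` would expose a single `start: Point` field and skip the backing `x1`/`y1` columns instead of failing on the unhandled property type.

```python
import dataclasses

from sqlalchemy.orm import DeclarativeBase, Mapped, composite, mapped_column


@dataclasses.dataclass
class Point:
    x: int
    y: int


class Base(DeclarativeBase):
    pass


class Vertex(Base):
    __tablename__ = "vertices"

    id: Mapped[int] = mapped_column(primary_key=True)
    # A CompositeProperty backed by the x1/y1 columns; the composite class (Point)
    # is inferred from the Mapped annotation because Point is a dataclass.
    start: Mapped[Point] = composite(mapped_column("x1"), mapped_column("y1"))
```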
{"golden_diff": "diff --git a/litestar/contrib/sqlalchemy/dto.py b/litestar/contrib/sqlalchemy/dto.py\n--- a/litestar/contrib/sqlalchemy/dto.py\n+++ b/litestar/contrib/sqlalchemy/dto.py\n@@ -9,9 +9,11 @@\n from sqlalchemy.ext.hybrid import HybridExtensionType, hybrid_property\n from sqlalchemy.orm import (\n ColumnProperty,\n+ CompositeProperty,\n DeclarativeBase,\n InspectionAttr,\n Mapped,\n+ MappedColumn,\n NotExtension,\n QueryableAttribute,\n RelationshipDirection,\n@@ -35,7 +37,7 @@\n \n T = TypeVar(\"T\", bound=\"DeclarativeBase | Collection[DeclarativeBase]\")\n \n-ElementType: TypeAlias = \"Column | RelationshipProperty\"\n+ElementType: TypeAlias = \"Column | RelationshipProperty | CompositeProperty\"\n SQLA_NS = {**vars(orm), **vars(sql)}\n \n \n@@ -72,7 +74,7 @@\n if not isinstance(orm_descriptor.property.expression, Column):\n raise NotImplementedError(f\"Expected 'Column', got: '{orm_descriptor.property.expression}'\")\n elem = orm_descriptor.property.expression\n- elif isinstance(orm_descriptor.property, RelationshipProperty):\n+ elif isinstance(orm_descriptor.property, (RelationshipProperty, CompositeProperty)):\n elem = orm_descriptor.property\n else:\n raise NotImplementedError(f\"Unhandled property type: '{orm_descriptor.property}'\")\n@@ -189,6 +191,13 @@\n # the same hybrid property descriptor can be included in `all_orm_descriptors` multiple times, once\n # for each method name it is bound to. We only need to see it once, so track views of it here.\n seen_hybrid_descriptors: set[hybrid_property] = set()\n+ skipped_columns: set[str] = set()\n+ for composite_property in mapper.composites:\n+ for attr in composite_property.attrs:\n+ if isinstance(attr, (MappedColumn, Column)):\n+ skipped_columns.add(attr.name)\n+ elif isinstance(attr, str):\n+ skipped_columns.add(attr)\n for key, orm_descriptor in mapper.all_orm_descriptors.items():\n if isinstance(orm_descriptor, hybrid_property):\n if orm_descriptor in seen_hybrid_descriptors:\n@@ -196,6 +205,9 @@\n \n seen_hybrid_descriptors.add(orm_descriptor)\n \n+ if key in skipped_columns:\n+ continue\n+\n yield from cls.handle_orm_descriptor(\n orm_descriptor.extension_type, key, orm_descriptor, model_type_hints, model_name\n )\n@@ -262,6 +274,9 @@\n \n return FieldDefinition.from_annotation(elem.mapper.class_)\n \n+ if isinstance(elem, CompositeProperty):\n+ return FieldDefinition.from_annotation(elem.composite_class)\n+\n raise ImproperlyConfiguredException(\n f\"Unable to parse type from element '{elem}'. Consider adding a type hint.\",\n )\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). 
I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom dataclasses import replace\nfrom functools import singledispatchmethod\nfrom typing import TYPE_CHECKING, Collection, Generic, Optional, TypeVar\n\nfrom sqlalchemy import Column, inspect, orm, sql\nfrom sqlalchemy.ext.associationproxy import AssociationProxy, AssociationProxyExtensionType\nfrom sqlalchemy.ext.hybrid import HybridExtensionType, hybrid_property\nfrom sqlalchemy.orm import (\n ColumnProperty,\n DeclarativeBase,\n InspectionAttr,\n Mapped,\n NotExtension,\n QueryableAttribute,\n RelationshipDirection,\n RelationshipProperty,\n)\n\nfrom litestar.dto.base_dto import AbstractDTO\nfrom litestar.dto.data_structures import DTOFieldDefinition\nfrom litestar.dto.field import DTO_FIELD_META_KEY, DTOField, Mark\nfrom litestar.exceptions import ImproperlyConfiguredException\nfrom litestar.types.empty import Empty\nfrom litestar.typing import FieldDefinition\nfrom litestar.utils.signature import ParsedSignature\n\nif TYPE_CHECKING:\n from typing import Any, Generator\n\n from typing_extensions import TypeAlias\n\n__all__ = (\"SQLAlchemyDTO\",)\n\nT = TypeVar(\"T\", bound=\"DeclarativeBase | Collection[DeclarativeBase]\")\n\nElementType: TypeAlias = \"Column | RelationshipProperty\"\nSQLA_NS = {**vars(orm), **vars(sql)}\n\n\nclass SQLAlchemyDTO(AbstractDTO[T], Generic[T]):\n \"\"\"Support for domain modelling with SQLAlchemy.\"\"\"\n\n @singledispatchmethod\n @classmethod\n def handle_orm_descriptor(\n cls,\n extension_type: NotExtension | AssociationProxyExtensionType | HybridExtensionType,\n orm_descriptor: InspectionAttr,\n key: str,\n model_type_hints: dict[str, FieldDefinition],\n model_name: str,\n ) -> list[DTOFieldDefinition]:\n raise NotImplementedError(f\"Unsupported extension type: {extension_type}\")\n\n @handle_orm_descriptor.register(NotExtension)\n @classmethod\n def _(\n cls,\n extension_type: NotExtension,\n key: str,\n orm_descriptor: InspectionAttr,\n model_type_hints: dict[str, FieldDefinition],\n model_name: str,\n ) -> list[DTOFieldDefinition]:\n if not isinstance(orm_descriptor, QueryableAttribute):\n raise NotImplementedError(f\"Unexpected descriptor type for '{extension_type}': '{orm_descriptor}'\")\n\n elem: ElementType\n if isinstance(orm_descriptor.property, ColumnProperty):\n if not isinstance(orm_descriptor.property.expression, Column):\n raise NotImplementedError(f\"Expected 'Column', got: '{orm_descriptor.property.expression}'\")\n elem = orm_descriptor.property.expression\n elif isinstance(orm_descriptor.property, RelationshipProperty):\n elem = orm_descriptor.property\n else:\n raise NotImplementedError(f\"Unhandled property type: '{orm_descriptor.property}'\")\n\n default, default_factory = _detect_defaults(elem)\n\n try:\n if (field_definition := model_type_hints[key]).origin is Mapped:\n (field_definition,) = field_definition.inner_types\n else:\n raise NotImplementedError(f\"Expected 'Mapped' origin, got: '{field_definition.origin}'\")\n except KeyError:\n field_definition = parse_type_from_element(elem)\n\n return [\n DTOFieldDefinition.from_field_definition(\n field_definition=replace(\n field_definition,\n name=key,\n default=default,\n ),\n default_factory=default_factory,\n dto_field=elem.info.get(DTO_FIELD_META_KEY, DTOField()),\n model_name=model_name,\n )\n ]\n\n 
@handle_orm_descriptor.register(AssociationProxyExtensionType)\n @classmethod\n def _(\n cls,\n extension_type: AssociationProxyExtensionType,\n key: str,\n orm_descriptor: InspectionAttr,\n model_type_hints: dict[str, FieldDefinition],\n model_name: str,\n ) -> list[DTOFieldDefinition]:\n if not isinstance(orm_descriptor, AssociationProxy):\n raise NotImplementedError(f\"Unexpected descriptor type '{orm_descriptor}' for '{extension_type}'\")\n\n if (field_definition := model_type_hints[key]).origin is AssociationProxy:\n (field_definition,) = field_definition.inner_types\n else:\n raise NotImplementedError(f\"Expected 'AssociationProxy' origin, got: '{field_definition.origin}'\")\n\n return [\n DTOFieldDefinition.from_field_definition(\n field_definition=replace(\n field_definition,\n name=key,\n default=Empty,\n ),\n default_factory=None,\n dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.READ_ONLY)),\n model_name=model_name,\n )\n ]\n\n @handle_orm_descriptor.register(HybridExtensionType)\n @classmethod\n def _(\n cls,\n extension_type: HybridExtensionType,\n key: str,\n orm_descriptor: InspectionAttr,\n model_type_hints: dict[str, FieldDefinition],\n model_name: str,\n ) -> list[DTOFieldDefinition]:\n if not isinstance(orm_descriptor, hybrid_property):\n raise NotImplementedError(f\"Unexpected descriptor type '{orm_descriptor}' for '{extension_type}'\")\n\n getter_sig = ParsedSignature.from_fn(orm_descriptor.fget, {})\n\n field_defs = [\n DTOFieldDefinition.from_field_definition(\n field_definition=replace(\n getter_sig.return_type,\n name=orm_descriptor.__name__,\n default=Empty,\n ),\n default_factory=None,\n dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.READ_ONLY)),\n model_name=model_name,\n )\n ]\n\n if orm_descriptor.fset is not None:\n setter_sig = ParsedSignature.from_fn(orm_descriptor.fset, {})\n field_defs.append(\n DTOFieldDefinition.from_field_definition(\n field_definition=replace(\n next(iter(setter_sig.parameters.values())),\n name=orm_descriptor.__name__,\n default=Empty,\n ),\n default_factory=None,\n dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.WRITE_ONLY)),\n model_name=model_name,\n )\n )\n\n return field_defs\n\n @classmethod\n def generate_field_definitions(cls, model_type: type[DeclarativeBase]) -> Generator[DTOFieldDefinition, None, None]:\n if (mapper := inspect(model_type)) is None: # pragma: no cover\n raise RuntimeError(\"Unexpected `None` value for mapper.\")\n\n # includes SQLAlchemy names and other mapped class names in the forward reference resolution namespace\n namespace = {**SQLA_NS, **{m.class_.__name__: m.class_ for m in mapper.registry.mappers if m is not mapper}}\n model_type_hints = cls.get_model_type_hints(model_type, namespace=namespace)\n model_name = model_type.__name__\n\n # the same hybrid property descriptor can be included in `all_orm_descriptors` multiple times, once\n # for each method name it is bound to. 
We only need to see it once, so track views of it here.\n seen_hybrid_descriptors: set[hybrid_property] = set()\n for key, orm_descriptor in mapper.all_orm_descriptors.items():\n if isinstance(orm_descriptor, hybrid_property):\n if orm_descriptor in seen_hybrid_descriptors:\n continue\n\n seen_hybrid_descriptors.add(orm_descriptor)\n\n yield from cls.handle_orm_descriptor(\n orm_descriptor.extension_type, key, orm_descriptor, model_type_hints, model_name\n )\n\n @classmethod\n def detect_nested_field(cls, field_definition: FieldDefinition) -> bool:\n return field_definition.is_subclass_of(DeclarativeBase)\n\n\ndef _detect_defaults(elem: ElementType) -> tuple[Any, Any]:\n default: Any = Empty\n default_factory: Any = None # pyright:ignore\n if sqla_default := getattr(elem, \"default\", None):\n if sqla_default.is_scalar:\n default = sqla_default.arg\n elif sqla_default.is_callable:\n\n def default_factory(d: Any = sqla_default) -> Any:\n return d.arg({})\n\n elif sqla_default.is_sequence or sqla_default.is_sentinel:\n # SQLAlchemy sequences represent server side defaults\n # so we cannot infer a reasonable default value for\n # them on the client side\n pass\n else:\n raise ValueError(\"Unexpected default type\")\n else:\n if (\n isinstance(elem, RelationshipProperty)\n and detect_nullable_relationship(elem)\n or getattr(elem, \"nullable\", False)\n ):\n default = None\n\n return default, default_factory\n\n\ndef parse_type_from_element(elem: ElementType) -> FieldDefinition:\n \"\"\"Parses a type from a SQLAlchemy element.\n\n Args:\n elem: The SQLAlchemy element to parse.\n\n Returns:\n FieldDefinition: The parsed type.\n\n Raises:\n ImproperlyConfiguredException: If the type cannot be parsed.\n \"\"\"\n\n if isinstance(elem, Column):\n if elem.nullable:\n return FieldDefinition.from_annotation(Optional[elem.type.python_type])\n return FieldDefinition.from_annotation(elem.type.python_type)\n\n if isinstance(elem, RelationshipProperty):\n if elem.direction in (RelationshipDirection.ONETOMANY, RelationshipDirection.MANYTOMANY):\n collection_type = FieldDefinition.from_annotation(elem.collection_class or list)\n return FieldDefinition.from_annotation(collection_type.safe_generic_origin[elem.mapper.class_])\n\n if detect_nullable_relationship(elem):\n return FieldDefinition.from_annotation(Optional[elem.mapper.class_])\n\n return FieldDefinition.from_annotation(elem.mapper.class_)\n\n raise ImproperlyConfiguredException(\n f\"Unable to parse type from element '{elem}'. Consider adding a type hint.\",\n )\n\n\ndef detect_nullable_relationship(elem: RelationshipProperty) -> bool:\n \"\"\"Detects if a relationship is nullable.\n\n This attempts to decide if we should allow a ``None`` default value for a relationship by looking at the\n foreign key fields. 
If all foreign key fields are nullable, then we allow a ``None`` default value.\n\n Args:\n elem: The relationship to check.\n\n Returns:\n bool: ``True`` if the relationship is nullable, ``False`` otherwise.\n \"\"\"\n return elem.direction == RelationshipDirection.MANYTOONE and all(c.nullable for c in elem.local_columns)\n", "path": "litestar/contrib/sqlalchemy/dto.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom dataclasses import replace\nfrom functools import singledispatchmethod\nfrom typing import TYPE_CHECKING, Collection, Generic, Optional, TypeVar\n\nfrom sqlalchemy import Column, inspect, orm, sql\nfrom sqlalchemy.ext.associationproxy import AssociationProxy, AssociationProxyExtensionType\nfrom sqlalchemy.ext.hybrid import HybridExtensionType, hybrid_property\nfrom sqlalchemy.orm import (\n ColumnProperty,\n CompositeProperty,\n DeclarativeBase,\n InspectionAttr,\n Mapped,\n MappedColumn,\n NotExtension,\n QueryableAttribute,\n RelationshipDirection,\n RelationshipProperty,\n)\n\nfrom litestar.dto.base_dto import AbstractDTO\nfrom litestar.dto.data_structures import DTOFieldDefinition\nfrom litestar.dto.field import DTO_FIELD_META_KEY, DTOField, Mark\nfrom litestar.exceptions import ImproperlyConfiguredException\nfrom litestar.types.empty import Empty\nfrom litestar.typing import FieldDefinition\nfrom litestar.utils.signature import ParsedSignature\n\nif TYPE_CHECKING:\n from typing import Any, Generator\n\n from typing_extensions import TypeAlias\n\n__all__ = (\"SQLAlchemyDTO\",)\n\nT = TypeVar(\"T\", bound=\"DeclarativeBase | Collection[DeclarativeBase]\")\n\nElementType: TypeAlias = \"Column | RelationshipProperty | CompositeProperty\"\nSQLA_NS = {**vars(orm), **vars(sql)}\n\n\nclass SQLAlchemyDTO(AbstractDTO[T], Generic[T]):\n \"\"\"Support for domain modelling with SQLAlchemy.\"\"\"\n\n @singledispatchmethod\n @classmethod\n def handle_orm_descriptor(\n cls,\n extension_type: NotExtension | AssociationProxyExtensionType | HybridExtensionType,\n orm_descriptor: InspectionAttr,\n key: str,\n model_type_hints: dict[str, FieldDefinition],\n model_name: str,\n ) -> list[DTOFieldDefinition]:\n raise NotImplementedError(f\"Unsupported extension type: {extension_type}\")\n\n @handle_orm_descriptor.register(NotExtension)\n @classmethod\n def _(\n cls,\n extension_type: NotExtension,\n key: str,\n orm_descriptor: InspectionAttr,\n model_type_hints: dict[str, FieldDefinition],\n model_name: str,\n ) -> list[DTOFieldDefinition]:\n if not isinstance(orm_descriptor, QueryableAttribute):\n raise NotImplementedError(f\"Unexpected descriptor type for '{extension_type}': '{orm_descriptor}'\")\n\n elem: ElementType\n if isinstance(orm_descriptor.property, ColumnProperty):\n if not isinstance(orm_descriptor.property.expression, Column):\n raise NotImplementedError(f\"Expected 'Column', got: '{orm_descriptor.property.expression}'\")\n elem = orm_descriptor.property.expression\n elif isinstance(orm_descriptor.property, (RelationshipProperty, CompositeProperty)):\n elem = orm_descriptor.property\n else:\n raise NotImplementedError(f\"Unhandled property type: '{orm_descriptor.property}'\")\n\n default, default_factory = _detect_defaults(elem)\n\n try:\n if (field_definition := model_type_hints[key]).origin is Mapped:\n (field_definition,) = field_definition.inner_types\n else:\n raise NotImplementedError(f\"Expected 'Mapped' origin, got: '{field_definition.origin}'\")\n except KeyError:\n field_definition = parse_type_from_element(elem)\n\n return [\n 
DTOFieldDefinition.from_field_definition(\n field_definition=replace(\n field_definition,\n name=key,\n default=default,\n ),\n default_factory=default_factory,\n dto_field=elem.info.get(DTO_FIELD_META_KEY, DTOField()),\n model_name=model_name,\n )\n ]\n\n @handle_orm_descriptor.register(AssociationProxyExtensionType)\n @classmethod\n def _(\n cls,\n extension_type: AssociationProxyExtensionType,\n key: str,\n orm_descriptor: InspectionAttr,\n model_type_hints: dict[str, FieldDefinition],\n model_name: str,\n ) -> list[DTOFieldDefinition]:\n if not isinstance(orm_descriptor, AssociationProxy):\n raise NotImplementedError(f\"Unexpected descriptor type '{orm_descriptor}' for '{extension_type}'\")\n\n if (field_definition := model_type_hints[key]).origin is AssociationProxy:\n (field_definition,) = field_definition.inner_types\n else:\n raise NotImplementedError(f\"Expected 'AssociationProxy' origin, got: '{field_definition.origin}'\")\n\n return [\n DTOFieldDefinition.from_field_definition(\n field_definition=replace(\n field_definition,\n name=key,\n default=Empty,\n ),\n default_factory=None,\n dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.READ_ONLY)),\n model_name=model_name,\n )\n ]\n\n @handle_orm_descriptor.register(HybridExtensionType)\n @classmethod\n def _(\n cls,\n extension_type: HybridExtensionType,\n key: str,\n orm_descriptor: InspectionAttr,\n model_type_hints: dict[str, FieldDefinition],\n model_name: str,\n ) -> list[DTOFieldDefinition]:\n if not isinstance(orm_descriptor, hybrid_property):\n raise NotImplementedError(f\"Unexpected descriptor type '{orm_descriptor}' for '{extension_type}'\")\n\n getter_sig = ParsedSignature.from_fn(orm_descriptor.fget, {})\n\n field_defs = [\n DTOFieldDefinition.from_field_definition(\n field_definition=replace(\n getter_sig.return_type,\n name=orm_descriptor.__name__,\n default=Empty,\n ),\n default_factory=None,\n dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.READ_ONLY)),\n model_name=model_name,\n )\n ]\n\n if orm_descriptor.fset is not None:\n setter_sig = ParsedSignature.from_fn(orm_descriptor.fset, {})\n field_defs.append(\n DTOFieldDefinition.from_field_definition(\n field_definition=replace(\n next(iter(setter_sig.parameters.values())),\n name=orm_descriptor.__name__,\n default=Empty,\n ),\n default_factory=None,\n dto_field=orm_descriptor.info.get(DTO_FIELD_META_KEY, DTOField(mark=Mark.WRITE_ONLY)),\n model_name=model_name,\n )\n )\n\n return field_defs\n\n @classmethod\n def generate_field_definitions(cls, model_type: type[DeclarativeBase]) -> Generator[DTOFieldDefinition, None, None]:\n if (mapper := inspect(model_type)) is None: # pragma: no cover\n raise RuntimeError(\"Unexpected `None` value for mapper.\")\n\n # includes SQLAlchemy names and other mapped class names in the forward reference resolution namespace\n namespace = {**SQLA_NS, **{m.class_.__name__: m.class_ for m in mapper.registry.mappers if m is not mapper}}\n model_type_hints = cls.get_model_type_hints(model_type, namespace=namespace)\n model_name = model_type.__name__\n\n # the same hybrid property descriptor can be included in `all_orm_descriptors` multiple times, once\n # for each method name it is bound to. 
We only need to see it once, so track views of it here.\n seen_hybrid_descriptors: set[hybrid_property] = set()\n skipped_columns: set[str] = set()\n for composite_property in mapper.composites:\n for attr in composite_property.attrs:\n if isinstance(attr, (MappedColumn, Column)):\n skipped_columns.add(attr.name)\n elif isinstance(attr, str):\n skipped_columns.add(attr)\n for key, orm_descriptor in mapper.all_orm_descriptors.items():\n if isinstance(orm_descriptor, hybrid_property):\n if orm_descriptor in seen_hybrid_descriptors:\n continue\n\n seen_hybrid_descriptors.add(orm_descriptor)\n\n if key in skipped_columns:\n continue\n\n yield from cls.handle_orm_descriptor(\n orm_descriptor.extension_type, key, orm_descriptor, model_type_hints, model_name\n )\n\n @classmethod\n def detect_nested_field(cls, field_definition: FieldDefinition) -> bool:\n return field_definition.is_subclass_of(DeclarativeBase)\n\n\ndef _detect_defaults(elem: ElementType) -> tuple[Any, Any]:\n default: Any = Empty\n default_factory: Any = None # pyright:ignore\n if sqla_default := getattr(elem, \"default\", None):\n if sqla_default.is_scalar:\n default = sqla_default.arg\n elif sqla_default.is_callable:\n\n def default_factory(d: Any = sqla_default) -> Any:\n return d.arg({})\n\n elif sqla_default.is_sequence or sqla_default.is_sentinel:\n # SQLAlchemy sequences represent server side defaults\n # so we cannot infer a reasonable default value for\n # them on the client side\n pass\n else:\n raise ValueError(\"Unexpected default type\")\n else:\n if (\n isinstance(elem, RelationshipProperty)\n and detect_nullable_relationship(elem)\n or getattr(elem, \"nullable\", False)\n ):\n default = None\n\n return default, default_factory\n\n\ndef parse_type_from_element(elem: ElementType) -> FieldDefinition:\n \"\"\"Parses a type from a SQLAlchemy element.\n\n Args:\n elem: The SQLAlchemy element to parse.\n\n Returns:\n FieldDefinition: The parsed type.\n\n Raises:\n ImproperlyConfiguredException: If the type cannot be parsed.\n \"\"\"\n\n if isinstance(elem, Column):\n if elem.nullable:\n return FieldDefinition.from_annotation(Optional[elem.type.python_type])\n return FieldDefinition.from_annotation(elem.type.python_type)\n\n if isinstance(elem, RelationshipProperty):\n if elem.direction in (RelationshipDirection.ONETOMANY, RelationshipDirection.MANYTOMANY):\n collection_type = FieldDefinition.from_annotation(elem.collection_class or list)\n return FieldDefinition.from_annotation(collection_type.safe_generic_origin[elem.mapper.class_])\n\n if detect_nullable_relationship(elem):\n return FieldDefinition.from_annotation(Optional[elem.mapper.class_])\n\n return FieldDefinition.from_annotation(elem.mapper.class_)\n\n if isinstance(elem, CompositeProperty):\n return FieldDefinition.from_annotation(elem.composite_class)\n\n raise ImproperlyConfiguredException(\n f\"Unable to parse type from element '{elem}'. Consider adding a type hint.\",\n )\n\n\ndef detect_nullable_relationship(elem: RelationshipProperty) -> bool:\n \"\"\"Detects if a relationship is nullable.\n\n This attempts to decide if we should allow a ``None`` default value for a relationship by looking at the\n foreign key fields. 
If all foreign key fields are nullable, then we allow a ``None`` default value.\n\n Args:\n elem: The relationship to check.\n\n Returns:\n bool: ``True`` if the relationship is nullable, ``False`` otherwise.\n \"\"\"\n return elem.direction == RelationshipDirection.MANYTOONE and all(c.nullable for c in elem.local_columns)\n", "path": "litestar/contrib/sqlalchemy/dto.py"}]}
| 3,321 | 631 |
| gh_patches_debug_3104 | rasdani/github-patches | git_diff | certbot__certbot-2004 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
don't add includeSubDomains for Strict-Transport-Security header
Hi,
I think the letsencrypt client shouldn’t add `includeSubDomains` to the Strict-Transport-Security header by default.
If you let the letsencrypt client modify the configuration for a webserver which hosts example.com, this would make it possible to reach subdomain hosts (foo.example.com) only with https. If you forget just one webserver which serves a subdomain by http only, you can't reach it any more.
And as browsers actually don't have an option to clear the HSTS cache, it won't help you to remove the includeSubDomains from the webserver config later.
includeSubDomains is something that an admin should only add if he truly knows that all subdomains are reachable by https. So I think this should never be added by an automatic tool, like the letsencrypt client.
RFC 6797 section 11.4 [1] also describes some implications of includeSubDomains
Chris
[1] https://tools.ietf.org/html/rfc6797#section-11.4
--- END ISSUE ---
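For illustration only (a minimal sketch, not part of the original issue), here is what dropping `includeSubDomains` looks like at the level of the plugin's header arguments. The constant mirrors `HSTS_ARGS` from `letsencrypt-apache/letsencrypt_apache/constants.py` (shown below), and the printed string approximates the Apache `Header` directive those arguments correspond to:

```python
# Sketch: HSTS header arguments without includeSubDomains, mirroring
# HSTS_ARGS from letsencrypt_apache/constants.py as shown below.
HSTS_ARGS = ["always", "set", "Strict-Transport-Security",
             "\"max-age=31536000\""]

# Roughly the Apache directive these arguments correspond to:
#   Header always set Strict-Transport-Security "max-age=31536000"
print("Header " + " ".join(HSTS_ARGS))
```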
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `letsencrypt-apache/letsencrypt_apache/constants.py`
Content:
```
1 """Apache plugin constants."""
2 import pkg_resources
3
4
5 CLI_DEFAULTS = dict(
6 server_root="/etc/apache2",
7 ctl="apache2ctl",
8 enmod="a2enmod",
9 dismod="a2dismod",
10 le_vhost_ext="-le-ssl.conf",
11 )
12 """CLI defaults."""
13
14 MOD_SSL_CONF_DEST = "options-ssl-apache.conf"
15 """Name of the mod_ssl config file as saved in `IConfig.config_dir`."""
16
17 MOD_SSL_CONF_SRC = pkg_resources.resource_filename(
18 "letsencrypt_apache", "options-ssl-apache.conf")
19 """Path to the Apache mod_ssl config file found in the Let's Encrypt
20 distribution."""
21
22 AUGEAS_LENS_DIR = pkg_resources.resource_filename(
23 "letsencrypt_apache", "augeas_lens")
24 """Path to the Augeas lens directory"""
25
26 REWRITE_HTTPS_ARGS = [
27 "^", "https://%{SERVER_NAME}%{REQUEST_URI}", "[L,QSA,R=permanent]"]
28 """Apache version<2.3.9 rewrite rule arguments used for redirections to https vhost"""
29
30 REWRITE_HTTPS_ARGS_WITH_END = [
31 "^", "https://%{SERVER_NAME}%{REQUEST_URI}", "[END,QSA,R=permanent]"]
32 """Apache version >= 2.3.9 rewrite rule arguments used for redirections to
33 https vhost"""
34
35 HSTS_ARGS = ["always", "set", "Strict-Transport-Security",
36 "\"max-age=31536000; includeSubDomains\""]
37 """Apache header arguments for HSTS"""
38
39 UIR_ARGS = ["always", "set", "Content-Security-Policy",
40 "upgrade-insecure-requests"]
41
42 HEADER_ARGS = {"Strict-Transport-Security": HSTS_ARGS,
43 "Upgrade-Insecure-Requests": UIR_ARGS}
44
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/letsencrypt-apache/letsencrypt_apache/constants.py b/letsencrypt-apache/letsencrypt_apache/constants.py
--- a/letsencrypt-apache/letsencrypt_apache/constants.py
+++ b/letsencrypt-apache/letsencrypt_apache/constants.py
@@ -33,7 +33,7 @@
https vhost"""
HSTS_ARGS = ["always", "set", "Strict-Transport-Security",
- "\"max-age=31536000; includeSubDomains\""]
+ "\"max-age=31536000\""]
"""Apache header arguments for HSTS"""
UIR_ARGS = ["always", "set", "Content-Security-Policy",
|
{"golden_diff": "diff --git a/letsencrypt-apache/letsencrypt_apache/constants.py b/letsencrypt-apache/letsencrypt_apache/constants.py\n--- a/letsencrypt-apache/letsencrypt_apache/constants.py\n+++ b/letsencrypt-apache/letsencrypt_apache/constants.py\n@@ -33,7 +33,7 @@\n https vhost\"\"\"\n \n HSTS_ARGS = [\"always\", \"set\", \"Strict-Transport-Security\",\n- \"\\\"max-age=31536000; includeSubDomains\\\"\"]\n+ \"\\\"max-age=31536000\\\"\"]\n \"\"\"Apache header arguments for HSTS\"\"\"\n \n UIR_ARGS = [\"always\", \"set\", \"Content-Security-Policy\",\n", "issue": "don't add includeSubDomains for Strict-Transport-Security header\nHi,\n\nI think the letsencrypt client shouldn\u2019t add `includeSubDomains` to the Strict-Transport-Security header by default.\n\nIf you let the letsencrypt client modify the configuration for a webserver which hosts example.com, this would make it only possible to reach subdomain hosts (foo.example.com) with https. If you forgot only one webserver which serves a subdomain by http only, you can't reach this one any more.\nAnd as browsers actually don't have an option to clear the HSTS cache, It won't help you to remove the includeSubDomains later from the webserver config.\n\nincludeSubDomains is something, that an admin should only add if he truly knows, that all subdomains are reachable by https. So I think this should never be added by an automatic tool, like the letsencrypt client.\n\nRFC 6797 section 11.1 [1] also describes some implications of includeSubDomains\n\nChris\n\n[1] https://tools.ietf.org/html/rfc6797#section-11.4\n\n", "before_files": [{"content": "\"\"\"Apache plugin constants.\"\"\"\nimport pkg_resources\n\n\nCLI_DEFAULTS = dict(\n server_root=\"/etc/apache2\",\n ctl=\"apache2ctl\",\n enmod=\"a2enmod\",\n dismod=\"a2dismod\",\n le_vhost_ext=\"-le-ssl.conf\",\n)\n\"\"\"CLI defaults.\"\"\"\n\nMOD_SSL_CONF_DEST = \"options-ssl-apache.conf\"\n\"\"\"Name of the mod_ssl config file as saved in `IConfig.config_dir`.\"\"\"\n\nMOD_SSL_CONF_SRC = pkg_resources.resource_filename(\n \"letsencrypt_apache\", \"options-ssl-apache.conf\")\n\"\"\"Path to the Apache mod_ssl config file found in the Let's Encrypt\ndistribution.\"\"\"\n\nAUGEAS_LENS_DIR = pkg_resources.resource_filename(\n \"letsencrypt_apache\", \"augeas_lens\")\n\"\"\"Path to the Augeas lens directory\"\"\"\n\nREWRITE_HTTPS_ARGS = [\n \"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[L,QSA,R=permanent]\"]\n\"\"\"Apache version<2.3.9 rewrite rule arguments used for redirections to https vhost\"\"\"\n\nREWRITE_HTTPS_ARGS_WITH_END = [\n \"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[END,QSA,R=permanent]\"]\n\"\"\"Apache version >= 2.3.9 rewrite rule arguments used for redirections to\n https vhost\"\"\"\n\nHSTS_ARGS = [\"always\", \"set\", \"Strict-Transport-Security\",\n \"\\\"max-age=31536000; includeSubDomains\\\"\"]\n\"\"\"Apache header arguments for HSTS\"\"\"\n\nUIR_ARGS = [\"always\", \"set\", \"Content-Security-Policy\",\n \"upgrade-insecure-requests\"]\n\nHEADER_ARGS = {\"Strict-Transport-Security\": HSTS_ARGS,\n \"Upgrade-Insecure-Requests\": UIR_ARGS}\n\n", "path": "letsencrypt-apache/letsencrypt_apache/constants.py"}], "after_files": [{"content": "\"\"\"Apache plugin constants.\"\"\"\nimport pkg_resources\n\n\nCLI_DEFAULTS = dict(\n server_root=\"/etc/apache2\",\n ctl=\"apache2ctl\",\n enmod=\"a2enmod\",\n dismod=\"a2dismod\",\n le_vhost_ext=\"-le-ssl.conf\",\n)\n\"\"\"CLI defaults.\"\"\"\n\nMOD_SSL_CONF_DEST = \"options-ssl-apache.conf\"\n\"\"\"Name of the mod_ssl config file 
as saved in `IConfig.config_dir`.\"\"\"\n\nMOD_SSL_CONF_SRC = pkg_resources.resource_filename(\n \"letsencrypt_apache\", \"options-ssl-apache.conf\")\n\"\"\"Path to the Apache mod_ssl config file found in the Let's Encrypt\ndistribution.\"\"\"\n\nAUGEAS_LENS_DIR = pkg_resources.resource_filename(\n \"letsencrypt_apache\", \"augeas_lens\")\n\"\"\"Path to the Augeas lens directory\"\"\"\n\nREWRITE_HTTPS_ARGS = [\n \"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[L,QSA,R=permanent]\"]\n\"\"\"Apache version<2.3.9 rewrite rule arguments used for redirections to https vhost\"\"\"\n\nREWRITE_HTTPS_ARGS_WITH_END = [\n \"^\", \"https://%{SERVER_NAME}%{REQUEST_URI}\", \"[END,QSA,R=permanent]\"]\n\"\"\"Apache version >= 2.3.9 rewrite rule arguments used for redirections to\n https vhost\"\"\"\n\nHSTS_ARGS = [\"always\", \"set\", \"Strict-Transport-Security\",\n \"\\\"max-age=31536000\\\"\"]\n\"\"\"Apache header arguments for HSTS\"\"\"\n\nUIR_ARGS = [\"always\", \"set\", \"Content-Security-Policy\",\n \"upgrade-insecure-requests\"]\n\nHEADER_ARGS = {\"Strict-Transport-Security\": HSTS_ARGS,\n \"Upgrade-Insecure-Requests\": UIR_ARGS}\n\n", "path": "letsencrypt-apache/letsencrypt_apache/constants.py"}]}
| 964 | 157 |
| gh_patches_debug_41488 | rasdani/github-patches | git_diff | chainer__chainer-5029 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add requirements check for ideep4py
Add requirements check for iDeep4py as we do in cupy.
Refs #4933
--- END ISSUE ---
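For illustration only, a minimal sketch of the kind of import-time version gate the patch further below introduces (simplified, reusing that patch's naming; treating a build without `__version__` as an unsupported release is the assumption taken from it):

```python
# Simplified sketch of a version-gated availability check for ideep4py.
_ideep_version = None
try:
    import ideep4py as ideep  # NOQA
    # Builds that lack __version__ are treated as an unsupported old release.
    _ideep_version = 2 if hasattr(ideep, '__version__') else 1
except ImportError:
    _ideep_version = None


def is_ideep_available():
    # Only the supported iDeep version counts as "available".
    return _ideep_version == 2
```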
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/_runtime_info.py`
Content:
```
1 import platform
2 import sys
3
4 import numpy
5 import six
6
7 import chainer
8 from chainer.backends import cuda
9
10
11 class _RuntimeInfo(object):
12
13 chainer_version = None
14 numpy_version = None
15 cuda_info = None
16
17 def __init__(self):
18 self.chainer_version = chainer.__version__
19 self.numpy_version = numpy.__version__
20 self.platform_version = platform.platform()
21 if cuda.available:
22 self.cuda_info = cuda.cupyx.get_runtime_info()
23 else:
24 self.cuda_info = None
25
26 def __str__(self):
27 s = six.StringIO()
28 s.write('''Platform: {}\n'''.format(self.platform_version))
29 s.write('''Chainer: {}\n'''.format(self.chainer_version))
30 s.write('''NumPy: {}\n'''.format(self.numpy_version))
31 if self.cuda_info is None:
32 s.write('''CuPy: Not Available\n''')
33 else:
34 s.write('''CuPy:\n''')
35 for line in str(self.cuda_info).splitlines():
36 s.write(''' {}\n'''.format(line))
37 return s.getvalue()
38
39
40 def get_runtime_info():
41 return _RuntimeInfo()
42
43
44 def print_runtime_info(out=None):
45 if out is None:
46 out = sys.stdout
47 out.write(str(get_runtime_info()))
48 if hasattr(out, 'flush'):
49 out.flush()
50
```
Path: `chainer/backends/intel64.py`
Content:
```
1 from __future__ import absolute_import
2
3 import chainer
4 from chainer.configuration import config
5
6
7 _ideep_version = None
8 _error = None
9
10 try:
11 import ideep4py as ideep # NOQA
12 from ideep4py import mdarray # NOQA
13 _ideep_version = 0
14 except ImportError as e:
15 _error = e
16
17 class mdarray(object):
18 pass # for type testing
19
20
21 # ------------------------------------------------------------------------------
22 # ideep configuration
23 # ------------------------------------------------------------------------------
24 _SHOULD_USE_IDEEP = {
25 '==always': {'always': True, 'auto': False, 'never': False},
26 '>=auto': {'always': True, 'auto': True, 'never': False},
27 }
28
29
30 def is_ideep_available():
31 """Returns if iDeep is available.
32
33 Returns:
34 bool: ``True`` if iDeep is installed.
35 """
36
37 return _ideep_version is not None
38
39
40 def check_ideep_available():
41 """Checks if iDeep is available.
42
43 When iDeep is correctly set up, nothing happens.
44 Otherwise it raises ``RuntimeError``.
45 """
46 if _ideep_version is None:
47 # If the error is missing shared object, append a message to
48 # redirect to the ideep website.
49 msg = str(_error)
50 if 'cannot open shared object file' in msg:
51 msg += ('\n\nEnsure iDeep requirements are satisfied: '
52 'https://github.com/intel/ideep')
53 raise RuntimeError(
54 'iDeep is not available.\n'
55 'Reason: {}: {}'.format(type(_error).__name__, msg))
56
57
58 def should_use_ideep(level):
59 """Determines if we should use iDeep.
60
61 This function checks ``chainer.config.use_ideep`` and availability
62 of ``ideep4py`` package.
63
64 Args:
65 level (str): iDeep use level. It must be either ``'==always'`` or
66 ``'>=auto'``. ``'==always'`` indicates that the ``use_ideep``
67 config must be ``'always'`` to use iDeep.
68
69 Returns:
70 bool: ``True`` if the caller should use iDeep.
71
72 """
73 if _ideep_version is None:
74 return False
75
76 # TODO(niboshi):
77 # Add lowest_version argument and compare with ideep version.
78 # Currently ideep does not provide a way to retrieve its version.
79
80 if level not in _SHOULD_USE_IDEEP:
81 raise ValueError('invalid iDeep use level: %s '
82 '(must be either of "==always" or ">=auto")' %
83 repr(level))
84
85 flags = _SHOULD_USE_IDEEP[level]
86
87 use_ideep = config.use_ideep
88 if use_ideep not in flags:
89 raise ValueError('invalid use_ideep configuration: %s '
90 '(must be either of "always", "auto", or "never")' %
91 repr(use_ideep))
92 return flags[use_ideep]
93
94
95 def inputs_all_ready(inputs, supported_ndim=(2, 4)):
96 """Checks if input arrays are supported for an iDeep primitive.
97
98 Before calling an iDeep primitive (e.g., ``ideep4py.linear.Forward``), you
99 need to make sure that all input arrays are ready for the primitive by
100 calling this function.
101 Information to be checked includes array types, dimesions and data types.
102 The function checks ``inputs`` info and ``supported_ndim``.
103
104 Inputs to be tested can be any of ``Variable``, ``numpy.ndarray`` or
105 ``ideep4py.mdarray``. However, all inputs to iDeep primitives must be
106 ``ideep4py.mdarray``. Callers of iDeep primitives are responsible of
107 converting all inputs to ``ideep4py.mdarray``.
108
109 Args:
110 inputs (sequence of arrays or variables):
111 Inputs to be checked.
112 supported_ndim (tuple of ints):
113 Supported ndim values for the iDeep primitive.
114
115 Returns:
116 bool: ``True`` if all conditions meet.
117
118 """
119
120 def _is_supported_array_type(a):
121 return isinstance(a, ideep.mdarray) or ideep.check_type([a])
122
123 if _ideep_version is None:
124 return False
125
126 inputs = [x.data if isinstance(x, chainer.variable.Variable)
127 else x for x in inputs]
128
129 return (ideep.check_ndim(inputs, supported_ndim)
130 and all([_is_supported_array_type(a) for a in inputs]))
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/chainer/_runtime_info.py b/chainer/_runtime_info.py
--- a/chainer/_runtime_info.py
+++ b/chainer/_runtime_info.py
@@ -6,6 +6,7 @@
import chainer
from chainer.backends import cuda
+from chainer.backends import intel64
class _RuntimeInfo(object):
@@ -13,6 +14,7 @@
chainer_version = None
numpy_version = None
cuda_info = None
+ ideep_version = None
def __init__(self):
self.chainer_version = chainer.__version__
@@ -22,6 +24,10 @@
self.cuda_info = cuda.cupyx.get_runtime_info()
else:
self.cuda_info = None
+ if intel64.is_ideep_available():
+ self.ideep_version = intel64.ideep.__version__
+ else:
+ self.ideep_version = None
def __str__(self):
s = six.StringIO()
@@ -34,6 +40,10 @@
s.write('''CuPy:\n''')
for line in str(self.cuda_info).splitlines():
s.write(''' {}\n'''.format(line))
+ if self.ideep_version is None:
+ s.write('''iDeep: Not Available\n''')
+ else:
+ s.write('''iDeep: {}\n'''.format(self.ideep_version))
return s.getvalue()
diff --git a/chainer/backends/intel64.py b/chainer/backends/intel64.py
--- a/chainer/backends/intel64.py
+++ b/chainer/backends/intel64.py
@@ -10,9 +10,10 @@
try:
import ideep4py as ideep # NOQA
from ideep4py import mdarray # NOQA
- _ideep_version = 0
+ _ideep_version = 2 if hasattr(ideep, '__version__') else 1
except ImportError as e:
_error = e
+ _ideep_version = None
class mdarray(object):
pass # for type testing
@@ -31,10 +32,9 @@
"""Returns if iDeep is available.
Returns:
- bool: ``True`` if iDeep is installed.
+ bool: ``True`` if the supported version of iDeep is installed.
"""
-
- return _ideep_version is not None
+ return _ideep_version is not None and _ideep_version == 2
def check_ideep_available():
@@ -53,6 +53,10 @@
raise RuntimeError(
'iDeep is not available.\n'
'Reason: {}: {}'.format(type(_error).__name__, msg))
+ elif _ideep_version != 2:
+ raise RuntimeError(
+ 'iDeep is not available.\n'
+ 'Reason: Unsupported iDeep version ({})'.format(_ideep_version))
def should_use_ideep(level):
@@ -70,7 +74,7 @@
bool: ``True`` if the caller should use iDeep.
"""
- if _ideep_version is None:
+ if not is_ideep_available():
return False
# TODO(niboshi):
@@ -120,7 +124,7 @@
def _is_supported_array_type(a):
return isinstance(a, ideep.mdarray) or ideep.check_type([a])
- if _ideep_version is None:
+ if not is_ideep_available():
return False
inputs = [x.data if isinstance(x, chainer.variable.Variable)
|
{"golden_diff": "diff --git a/chainer/_runtime_info.py b/chainer/_runtime_info.py\n--- a/chainer/_runtime_info.py\n+++ b/chainer/_runtime_info.py\n@@ -6,6 +6,7 @@\n \n import chainer\n from chainer.backends import cuda\n+from chainer.backends import intel64\n \n \n class _RuntimeInfo(object):\n@@ -13,6 +14,7 @@\n chainer_version = None\n numpy_version = None\n cuda_info = None\n+ ideep_version = None\n \n def __init__(self):\n self.chainer_version = chainer.__version__\n@@ -22,6 +24,10 @@\n self.cuda_info = cuda.cupyx.get_runtime_info()\n else:\n self.cuda_info = None\n+ if intel64.is_ideep_available():\n+ self.ideep_version = intel64.ideep.__version__\n+ else:\n+ self.ideep_version = None\n \n def __str__(self):\n s = six.StringIO()\n@@ -34,6 +40,10 @@\n s.write('''CuPy:\\n''')\n for line in str(self.cuda_info).splitlines():\n s.write(''' {}\\n'''.format(line))\n+ if self.ideep_version is None:\n+ s.write('''iDeep: Not Available\\n''')\n+ else:\n+ s.write('''iDeep: {}\\n'''.format(self.ideep_version))\n return s.getvalue()\n \n \ndiff --git a/chainer/backends/intel64.py b/chainer/backends/intel64.py\n--- a/chainer/backends/intel64.py\n+++ b/chainer/backends/intel64.py\n@@ -10,9 +10,10 @@\n try:\n import ideep4py as ideep # NOQA\n from ideep4py import mdarray # NOQA\n- _ideep_version = 0\n+ _ideep_version = 2 if hasattr(ideep, '__version__') else 1\n except ImportError as e:\n _error = e\n+ _ideep_version = None\n \n class mdarray(object):\n pass # for type testing\n@@ -31,10 +32,9 @@\n \"\"\"Returns if iDeep is available.\n \n Returns:\n- bool: ``True`` if iDeep is installed.\n+ bool: ``True`` if the supported version of iDeep is installed.\n \"\"\"\n-\n- return _ideep_version is not None\n+ return _ideep_version is not None and _ideep_version == 2\n \n \n def check_ideep_available():\n@@ -53,6 +53,10 @@\n raise RuntimeError(\n 'iDeep is not available.\\n'\n 'Reason: {}: {}'.format(type(_error).__name__, msg))\n+ elif _ideep_version != 2:\n+ raise RuntimeError(\n+ 'iDeep is not available.\\n'\n+ 'Reason: Unsupported iDeep version ({})'.format(_ideep_version))\n \n \n def should_use_ideep(level):\n@@ -70,7 +74,7 @@\n bool: ``True`` if the caller should use iDeep.\n \n \"\"\"\n- if _ideep_version is None:\n+ if not is_ideep_available():\n return False\n \n # TODO(niboshi):\n@@ -120,7 +124,7 @@\n def _is_supported_array_type(a):\n return isinstance(a, ideep.mdarray) or ideep.check_type([a])\n \n- if _ideep_version is None:\n+ if not is_ideep_available():\n return False\n \n inputs = [x.data if isinstance(x, chainer.variable.Variable)\n", "issue": "Add requirements check for ideep4py\nAdd requirements check for iDeep4py as we do in cupy.\r\nRefs #4933\n", "before_files": [{"content": "import platform\nimport sys\n\nimport numpy\nimport six\n\nimport chainer\nfrom chainer.backends import cuda\n\n\nclass _RuntimeInfo(object):\n\n chainer_version = None\n numpy_version = None\n cuda_info = None\n\n def __init__(self):\n self.chainer_version = chainer.__version__\n self.numpy_version = numpy.__version__\n self.platform_version = platform.platform()\n if cuda.available:\n self.cuda_info = cuda.cupyx.get_runtime_info()\n else:\n self.cuda_info = None\n\n def __str__(self):\n s = six.StringIO()\n s.write('''Platform: {}\\n'''.format(self.platform_version))\n s.write('''Chainer: {}\\n'''.format(self.chainer_version))\n s.write('''NumPy: {}\\n'''.format(self.numpy_version))\n if self.cuda_info is None:\n s.write('''CuPy: Not Available\\n''')\n else:\n s.write('''CuPy:\\n''')\n for line in 
str(self.cuda_info).splitlines():\n s.write(''' {}\\n'''.format(line))\n return s.getvalue()\n\n\ndef get_runtime_info():\n return _RuntimeInfo()\n\n\ndef print_runtime_info(out=None):\n if out is None:\n out = sys.stdout\n out.write(str(get_runtime_info()))\n if hasattr(out, 'flush'):\n out.flush()\n", "path": "chainer/_runtime_info.py"}, {"content": "from __future__ import absolute_import\n\nimport chainer\nfrom chainer.configuration import config\n\n\n_ideep_version = None\n_error = None\n\ntry:\n import ideep4py as ideep # NOQA\n from ideep4py import mdarray # NOQA\n _ideep_version = 0\nexcept ImportError as e:\n _error = e\n\n class mdarray(object):\n pass # for type testing\n\n\n# ------------------------------------------------------------------------------\n# ideep configuration\n# ------------------------------------------------------------------------------\n_SHOULD_USE_IDEEP = {\n '==always': {'always': True, 'auto': False, 'never': False},\n '>=auto': {'always': True, 'auto': True, 'never': False},\n}\n\n\ndef is_ideep_available():\n \"\"\"Returns if iDeep is available.\n\n Returns:\n bool: ``True`` if iDeep is installed.\n \"\"\"\n\n return _ideep_version is not None\n\n\ndef check_ideep_available():\n \"\"\"Checks if iDeep is available.\n\n When iDeep is correctly set up, nothing happens.\n Otherwise it raises ``RuntimeError``.\n \"\"\"\n if _ideep_version is None:\n # If the error is missing shared object, append a message to\n # redirect to the ideep website.\n msg = str(_error)\n if 'cannot open shared object file' in msg:\n msg += ('\\n\\nEnsure iDeep requirements are satisfied: '\n 'https://github.com/intel/ideep')\n raise RuntimeError(\n 'iDeep is not available.\\n'\n 'Reason: {}: {}'.format(type(_error).__name__, msg))\n\n\ndef should_use_ideep(level):\n \"\"\"Determines if we should use iDeep.\n\n This function checks ``chainer.config.use_ideep`` and availability\n of ``ideep4py`` package.\n\n Args:\n level (str): iDeep use level. It must be either ``'==always'`` or\n ``'>=auto'``. ``'==always'`` indicates that the ``use_ideep``\n config must be ``'always'`` to use iDeep.\n\n Returns:\n bool: ``True`` if the caller should use iDeep.\n\n \"\"\"\n if _ideep_version is None:\n return False\n\n # TODO(niboshi):\n # Add lowest_version argument and compare with ideep version.\n # Currently ideep does not provide a way to retrieve its version.\n\n if level not in _SHOULD_USE_IDEEP:\n raise ValueError('invalid iDeep use level: %s '\n '(must be either of \"==always\" or \">=auto\")' %\n repr(level))\n\n flags = _SHOULD_USE_IDEEP[level]\n\n use_ideep = config.use_ideep\n if use_ideep not in flags:\n raise ValueError('invalid use_ideep configuration: %s '\n '(must be either of \"always\", \"auto\", or \"never\")' %\n repr(use_ideep))\n return flags[use_ideep]\n\n\ndef inputs_all_ready(inputs, supported_ndim=(2, 4)):\n \"\"\"Checks if input arrays are supported for an iDeep primitive.\n\n Before calling an iDeep primitive (e.g., ``ideep4py.linear.Forward``), you\n need to make sure that all input arrays are ready for the primitive by\n calling this function.\n Information to be checked includes array types, dimesions and data types.\n The function checks ``inputs`` info and ``supported_ndim``.\n\n Inputs to be tested can be any of ``Variable``, ``numpy.ndarray`` or\n ``ideep4py.mdarray``. However, all inputs to iDeep primitives must be\n ``ideep4py.mdarray``. 
Callers of iDeep primitives are responsible of\n converting all inputs to ``ideep4py.mdarray``.\n\n Args:\n inputs (sequence of arrays or variables):\n Inputs to be checked.\n supported_ndim (tuple of ints):\n Supported ndim values for the iDeep primitive.\n\n Returns:\n bool: ``True`` if all conditions meet.\n\n \"\"\"\n\n def _is_supported_array_type(a):\n return isinstance(a, ideep.mdarray) or ideep.check_type([a])\n\n if _ideep_version is None:\n return False\n\n inputs = [x.data if isinstance(x, chainer.variable.Variable)\n else x for x in inputs]\n\n return (ideep.check_ndim(inputs, supported_ndim)\n and all([_is_supported_array_type(a) for a in inputs]))\n", "path": "chainer/backends/intel64.py"}], "after_files": [{"content": "import platform\nimport sys\n\nimport numpy\nimport six\n\nimport chainer\nfrom chainer.backends import cuda\nfrom chainer.backends import intel64\n\n\nclass _RuntimeInfo(object):\n\n chainer_version = None\n numpy_version = None\n cuda_info = None\n ideep_version = None\n\n def __init__(self):\n self.chainer_version = chainer.__version__\n self.numpy_version = numpy.__version__\n self.platform_version = platform.platform()\n if cuda.available:\n self.cuda_info = cuda.cupyx.get_runtime_info()\n else:\n self.cuda_info = None\n if intel64.is_ideep_available():\n self.ideep_version = intel64.ideep.__version__\n else:\n self.ideep_version = None\n\n def __str__(self):\n s = six.StringIO()\n s.write('''Platform: {}\\n'''.format(self.platform_version))\n s.write('''Chainer: {}\\n'''.format(self.chainer_version))\n s.write('''NumPy: {}\\n'''.format(self.numpy_version))\n if self.cuda_info is None:\n s.write('''CuPy: Not Available\\n''')\n else:\n s.write('''CuPy:\\n''')\n for line in str(self.cuda_info).splitlines():\n s.write(''' {}\\n'''.format(line))\n if self.ideep_version is None:\n s.write('''iDeep: Not Available\\n''')\n else:\n s.write('''iDeep: {}\\n'''.format(self.ideep_version))\n return s.getvalue()\n\n\ndef get_runtime_info():\n return _RuntimeInfo()\n\n\ndef print_runtime_info(out=None):\n if out is None:\n out = sys.stdout\n out.write(str(get_runtime_info()))\n if hasattr(out, 'flush'):\n out.flush()\n", "path": "chainer/_runtime_info.py"}, {"content": "from __future__ import absolute_import\n\nimport chainer\nfrom chainer.configuration import config\n\n\n_ideep_version = None\n_error = None\n\ntry:\n import ideep4py as ideep # NOQA\n from ideep4py import mdarray # NOQA\n _ideep_version = 2 if hasattr(ideep, '__version__') else 1\nexcept ImportError as e:\n _error = e\n _ideep_version = None\n\n class mdarray(object):\n pass # for type testing\n\n\n# ------------------------------------------------------------------------------\n# ideep configuration\n# ------------------------------------------------------------------------------\n_SHOULD_USE_IDEEP = {\n '==always': {'always': True, 'auto': False, 'never': False},\n '>=auto': {'always': True, 'auto': True, 'never': False},\n}\n\n\ndef is_ideep_available():\n \"\"\"Returns if iDeep is available.\n\n Returns:\n bool: ``True`` if the supported version of iDeep is installed.\n \"\"\"\n return _ideep_version is not None and _ideep_version == 2\n\n\ndef check_ideep_available():\n \"\"\"Checks if iDeep is available.\n\n When iDeep is correctly set up, nothing happens.\n Otherwise it raises ``RuntimeError``.\n \"\"\"\n if _ideep_version is None:\n # If the error is missing shared object, append a message to\n # redirect to the ideep website.\n msg = str(_error)\n if 'cannot open shared object file' in msg:\n 
msg += ('\\n\\nEnsure iDeep requirements are satisfied: '\n 'https://github.com/intel/ideep')\n raise RuntimeError(\n 'iDeep is not available.\\n'\n 'Reason: {}: {}'.format(type(_error).__name__, msg))\n elif _ideep_version != 2:\n raise RuntimeError(\n 'iDeep is not available.\\n'\n 'Reason: Unsupported iDeep version ({})'.format(_ideep_version))\n\n\ndef should_use_ideep(level):\n \"\"\"Determines if we should use iDeep.\n\n This function checks ``chainer.config.use_ideep`` and availability\n of ``ideep4py`` package.\n\n Args:\n level (str): iDeep use level. It must be either ``'==always'`` or\n ``'>=auto'``. ``'==always'`` indicates that the ``use_ideep``\n config must be ``'always'`` to use iDeep.\n\n Returns:\n bool: ``True`` if the caller should use iDeep.\n\n \"\"\"\n if not is_ideep_available():\n return False\n\n # TODO(niboshi):\n # Add lowest_version argument and compare with ideep version.\n # Currently ideep does not provide a way to retrieve its version.\n\n if level not in _SHOULD_USE_IDEEP:\n raise ValueError('invalid iDeep use level: %s '\n '(must be either of \"==always\" or \">=auto\")' %\n repr(level))\n\n flags = _SHOULD_USE_IDEEP[level]\n\n use_ideep = config.use_ideep\n if use_ideep not in flags:\n raise ValueError('invalid use_ideep configuration: %s '\n '(must be either of \"always\", \"auto\", or \"never\")' %\n repr(use_ideep))\n return flags[use_ideep]\n\n\ndef inputs_all_ready(inputs, supported_ndim=(2, 4)):\n \"\"\"Checks if input arrays are supported for an iDeep primitive.\n\n Before calling an iDeep primitive (e.g., ``ideep4py.linear.Forward``), you\n need to make sure that all input arrays are ready for the primitive by\n calling this function.\n Information to be checked includes array types, dimesions and data types.\n The function checks ``inputs`` info and ``supported_ndim``.\n\n Inputs to be tested can be any of ``Variable``, ``numpy.ndarray`` or\n ``ideep4py.mdarray``. However, all inputs to iDeep primitives must be\n ``ideep4py.mdarray``. Callers of iDeep primitives are responsible of\n converting all inputs to ``ideep4py.mdarray``.\n\n Args:\n inputs (sequence of arrays or variables):\n Inputs to be checked.\n supported_ndim (tuple of ints):\n Supported ndim values for the iDeep primitive.\n\n Returns:\n bool: ``True`` if all conditions meet.\n\n \"\"\"\n\n def _is_supported_array_type(a):\n return isinstance(a, ideep.mdarray) or ideep.check_type([a])\n\n if not is_ideep_available():\n return False\n\n inputs = [x.data if isinstance(x, chainer.variable.Variable)\n else x for x in inputs]\n\n return (ideep.check_ndim(inputs, supported_ndim)\n and all([_is_supported_array_type(a) for a in inputs]))\n", "path": "chainer/backends/intel64.py"}]}
| 1,988 | 812 |
| gh_patches_debug_21326 | rasdani/github-patches | git_diff | hylang__hy-1266 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Single quote crashes hy2py
## x.hy
```
'
```
## `hy2py x.hy`
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "hy/cmdline.py", line 406, in hy2py_main
if stdin_text is None
File "hy/cmdline.py", line 186, in pretty_error
print(e, file=sys.stderr)
File "hy/lex/exceptions.py", line 43, in __str__
source = self.source.split("\n")
AttributeError: 'NoneType' object has no attribute 'split'
```
--- END ISSUE ---
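For illustration only, a stripped-down stand-in (not Hy's real classes) showing why printing the error crashes when `source` is left as `None`, and how threading the original buffer through, as the patch further below does, avoids it:

```python
# Minimal stand-in for hy.lex.exceptions.LexException.
class LexException(Exception):
    def __init__(self, message, lineno, colno, source=None):
        super(LexException, self).__init__(message)
        self.message = message
        self.lineno = lineno
        self.colno = colno
        self.source = source  # Hy hard-coded this to None before the fix

    def __str__(self):
        # Hy's original code calls self.source.split("\n"), which raises
        # AttributeError while source is still None; the fallback below is
        # only for this sketch.
        lines = (self.source or "").split("\n")
        context = lines[self.lineno - 1] if 0 < self.lineno <= len(lines) else ""
        return "LexException: %s\n  %s" % (self.message, context)


# Re-raising with the original buffer keeps __str__ away from None:
print(LexException("Could not identify the next token.", 1, 1, "'"))
```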
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hy/lex/exceptions.py`
Content:
```
1 # Copyright (c) 2013 Nicolas Dandrimont <[email protected]>
2 # Copyright (c) 2013 Bob Tolbert <[email protected]>
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a
5 # copy of this software and associated documentation files (the "Software"),
6 # to deal in the Software without restriction, including without limitation
7 # the rights to use, copy, modify, merge, publish, distribute, sublicense,
8 # and/or sell copies of the Software, and to permit persons to whom the
9 # Software is furnished to do so, subject to the following conditions:
10 #
11 # The above copyright notice and this permission notice shall be included in
12 # all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
16 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
17 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
18 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
19 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
20 # DEALINGS IN THE SOFTWARE.
21
22 from hy.errors import HyError
23
24
25 class LexException(HyError):
26 """Error during the Lexing of a Hython expression."""
27 def __init__(self, message, lineno, colno):
28 super(LexException, self).__init__(message)
29 self.message = message
30 self.lineno = lineno
31 self.colno = colno
32 self.source = None
33 self.filename = '<stdin>'
34
35 def __str__(self):
36 from hy.errors import colored
37
38 line = self.lineno
39 start = self.colno
40
41 result = ""
42
43 source = self.source.split("\n")
44
45 if line > 0 and start > 0:
46 result += ' File "%s", line %d, column %d\n\n' % (self.filename,
47 line,
48 start)
49
50 if len(self.source) > 0:
51 source_line = source[line-1]
52 else:
53 source_line = ""
54
55 result += ' %s\n' % colored.red(source_line)
56 result += ' %s%s\n' % (' '*(start-1), colored.green('^'))
57
58 result += colored.yellow("LexException: %s\n\n" % self.message)
59
60 return result
61
62
63 class PrematureEndOfInput(LexException):
64 """We got a premature end of input"""
65 def __init__(self, message):
66 super(PrematureEndOfInput, self).__init__(message, -1, -1)
67
```
Path: `hy/lex/__init__.py`
Content:
```
1 # Copyright (c) 2013 Paul Tagliamonte <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a
4 # copy of this software and associated documentation files (the "Software"),
5 # to deal in the Software without restriction, including without limitation
6 # the rights to use, copy, modify, merge, publish, distribute, sublicense,
7 # and/or sell copies of the Software, and to permit persons to whom the
8 # Software is furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
16 # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 from rply.errors import LexingError
22
23 from hy.lex.exceptions import LexException, PrematureEndOfInput # NOQA
24 from hy.lex.lexer import lexer
25 from hy.lex.parser import parser
26
27
28 def tokenize(buf):
29 """
30 Tokenize a Lisp file or string buffer into internal Hy objects.
31 """
32 try:
33 return parser.parse(lexer.lex(buf))
34 except LexingError as e:
35 pos = e.getsourcepos()
36 raise LexException("Could not identify the next token.",
37 pos.lineno, pos.colno)
38 except LexException as e:
39 if e.source is None:
40 e.source = buf
41 raise
42
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/hy/lex/__init__.py b/hy/lex/__init__.py
--- a/hy/lex/__init__.py
+++ b/hy/lex/__init__.py
@@ -34,7 +34,7 @@
except LexingError as e:
pos = e.getsourcepos()
raise LexException("Could not identify the next token.",
- pos.lineno, pos.colno)
+ pos.lineno, pos.colno, buf)
except LexException as e:
if e.source is None:
e.source = buf
diff --git a/hy/lex/exceptions.py b/hy/lex/exceptions.py
--- a/hy/lex/exceptions.py
+++ b/hy/lex/exceptions.py
@@ -24,12 +24,12 @@
class LexException(HyError):
"""Error during the Lexing of a Hython expression."""
- def __init__(self, message, lineno, colno):
+ def __init__(self, message, lineno, colno, source=None):
super(LexException, self).__init__(message)
self.message = message
self.lineno = lineno
self.colno = colno
- self.source = None
+ self.source = source
self.filename = '<stdin>'
def __str__(self):
|
{"golden_diff": "diff --git a/hy/lex/__init__.py b/hy/lex/__init__.py\n--- a/hy/lex/__init__.py\n+++ b/hy/lex/__init__.py\n@@ -34,7 +34,7 @@\n except LexingError as e:\n pos = e.getsourcepos()\n raise LexException(\"Could not identify the next token.\",\n- pos.lineno, pos.colno)\n+ pos.lineno, pos.colno, buf)\n except LexException as e:\n if e.source is None:\n e.source = buf\ndiff --git a/hy/lex/exceptions.py b/hy/lex/exceptions.py\n--- a/hy/lex/exceptions.py\n+++ b/hy/lex/exceptions.py\n@@ -24,12 +24,12 @@\n \n class LexException(HyError):\n \"\"\"Error during the Lexing of a Hython expression.\"\"\"\n- def __init__(self, message, lineno, colno):\n+ def __init__(self, message, lineno, colno, source=None):\n super(LexException, self).__init__(message)\n self.message = message\n self.lineno = lineno\n self.colno = colno\n- self.source = None\n+ self.source = source\n self.filename = '<stdin>'\n \n def __str__(self):\n", "issue": "Single quote crashes hy2py\n## x.hy\r\n\r\n```\r\n'\r\n```\r\n\r\n## `hy2py x.hy`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"hy/cmdline.py\", line 406, in hy2py_main\r\n if stdin_text is None\r\n File \"hy/cmdline.py\", line 186, in pretty_error\r\n print(e, file=sys.stderr)\r\n File \"hy/lex/exceptions.py\", line 43, in __str__\r\n source = self.source.split(\"\\n\")\r\nAttributeError: 'NoneType' object has no attribute 'split'\r\n```\n", "before_files": [{"content": "# Copyright (c) 2013 Nicolas Dandrimont <[email protected]>\n# Copyright (c) 2013 Bob Tolbert <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom hy.errors import HyError\n\n\nclass LexException(HyError):\n \"\"\"Error during the Lexing of a Hython expression.\"\"\"\n def __init__(self, message, lineno, colno):\n super(LexException, self).__init__(message)\n self.message = message\n self.lineno = lineno\n self.colno = colno\n self.source = None\n self.filename = '<stdin>'\n\n def __str__(self):\n from hy.errors import colored\n\n line = self.lineno\n start = self.colno\n\n result = \"\"\n\n source = self.source.split(\"\\n\")\n\n if line > 0 and start > 0:\n result += ' File \"%s\", line %d, column %d\\n\\n' % (self.filename,\n line,\n start)\n\n if len(self.source) > 0:\n source_line = source[line-1]\n else:\n source_line = \"\"\n\n result += ' %s\\n' % colored.red(source_line)\n result += ' %s%s\\n' % (' '*(start-1), colored.green('^'))\n\n result += colored.yellow(\"LexException: %s\\n\\n\" % self.message)\n\n return result\n\n\nclass PrematureEndOfInput(LexException):\n \"\"\"We got a premature end of input\"\"\"\n def __init__(self, message):\n super(PrematureEndOfInput, self).__init__(message, -1, -1)\n", "path": "hy/lex/exceptions.py"}, {"content": "# Copyright (c) 2013 Paul Tagliamonte <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom rply.errors import LexingError\n\nfrom hy.lex.exceptions import LexException, PrematureEndOfInput # NOQA\nfrom hy.lex.lexer import lexer\nfrom hy.lex.parser import parser\n\n\ndef tokenize(buf):\n \"\"\"\n Tokenize a Lisp file or string buffer into internal Hy objects.\n \"\"\"\n try:\n return parser.parse(lexer.lex(buf))\n except LexingError as e:\n pos = e.getsourcepos()\n raise LexException(\"Could not identify the next token.\",\n pos.lineno, pos.colno)\n except LexException as e:\n if e.source is None:\n e.source = buf\n raise\n", "path": "hy/lex/__init__.py"}], "after_files": [{"content": "# Copyright (c) 2013 Nicolas Dandrimont <[email protected]>\n# Copyright (c) 2013 Bob Tolbert <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom hy.errors import HyError\n\n\nclass LexException(HyError):\n \"\"\"Error during the Lexing of a Hython expression.\"\"\"\n def __init__(self, message, lineno, colno, source=None):\n super(LexException, self).__init__(message)\n self.message = message\n self.lineno = lineno\n self.colno = colno\n self.source = source\n self.filename = '<stdin>'\n\n def __str__(self):\n from hy.errors import colored\n\n line = self.lineno\n start = self.colno\n\n result = \"\"\n\n source = self.source.split(\"\\n\")\n\n if line > 0 and start > 0:\n result += ' File \"%s\", line %d, column %d\\n\\n' % (self.filename,\n line,\n start)\n\n if len(self.source) > 0:\n source_line = source[line-1]\n else:\n source_line = \"\"\n\n result += ' %s\\n' % colored.red(source_line)\n result += ' %s%s\\n' % (' '*(start-1), colored.green('^'))\n\n result += colored.yellow(\"LexException: %s\\n\\n\" % self.message)\n\n return result\n\n\nclass PrematureEndOfInput(LexException):\n \"\"\"We got a premature end of input\"\"\"\n def __init__(self, message):\n super(PrematureEndOfInput, self).__init__(message, -1, -1)\n", "path": "hy/lex/exceptions.py"}, {"content": "# Copyright (c) 2013 Paul Tagliamonte <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nfrom rply.errors import LexingError\n\nfrom hy.lex.exceptions import LexException, PrematureEndOfInput # NOQA\nfrom hy.lex.lexer import lexer\nfrom hy.lex.parser import parser\n\n\ndef tokenize(buf):\n \"\"\"\n Tokenize a Lisp file or string buffer into internal Hy objects.\n \"\"\"\n try:\n return parser.parse(lexer.lex(buf))\n except LexingError as e:\n pos = e.getsourcepos()\n raise LexException(\"Could not identify the next token.\",\n pos.lineno, pos.colno, buf)\n except LexException as e:\n if e.source is None:\n e.source = buf\n raise\n", "path": "hy/lex/__init__.py"}]}
| 1,593 | 296 |
| gh_patches_debug_2702 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-4811 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Delete untracked tags on fetch step
Currently, if the user deletes a tag, they need to wipe the environment for this change to be reflected in their version list.
There are some solutions for deleting untracked tags (they require more than two commands). But I found that the newest versions of git have the `--prune-tags` option, which is used as `git fetch --prune --prune-tags` (git > 2.17). We need to update git on the servers (we use 2.7.4) and change the fetch command. Or we can find a way to wipe the environment if we detect something like this case.
Raised in https://github.com/rtfd/readthedocs.org/pull/3913#issuecomment-396673349
--- END ISSUE ---
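For illustration, a minimal sketch of the single-command prune the issue describes (the `prune_remote_tags` helper and `repo_dir` argument are hypothetical, git >= 2.17 is assumed, and this is not one of the repository files below):
```python
# Hypothetical helper: with git >= 2.17 (assumption), one fetch drops both
# stale remote branches and tags that were deleted upstream.
import subprocess


def prune_remote_tags(repo_dir: str) -> None:
    subprocess.check_call(
        ["git", "fetch", "--tags", "--prune", "--prune-tags"],
        cwd=repo_dir,
    )
```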
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/vcs_support/backends/git.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Git-related utilities."""
3
4 from __future__ import (
5 absolute_import, division, print_function, unicode_literals)
6
7 import csv
8 import logging
9 import os
10 import re
11
12 import git
13 from builtins import str
14 from django.core.exceptions import ValidationError
15 from git.exc import BadName
16 from six import PY2, StringIO
17
18 from readthedocs.config import ALL
19 from readthedocs.projects.exceptions import RepositoryError
20 from readthedocs.projects.validators import validate_submodule_url
21 from readthedocs.vcs_support.base import BaseVCS, VCSVersion
22
23 log = logging.getLogger(__name__)
24
25
26 class Backend(BaseVCS):
27
28 """Git VCS backend."""
29
30 supports_tags = True
31 supports_branches = True
32 supports_submodules = True
33 fallback_branch = 'master' # default branch
34
35 def __init__(self, *args, **kwargs):
36 super(Backend, self).__init__(*args, **kwargs)
37 self.token = kwargs.get('token', None)
38 self.repo_url = self._get_clone_url()
39
40 def _get_clone_url(self):
41 if '://' in self.repo_url:
42 hacked_url = self.repo_url.split('://')[1]
43 hacked_url = re.sub('.git$', '', hacked_url)
44 clone_url = 'https://%s' % hacked_url
45 if self.token:
46 clone_url = 'https://%s@%s' % (self.token, hacked_url)
47 return clone_url
48 # Don't edit URL because all hosts aren't the same
49 # else:
50 # clone_url = 'git://%s' % (hacked_url)
51 return self.repo_url
52
53 def set_remote_url(self, url):
54 return self.run('git', 'remote', 'set-url', 'origin', url)
55
56 def update(self):
57 # Use checkout() to update repo
58 # TODO: See where we call this
59 self.checkout()
60
61 def repo_exists(self):
62 code, _, _ = self.run('git', 'status', record=False)
63 return code == 0
64
65 def are_submodules_available(self, config):
66 """Test whether git submodule checkout step should be performed."""
67 # TODO remove this after users migrate to a config file
68 from readthedocs.projects.models import Feature
69 submodules_in_config = (
70 config.submodules.exclude != ALL or
71 config.submodules.include
72 )
73 if (self.project.has_feature(Feature.SKIP_SUBMODULES) or
74 not submodules_in_config):
75 return False
76
77 # Keep compatibility with previous projects
78 code, out, _ = self.run('git', 'submodule', 'status', record=False)
79 return code == 0 and bool(out)
80
81 def validate_submodules(self, config):
82 """
83 Returns the submodules and check that its URLs are valid.
84
85 .. note::
86
87 Allways call after `self.are_submodules_available`.
88
89 :returns: tuple(bool, list)
90
91 Returns true if all required submodules URLs are valid.
92 Returns a list of all required submodules:
93 - Include is `ALL`, returns all submodules avaliable.
94 - Include is a list, returns just those.
95 - Exclude is `ALL` - this should never happen.
96 - Exlude is a list, returns all avaliable submodules
97 but those from the list.
98 """
99 repo = git.Repo(self.working_dir)
100 submodules = {
101 sub.path: sub
102 for sub in repo.submodules
103 }
104
105 for sub_path in config.submodules.exclude:
106 path = sub_path.rstrip('/')
107 if path in submodules:
108 del submodules[path]
109
110 if config.submodules.include != ALL and config.submodules.include:
111 submodules_include = {}
112 for sub_path in config.submodules.include:
113 path = sub_path.rstrip('/')
114 submodules_include[path] = submodules[path]
115 submodules = submodules_include
116
117 for path, submodule in submodules.items():
118 try:
119 validate_submodule_url(submodule.url)
120 except ValidationError:
121 return False, []
122 return True, submodules.keys()
123
124 def fetch(self):
125 code, _, _ = self.run('git', 'fetch', '--tags', '--prune')
126 if code != 0:
127 raise RepositoryError
128
129 def checkout_revision(self, revision=None):
130 if not revision:
131 branch = self.default_branch or self.fallback_branch
132 revision = 'origin/%s' % branch
133
134 code, out, err = self.run('git', 'checkout', '--force', revision)
135 if code != 0:
136 log.warning("Failed to checkout revision '%s': %s", revision, code)
137 return [code, out, err]
138
139 def clone(self):
140 """
141 Clone the repository.
142
143 .. note::
144
145 Temporarily, we support skipping submodule recursive clone via a
146 feature flag. This will eventually be configurable with our YAML
147 config.
148 """
149 # TODO remove with https://github.com/rtfd/readthedocs-build/issues/30
150 from readthedocs.projects.models import Feature
151 cmd = ['git', 'clone']
152 cmd.extend([self.repo_url, '.'])
153 code, _, _ = self.run(*cmd)
154 if code != 0:
155 raise RepositoryError
156
157 @property
158 def tags(self):
159 versions = []
160 repo = git.Repo(self.working_dir)
161 for tag in repo.tags:
162 try:
163 versions.append(VCSVersion(self, str(tag.commit), str(tag)))
164 except ValueError as e:
165 # ValueError: Cannot resolve commit as tag TAGNAME points to a
166 # blob object - use the `.object` property instead to access it
167 # This is not a real tag for us, so we skip it
168 # https://github.com/rtfd/readthedocs.org/issues/4440
169 log.warning('Git tag skipped: %s', tag, exc_info=True)
170 continue
171 return versions
172
173 @property
174 def branches(self):
175 # Only show remote branches
176 retcode, stdout, _ = self.run(
177 'git',
178 'branch',
179 '-r',
180 record_as_success=True,
181 )
182 # error (or no branches found)
183 if retcode != 0:
184 return []
185 return self.parse_branches(stdout)
186
187 def parse_branches(self, data):
188 """
189 Parse output of git branch -r.
190
191 e.g.:
192
193 origin/2.0.X
194 origin/HEAD -> origin/master
195 origin/develop
196 origin/master
197 origin/release/2.0.0
198 origin/release/2.1.0
199 """
200 clean_branches = []
201 # StringIO below is expecting Unicode data, so ensure that it gets it.
202 if not isinstance(data, str):
203 data = str(data)
204 delimiter = str(' ').encode('utf-8') if PY2 else str(' ')
205 raw_branches = csv.reader(StringIO(data), delimiter=delimiter)
206 for branch in raw_branches:
207 branch = [f for f in branch if f not in ('', '*')]
208 # Handle empty branches
209 if branch:
210 branch = branch[0]
211 if branch.startswith('origin/'):
212 verbose_name = branch.replace('origin/', '')
213 if verbose_name in ['HEAD']:
214 continue
215 clean_branches.append(
216 VCSVersion(self, branch, verbose_name))
217 else:
218 clean_branches.append(VCSVersion(self, branch, branch))
219 return clean_branches
220
221 @property
222 def commit(self):
223 _, stdout, _ = self.run('git', 'rev-parse', 'HEAD')
224 return stdout.strip()
225
226 def checkout(self, identifier=None):
227 self.check_working_dir()
228
229 # Clone or update repository
230 if self.repo_exists():
231 self.set_remote_url(self.repo_url)
232 self.fetch()
233 else:
234 self.make_clean_working_dir()
235 self.clone()
236
237 # Find proper identifier
238 if not identifier:
239 identifier = self.default_branch or self.fallback_branch
240
241 identifier = self.find_ref(identifier)
242
243 # Checkout the correct identifier for this branch.
244 code, out, err = self.checkout_revision(identifier)
245 if code != 0:
246 return code, out, err
247
248 # Clean any remains of previous checkouts
249 self.run('git', 'clean', '-d', '-f', '-f')
250 return code, out, err
251
252 def update_submodules(self, config):
253 if self.are_submodules_available(config):
254 valid, submodules = self.validate_submodules(config)
255 if valid:
256 self.checkout_submodules(submodules, config)
257 else:
258 raise RepositoryError(RepositoryError.INVALID_SUBMODULES)
259
260 def checkout_submodules(self, submodules, config):
261 """Checkout all repository submodules."""
262 self.run('git', 'submodule', 'sync')
263 cmd = [
264 'git',
265 'submodule',
266 'update',
267 '--init',
268 '--force',
269 ]
270 if config.submodules.recursive:
271 cmd.append('--recursive')
272 cmd += submodules
273 self.run(*cmd)
274
275 def find_ref(self, ref):
276 # Check if ref starts with 'origin/'
277 if ref.startswith('origin/'):
278 return ref
279
280 # Check if ref is a branch of the origin remote
281 if self.ref_exists('remotes/origin/' + ref):
282 return 'origin/' + ref
283
284 return ref
285
286 def ref_exists(self, ref):
287 try:
288 r = git.Repo(self.working_dir)
289 if r.commit(ref):
290 return True
291 except (BadName, ValueError):
292 return False
293 return False
294
295 @property
296 def env(self):
297 env = super(Backend, self).env
298 env['GIT_DIR'] = os.path.join(self.working_dir, '.git')
299 # Don't prompt for username, this requires Git 2.3+
300 env['GIT_TERMINAL_PROMPT'] = '0'
301 return env
302
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/readthedocs/vcs_support/backends/git.py b/readthedocs/vcs_support/backends/git.py
--- a/readthedocs/vcs_support/backends/git.py
+++ b/readthedocs/vcs_support/backends/git.py
@@ -122,7 +122,9 @@
return True, submodules.keys()
def fetch(self):
- code, _, _ = self.run('git', 'fetch', '--tags', '--prune')
+ code, _, _ = self.run(
+ 'git', 'fetch', '--tags', '--prune', '--prune-tags',
+ )
if code != 0:
raise RepositoryError
|
{"golden_diff": "diff --git a/readthedocs/vcs_support/backends/git.py b/readthedocs/vcs_support/backends/git.py\n--- a/readthedocs/vcs_support/backends/git.py\n+++ b/readthedocs/vcs_support/backends/git.py\n@@ -122,7 +122,9 @@\n return True, submodules.keys()\n \n def fetch(self):\n- code, _, _ = self.run('git', 'fetch', '--tags', '--prune')\n+ code, _, _ = self.run(\n+ 'git', 'fetch', '--tags', '--prune', '--prune-tags',\n+ )\n if code != 0:\n raise RepositoryError\n", "issue": "Delete untracked tags on fetch step\nCurrently, if the user deletes a tag, it needs to wipe the environment for this change be reflected in their version list.\r\n\r\nThere are some solutions to delete untracked tags (require more than 2 commands). But I found that the newest version of git has the `--prune-tags` option, which is used as `git fetch --prune --prune-tags` (`git >2.17`). We need to update git on the servers (we use 2.7.4) and change the fetch command. Or we can find a way to wipe the environment if we detect something like this case.\r\n\r\nRaised in https://github.com/rtfd/readthedocs.org/pull/3913#issuecomment-396673349\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Git-related utilities.\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport csv\nimport logging\nimport os\nimport re\n\nimport git\nfrom builtins import str\nfrom django.core.exceptions import ValidationError\nfrom git.exc import BadName\nfrom six import PY2, StringIO\n\nfrom readthedocs.config import ALL\nfrom readthedocs.projects.exceptions import RepositoryError\nfrom readthedocs.projects.validators import validate_submodule_url\nfrom readthedocs.vcs_support.base import BaseVCS, VCSVersion\n\nlog = logging.getLogger(__name__)\n\n\nclass Backend(BaseVCS):\n\n \"\"\"Git VCS backend.\"\"\"\n\n supports_tags = True\n supports_branches = True\n supports_submodules = True\n fallback_branch = 'master' # default branch\n\n def __init__(self, *args, **kwargs):\n super(Backend, self).__init__(*args, **kwargs)\n self.token = kwargs.get('token', None)\n self.repo_url = self._get_clone_url()\n\n def _get_clone_url(self):\n if '://' in self.repo_url:\n hacked_url = self.repo_url.split('://')[1]\n hacked_url = re.sub('.git$', '', hacked_url)\n clone_url = 'https://%s' % hacked_url\n if self.token:\n clone_url = 'https://%s@%s' % (self.token, hacked_url)\n return clone_url\n # Don't edit URL because all hosts aren't the same\n # else:\n # clone_url = 'git://%s' % (hacked_url)\n return self.repo_url\n\n def set_remote_url(self, url):\n return self.run('git', 'remote', 'set-url', 'origin', url)\n\n def update(self):\n # Use checkout() to update repo\n # TODO: See where we call this\n self.checkout()\n\n def repo_exists(self):\n code, _, _ = self.run('git', 'status', record=False)\n return code == 0\n\n def are_submodules_available(self, config):\n \"\"\"Test whether git submodule checkout step should be performed.\"\"\"\n # TODO remove this after users migrate to a config file\n from readthedocs.projects.models import Feature\n submodules_in_config = (\n config.submodules.exclude != ALL or\n config.submodules.include\n )\n if (self.project.has_feature(Feature.SKIP_SUBMODULES) or\n not submodules_in_config):\n return False\n\n # Keep compatibility with previous projects\n code, out, _ = self.run('git', 'submodule', 'status', record=False)\n return code == 0 and bool(out)\n\n def validate_submodules(self, config):\n \"\"\"\n Returns the submodules and check that its URLs are 
valid.\n\n .. note::\n\n Allways call after `self.are_submodules_available`.\n\n :returns: tuple(bool, list)\n\n Returns true if all required submodules URLs are valid.\n Returns a list of all required submodules:\n - Include is `ALL`, returns all submodules avaliable.\n - Include is a list, returns just those.\n - Exclude is `ALL` - this should never happen.\n - Exlude is a list, returns all avaliable submodules\n but those from the list.\n \"\"\"\n repo = git.Repo(self.working_dir)\n submodules = {\n sub.path: sub\n for sub in repo.submodules\n }\n\n for sub_path in config.submodules.exclude:\n path = sub_path.rstrip('/')\n if path in submodules:\n del submodules[path]\n\n if config.submodules.include != ALL and config.submodules.include:\n submodules_include = {}\n for sub_path in config.submodules.include:\n path = sub_path.rstrip('/')\n submodules_include[path] = submodules[path]\n submodules = submodules_include\n\n for path, submodule in submodules.items():\n try:\n validate_submodule_url(submodule.url)\n except ValidationError:\n return False, []\n return True, submodules.keys()\n\n def fetch(self):\n code, _, _ = self.run('git', 'fetch', '--tags', '--prune')\n if code != 0:\n raise RepositoryError\n\n def checkout_revision(self, revision=None):\n if not revision:\n branch = self.default_branch or self.fallback_branch\n revision = 'origin/%s' % branch\n\n code, out, err = self.run('git', 'checkout', '--force', revision)\n if code != 0:\n log.warning(\"Failed to checkout revision '%s': %s\", revision, code)\n return [code, out, err]\n\n def clone(self):\n \"\"\"\n Clone the repository.\n\n .. note::\n\n Temporarily, we support skipping submodule recursive clone via a\n feature flag. This will eventually be configurable with our YAML\n config.\n \"\"\"\n # TODO remove with https://github.com/rtfd/readthedocs-build/issues/30\n from readthedocs.projects.models import Feature\n cmd = ['git', 'clone']\n cmd.extend([self.repo_url, '.'])\n code, _, _ = self.run(*cmd)\n if code != 0:\n raise RepositoryError\n\n @property\n def tags(self):\n versions = []\n repo = git.Repo(self.working_dir)\n for tag in repo.tags:\n try:\n versions.append(VCSVersion(self, str(tag.commit), str(tag)))\n except ValueError as e:\n # ValueError: Cannot resolve commit as tag TAGNAME points to a\n # blob object - use the `.object` property instead to access it\n # This is not a real tag for us, so we skip it\n # https://github.com/rtfd/readthedocs.org/issues/4440\n log.warning('Git tag skipped: %s', tag, exc_info=True)\n continue\n return versions\n\n @property\n def branches(self):\n # Only show remote branches\n retcode, stdout, _ = self.run(\n 'git',\n 'branch',\n '-r',\n record_as_success=True,\n )\n # error (or no branches found)\n if retcode != 0:\n return []\n return self.parse_branches(stdout)\n\n def parse_branches(self, data):\n \"\"\"\n Parse output of git branch -r.\n\n e.g.:\n\n origin/2.0.X\n origin/HEAD -> origin/master\n origin/develop\n origin/master\n origin/release/2.0.0\n origin/release/2.1.0\n \"\"\"\n clean_branches = []\n # StringIO below is expecting Unicode data, so ensure that it gets it.\n if not isinstance(data, str):\n data = str(data)\n delimiter = str(' ').encode('utf-8') if PY2 else str(' ')\n raw_branches = csv.reader(StringIO(data), delimiter=delimiter)\n for branch in raw_branches:\n branch = [f for f in branch if f not in ('', '*')]\n # Handle empty branches\n if branch:\n branch = branch[0]\n if branch.startswith('origin/'):\n verbose_name = branch.replace('origin/', '')\n if 
verbose_name in ['HEAD']:\n continue\n clean_branches.append(\n VCSVersion(self, branch, verbose_name))\n else:\n clean_branches.append(VCSVersion(self, branch, branch))\n return clean_branches\n\n @property\n def commit(self):\n _, stdout, _ = self.run('git', 'rev-parse', 'HEAD')\n return stdout.strip()\n\n def checkout(self, identifier=None):\n self.check_working_dir()\n\n # Clone or update repository\n if self.repo_exists():\n self.set_remote_url(self.repo_url)\n self.fetch()\n else:\n self.make_clean_working_dir()\n self.clone()\n\n # Find proper identifier\n if not identifier:\n identifier = self.default_branch or self.fallback_branch\n\n identifier = self.find_ref(identifier)\n\n # Checkout the correct identifier for this branch.\n code, out, err = self.checkout_revision(identifier)\n if code != 0:\n return code, out, err\n\n # Clean any remains of previous checkouts\n self.run('git', 'clean', '-d', '-f', '-f')\n return code, out, err\n\n def update_submodules(self, config):\n if self.are_submodules_available(config):\n valid, submodules = self.validate_submodules(config)\n if valid:\n self.checkout_submodules(submodules, config)\n else:\n raise RepositoryError(RepositoryError.INVALID_SUBMODULES)\n\n def checkout_submodules(self, submodules, config):\n \"\"\"Checkout all repository submodules.\"\"\"\n self.run('git', 'submodule', 'sync')\n cmd = [\n 'git',\n 'submodule',\n 'update',\n '--init',\n '--force',\n ]\n if config.submodules.recursive:\n cmd.append('--recursive')\n cmd += submodules\n self.run(*cmd)\n\n def find_ref(self, ref):\n # Check if ref starts with 'origin/'\n if ref.startswith('origin/'):\n return ref\n\n # Check if ref is a branch of the origin remote\n if self.ref_exists('remotes/origin/' + ref):\n return 'origin/' + ref\n\n return ref\n\n def ref_exists(self, ref):\n try:\n r = git.Repo(self.working_dir)\n if r.commit(ref):\n return True\n except (BadName, ValueError):\n return False\n return False\n\n @property\n def env(self):\n env = super(Backend, self).env\n env['GIT_DIR'] = os.path.join(self.working_dir, '.git')\n # Don't prompt for username, this requires Git 2.3+\n env['GIT_TERMINAL_PROMPT'] = '0'\n return env\n", "path": "readthedocs/vcs_support/backends/git.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Git-related utilities.\"\"\"\n\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport csv\nimport logging\nimport os\nimport re\n\nimport git\nfrom builtins import str\nfrom django.core.exceptions import ValidationError\nfrom git.exc import BadName\nfrom six import PY2, StringIO\n\nfrom readthedocs.config import ALL\nfrom readthedocs.projects.exceptions import RepositoryError\nfrom readthedocs.projects.validators import validate_submodule_url\nfrom readthedocs.vcs_support.base import BaseVCS, VCSVersion\n\nlog = logging.getLogger(__name__)\n\n\nclass Backend(BaseVCS):\n\n \"\"\"Git VCS backend.\"\"\"\n\n supports_tags = True\n supports_branches = True\n supports_submodules = True\n fallback_branch = 'master' # default branch\n\n def __init__(self, *args, **kwargs):\n super(Backend, self).__init__(*args, **kwargs)\n self.token = kwargs.get('token', None)\n self.repo_url = self._get_clone_url()\n\n def _get_clone_url(self):\n if '://' in self.repo_url:\n hacked_url = self.repo_url.split('://')[1]\n hacked_url = re.sub('.git$', '', hacked_url)\n clone_url = 'https://%s' % hacked_url\n if self.token:\n clone_url = 'https://%s@%s' % (self.token, hacked_url)\n return clone_url\n # Don't edit URL 
because all hosts aren't the same\n # else:\n # clone_url = 'git://%s' % (hacked_url)\n return self.repo_url\n\n def set_remote_url(self, url):\n return self.run('git', 'remote', 'set-url', 'origin', url)\n\n def update(self):\n # Use checkout() to update repo\n # TODO: See where we call this\n self.checkout()\n\n def repo_exists(self):\n code, _, _ = self.run('git', 'status', record=False)\n return code == 0\n\n def are_submodules_available(self, config):\n \"\"\"Test whether git submodule checkout step should be performed.\"\"\"\n # TODO remove this after users migrate to a config file\n from readthedocs.projects.models import Feature\n submodules_in_config = (\n config.submodules.exclude != ALL or\n config.submodules.include\n )\n if (self.project.has_feature(Feature.SKIP_SUBMODULES) or\n not submodules_in_config):\n return False\n\n # Keep compatibility with previous projects\n code, out, _ = self.run('git', 'submodule', 'status', record=False)\n return code == 0 and bool(out)\n\n def validate_submodules(self, config):\n \"\"\"\n Returns the submodules and check that its URLs are valid.\n\n .. note::\n\n Allways call after `self.are_submodules_available`.\n\n :returns: tuple(bool, list)\n\n Returns true if all required submodules URLs are valid.\n Returns a list of all required submodules:\n - Include is `ALL`, returns all submodules avaliable.\n - Include is a list, returns just those.\n - Exclude is `ALL` - this should never happen.\n - Exlude is a list, returns all avaliable submodules\n but those from the list.\n \"\"\"\n repo = git.Repo(self.working_dir)\n submodules = {\n sub.path: sub\n for sub in repo.submodules\n }\n\n for sub_path in config.submodules.exclude:\n path = sub_path.rstrip('/')\n if path in submodules:\n del submodules[path]\n\n if config.submodules.include != ALL and config.submodules.include:\n submodules_include = {}\n for sub_path in config.submodules.include:\n path = sub_path.rstrip('/')\n submodules_include[path] = submodules[path]\n submodules = submodules_include\n\n for path, submodule in submodules.items():\n try:\n validate_submodule_url(submodule.url)\n except ValidationError:\n return False, []\n return True, submodules.keys()\n\n def fetch(self):\n code, _, _ = self.run(\n 'git', 'fetch', '--tags', '--prune', '--prune-tags',\n )\n if code != 0:\n raise RepositoryError\n\n def checkout_revision(self, revision=None):\n if not revision:\n branch = self.default_branch or self.fallback_branch\n revision = 'origin/%s' % branch\n\n code, out, err = self.run('git', 'checkout', '--force', revision)\n if code != 0:\n log.warning(\"Failed to checkout revision '%s': %s\", revision, code)\n return [code, out, err]\n\n def clone(self):\n \"\"\"\n Clone the repository.\n\n .. note::\n\n Temporarily, we support skipping submodule recursive clone via a\n feature flag. 
This will eventually be configurable with our YAML\n config.\n \"\"\"\n # TODO remove with https://github.com/rtfd/readthedocs-build/issues/30\n from readthedocs.projects.models import Feature\n cmd = ['git', 'clone']\n cmd.extend([self.repo_url, '.'])\n code, _, _ = self.run(*cmd)\n if code != 0:\n raise RepositoryError\n\n @property\n def tags(self):\n versions = []\n repo = git.Repo(self.working_dir)\n for tag in repo.tags:\n try:\n versions.append(VCSVersion(self, str(tag.commit), str(tag)))\n except ValueError as e:\n # ValueError: Cannot resolve commit as tag TAGNAME points to a\n # blob object - use the `.object` property instead to access it\n # This is not a real tag for us, so we skip it\n # https://github.com/rtfd/readthedocs.org/issues/4440\n log.warning('Git tag skipped: %s', tag, exc_info=True)\n continue\n return versions\n\n @property\n def branches(self):\n # Only show remote branches\n retcode, stdout, _ = self.run(\n 'git',\n 'branch',\n '-r',\n record_as_success=True,\n )\n # error (or no branches found)\n if retcode != 0:\n return []\n return self.parse_branches(stdout)\n\n def parse_branches(self, data):\n \"\"\"\n Parse output of git branch -r.\n\n e.g.:\n\n origin/2.0.X\n origin/HEAD -> origin/master\n origin/develop\n origin/master\n origin/release/2.0.0\n origin/release/2.1.0\n \"\"\"\n clean_branches = []\n # StringIO below is expecting Unicode data, so ensure that it gets it.\n if not isinstance(data, str):\n data = str(data)\n delimiter = str(' ').encode('utf-8') if PY2 else str(' ')\n raw_branches = csv.reader(StringIO(data), delimiter=delimiter)\n for branch in raw_branches:\n branch = [f for f in branch if f not in ('', '*')]\n # Handle empty branches\n if branch:\n branch = branch[0]\n if branch.startswith('origin/'):\n verbose_name = branch.replace('origin/', '')\n if verbose_name in ['HEAD']:\n continue\n clean_branches.append(\n VCSVersion(self, branch, verbose_name))\n else:\n clean_branches.append(VCSVersion(self, branch, branch))\n return clean_branches\n\n @property\n def commit(self):\n _, stdout, _ = self.run('git', 'rev-parse', 'HEAD')\n return stdout.strip()\n\n def checkout(self, identifier=None):\n self.check_working_dir()\n\n # Clone or update repository\n if self.repo_exists():\n self.set_remote_url(self.repo_url)\n self.fetch()\n else:\n self.make_clean_working_dir()\n self.clone()\n\n # Find proper identifier\n if not identifier:\n identifier = self.default_branch or self.fallback_branch\n\n identifier = self.find_ref(identifier)\n\n # Checkout the correct identifier for this branch.\n code, out, err = self.checkout_revision(identifier)\n if code != 0:\n return code, out, err\n\n # Clean any remains of previous checkouts\n self.run('git', 'clean', '-d', '-f', '-f')\n return code, out, err\n\n def update_submodules(self, config):\n if self.are_submodules_available(config):\n valid, submodules = self.validate_submodules(config)\n if valid:\n self.checkout_submodules(submodules, config)\n else:\n raise RepositoryError(RepositoryError.INVALID_SUBMODULES)\n\n def checkout_submodules(self, submodules, config):\n \"\"\"Checkout all repository submodules.\"\"\"\n self.run('git', 'submodule', 'sync')\n cmd = [\n 'git',\n 'submodule',\n 'update',\n '--init',\n '--force',\n ]\n if config.submodules.recursive:\n cmd.append('--recursive')\n cmd += submodules\n self.run(*cmd)\n\n def find_ref(self, ref):\n # Check if ref starts with 'origin/'\n if ref.startswith('origin/'):\n return ref\n\n # Check if ref is a branch of the origin remote\n if 
self.ref_exists('remotes/origin/' + ref):\n return 'origin/' + ref\n\n return ref\n\n def ref_exists(self, ref):\n try:\n r = git.Repo(self.working_dir)\n if r.commit(ref):\n return True\n except (BadName, ValueError):\n return False\n return False\n\n @property\n def env(self):\n env = super(Backend, self).env\n env['GIT_DIR'] = os.path.join(self.working_dir, '.git')\n # Don't prompt for username, this requires Git 2.3+\n env['GIT_TERMINAL_PROMPT'] = '0'\n return env\n", "path": "readthedocs/vcs_support/backends/git.py"}]}
| 3,390 | 147 |
| gh_patches_debug_25452 | rasdani/github-patches | git_diff | encode__httpx-566 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
scripts as a single top-level point of project automation.
Generally across the `encode` projects I've been using [GitHub's "Scripts to Rule Them All" pattern](https://github.blog/2015-06-30-scripts-to-rule-them-all/) as a consistent style for project testing, linting, publishing etc...
It's beneficial from my POV, because it means we can always just run `scripts/install` or `scripts/test` from *any* project and expect it to perform the same kind of thing, even if we're using different tooling between projects. (e.g. we've started adopting `nox` in some places, but it isn't there across the board yet.)
I'd like to do the same with `httpx`, while still keeping the niceness of `nox` (so e.g. reintroduce a `scripts/test`, `scripts/lint` and `scripts/publish`), but I figure it's worth raising this for discussion first.
I'd also be quite keen on having a fast-feedback test suite as the default, with nox's runs against multiple environments as an explicit case.
Any general thoughts here or should I just dive in and take a crack at this?
--- END ISSUE ---
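As a rough sketch of the pattern being discussed (the `scripts/test` name and the plain pytest invocation are assumptions, not something that exists in the repository below):
```python
#!/usr/bin/env python
# Hypothetical scripts/test: run the fast-feedback suite by default and
# forward any extra arguments to pytest; multi-environment runs stay explicit.
import subprocess
import sys


def main() -> int:
    cmd = [sys.executable, "-m", "pytest", *sys.argv[1:]]
    return subprocess.call(cmd)


if __name__ == "__main__":
    sys.exit(main())
```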
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `noxfile.py`
Content:
```
1 import nox
2
3 nox.options.stop_on_first_error = True
4 nox.options.reuse_existing_virtualenvs = True
5 nox.options.keywords = "not serve"
6
7 source_files = ("httpx", "tools", "tests", "setup.py", "noxfile.py")
8
9
10 @nox.session
11 def lint(session):
12 session.install(
13 "--upgrade", "autoflake", "black", "flake8", "isort", "seed-isort-config"
14 )
15
16 session.run("autoflake", "--in-place", "--recursive", *source_files)
17 session.run("seed-isort-config", "--application-directories=httpx")
18 session.run("isort", "--project=httpx", "--recursive", "--apply", *source_files)
19 session.run("black", "--target-version=py36", *source_files)
20
21 check(session)
22
23
24 @nox.session
25 def check(session):
26 session.install(
27 "--upgrade", "black", "flake8", "flake8-bugbear", "flake8-pie", "isort", "mypy"
28 )
29
30 session.run("black", "--check", "--diff", "--target-version=py36", *source_files)
31 session.run("flake8", *source_files)
32 session.run("mypy", "httpx")
33 session.run(
34 "isort", "--check", "--diff", "--project=httpx", "--recursive", *source_files
35 )
36
37
38 @nox.session
39 def docs(session):
40 session.install("--upgrade", "mkdocs", "mkdocs-material", "mkautodoc>=0.1.0")
41 session.install("-e", ".")
42 session.run("mkdocs", "build")
43
44
45 @nox.session(reuse_venv=True)
46 def serve(session):
47 session.install("--upgrade", "mkdocs", "mkdocs-material")
48
49 session.run("mkdocs", "serve")
50
51
52 @nox.session(python=["3.6", "3.7", "3.8"])
53 def test(session):
54 session.install("--upgrade", "-r", "test-requirements.txt")
55 session.run("python", "-m", "pytest", *session.posargs)
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/noxfile.py b/noxfile.py
deleted file mode 100644
--- a/noxfile.py
+++ /dev/null
@@ -1,55 +0,0 @@
-import nox
-
-nox.options.stop_on_first_error = True
-nox.options.reuse_existing_virtualenvs = True
-nox.options.keywords = "not serve"
-
-source_files = ("httpx", "tools", "tests", "setup.py", "noxfile.py")
-
-
[email protected]
-def lint(session):
- session.install(
- "--upgrade", "autoflake", "black", "flake8", "isort", "seed-isort-config"
- )
-
- session.run("autoflake", "--in-place", "--recursive", *source_files)
- session.run("seed-isort-config", "--application-directories=httpx")
- session.run("isort", "--project=httpx", "--recursive", "--apply", *source_files)
- session.run("black", "--target-version=py36", *source_files)
-
- check(session)
-
-
[email protected]
-def check(session):
- session.install(
- "--upgrade", "black", "flake8", "flake8-bugbear", "flake8-pie", "isort", "mypy"
- )
-
- session.run("black", "--check", "--diff", "--target-version=py36", *source_files)
- session.run("flake8", *source_files)
- session.run("mypy", "httpx")
- session.run(
- "isort", "--check", "--diff", "--project=httpx", "--recursive", *source_files
- )
-
-
[email protected]
-def docs(session):
- session.install("--upgrade", "mkdocs", "mkdocs-material", "mkautodoc>=0.1.0")
- session.install("-e", ".")
- session.run("mkdocs", "build")
-
-
[email protected](reuse_venv=True)
-def serve(session):
- session.install("--upgrade", "mkdocs", "mkdocs-material")
-
- session.run("mkdocs", "serve")
-
-
[email protected](python=["3.6", "3.7", "3.8"])
-def test(session):
- session.install("--upgrade", "-r", "test-requirements.txt")
- session.run("python", "-m", "pytest", *session.posargs)
|
{"golden_diff": "diff --git a/noxfile.py b/noxfile.py\ndeleted file mode 100644\n--- a/noxfile.py\n+++ /dev/null\n@@ -1,55 +0,0 @@\n-import nox\n-\n-nox.options.stop_on_first_error = True\n-nox.options.reuse_existing_virtualenvs = True\n-nox.options.keywords = \"not serve\"\n-\n-source_files = (\"httpx\", \"tools\", \"tests\", \"setup.py\", \"noxfile.py\")\n-\n-\[email protected]\n-def lint(session):\n- session.install(\n- \"--upgrade\", \"autoflake\", \"black\", \"flake8\", \"isort\", \"seed-isort-config\"\n- )\n-\n- session.run(\"autoflake\", \"--in-place\", \"--recursive\", *source_files)\n- session.run(\"seed-isort-config\", \"--application-directories=httpx\")\n- session.run(\"isort\", \"--project=httpx\", \"--recursive\", \"--apply\", *source_files)\n- session.run(\"black\", \"--target-version=py36\", *source_files)\n-\n- check(session)\n-\n-\[email protected]\n-def check(session):\n- session.install(\n- \"--upgrade\", \"black\", \"flake8\", \"flake8-bugbear\", \"flake8-pie\", \"isort\", \"mypy\"\n- )\n-\n- session.run(\"black\", \"--check\", \"--diff\", \"--target-version=py36\", *source_files)\n- session.run(\"flake8\", *source_files)\n- session.run(\"mypy\", \"httpx\")\n- session.run(\n- \"isort\", \"--check\", \"--diff\", \"--project=httpx\", \"--recursive\", *source_files\n- )\n-\n-\[email protected]\n-def docs(session):\n- session.install(\"--upgrade\", \"mkdocs\", \"mkdocs-material\", \"mkautodoc>=0.1.0\")\n- session.install(\"-e\", \".\")\n- session.run(\"mkdocs\", \"build\")\n-\n-\[email protected](reuse_venv=True)\n-def serve(session):\n- session.install(\"--upgrade\", \"mkdocs\", \"mkdocs-material\")\n-\n- session.run(\"mkdocs\", \"serve\")\n-\n-\[email protected](python=[\"3.6\", \"3.7\", \"3.8\"])\n-def test(session):\n- session.install(\"--upgrade\", \"-r\", \"test-requirements.txt\")\n- session.run(\"python\", \"-m\", \"pytest\", *session.posargs)\n", "issue": "scripts as a single top-level point of project automation.\nGenerally across the `encode` projects I've been using [GitHub's \"Scripts to Rule Them All\" pattern](https://github.blog/2015-06-30-scripts-to-rule-them-all/) as a consistent style for project testing, linting, publishing etc...\r\n\r\nIt's benenficial from my POV, because it ends up that we can always just run `scripts/install`, or `scripts/test` from *any* project and expect that to perform the same kind of thing, even if we're using different tooling between projects. (eg. started adopting `nox` in some places, but not yet there accross the board.)\r\n\r\nI'd like to do the same with `httpx`, while still keeping the niceness of `nox` (So eg. reintroduce a `scripts/test`, `scripts/lint` and `scripts/publish`.) 
but figure it's worth raising this for discussion first.\r\n\r\nI'd also be quite keen on having a fast-feedback test suite being the default, with nox's run against multiple environments as an explicit case?\r\n\r\nAny general thoughts here or should I just dive in and take a crack at this?\n", "before_files": [{"content": "import nox\n\nnox.options.stop_on_first_error = True\nnox.options.reuse_existing_virtualenvs = True\nnox.options.keywords = \"not serve\"\n\nsource_files = (\"httpx\", \"tools\", \"tests\", \"setup.py\", \"noxfile.py\")\n\n\[email protected]\ndef lint(session):\n session.install(\n \"--upgrade\", \"autoflake\", \"black\", \"flake8\", \"isort\", \"seed-isort-config\"\n )\n\n session.run(\"autoflake\", \"--in-place\", \"--recursive\", *source_files)\n session.run(\"seed-isort-config\", \"--application-directories=httpx\")\n session.run(\"isort\", \"--project=httpx\", \"--recursive\", \"--apply\", *source_files)\n session.run(\"black\", \"--target-version=py36\", *source_files)\n\n check(session)\n\n\[email protected]\ndef check(session):\n session.install(\n \"--upgrade\", \"black\", \"flake8\", \"flake8-bugbear\", \"flake8-pie\", \"isort\", \"mypy\"\n )\n\n session.run(\"black\", \"--check\", \"--diff\", \"--target-version=py36\", *source_files)\n session.run(\"flake8\", *source_files)\n session.run(\"mypy\", \"httpx\")\n session.run(\n \"isort\", \"--check\", \"--diff\", \"--project=httpx\", \"--recursive\", *source_files\n )\n\n\[email protected]\ndef docs(session):\n session.install(\"--upgrade\", \"mkdocs\", \"mkdocs-material\", \"mkautodoc>=0.1.0\")\n session.install(\"-e\", \".\")\n session.run(\"mkdocs\", \"build\")\n\n\[email protected](reuse_venv=True)\ndef serve(session):\n session.install(\"--upgrade\", \"mkdocs\", \"mkdocs-material\")\n\n session.run(\"mkdocs\", \"serve\")\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\"])\ndef test(session):\n session.install(\"--upgrade\", \"-r\", \"test-requirements.txt\")\n session.run(\"python\", \"-m\", \"pytest\", *session.posargs)\n", "path": "noxfile.py"}], "after_files": [{"content": null, "path": "noxfile.py"}]}
| 1,074 | 538 |
| gh_patches_debug_8868 | rasdani/github-patches | git_diff | tensorflow__addons-1738 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dropping support for tensorflow < 2.2.
We have three pull requests that need TF 2.2 and above: #1733, #1726, #905.
They are blocked until we drop support. Since Addons 0.9 was released with support for both TF 2.1 and 2.2, I don't think it's going to be a very big inconvenience for users if we drop support for 2.1 now; they have a good fallback.
--- END ISSUE ---
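For illustration, a minimal sketch of the version gate this implies, using the 2.2.0 to 2.3.0 bounds from the proposed change (the `is_supported` helper is hypothetical; the real constants live in the files below):
```python
# Sketch only: the range check implied by dropping TF 2.1 support.
from distutils.version import LooseVersion

MIN_TF_VERSION = "2.2.0"  # inclusive lower bound
MAX_TF_VERSION = "2.3.0"  # exclusive upper bound


def is_supported(tf_version: str) -> bool:
    if "dev" in tf_version:  # nightlies are not covered by the check
        return False
    return (
        LooseVersion(MIN_TF_VERSION)
        <= LooseVersion(tf_version)
        < LooseVersion(MAX_TF_VERSION)
    )
```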
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tensorflow_addons/utils/resource_loader.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Utilities similar to tf.python.platform.resource_loader."""
16
17 from distutils.version import LooseVersion
18 import os
19 import warnings
20
21 import tensorflow as tf
22
23 MIN_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.1.0"
24 MAX_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.2.0"
25 abi_warning_already_raised = False
26 SKIP_CUSTOM_OPS = False
27
28
29 def get_project_root():
30 """Returns project root folder."""
31 return os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
32
33
34 def get_path_to_datafile(path):
35 """Get the path to the specified file in the data dependencies.
36
37 The path is relative to tensorflow_addons/
38
39 Args:
40 path: a string resource path relative to tensorflow_addons/
41 Returns:
42 The path to the specified data file
43 """
44 root_dir = get_project_root()
45 return os.path.join(root_dir, path.replace("/", os.sep))
46
47
48 class LazySO:
49 def __init__(self, relative_path):
50 self.relative_path = relative_path
51 self._ops = None
52
53 @property
54 def ops(self):
55 if SKIP_CUSTOM_OPS:
56 import pytest
57
58 pytest.skip(
59 "Skipping the test because a custom ops "
60 "was being loaded while --skip-custom-ops was set."
61 )
62 if self._ops is None:
63 self.display_warning_if_incompatible()
64 self._ops = tf.load_op_library(get_path_to_datafile(self.relative_path))
65 return self._ops
66
67 def display_warning_if_incompatible(self):
68 global abi_warning_already_raised
69 if abi_is_compatible() or abi_warning_already_raised:
70 return
71
72 warnings.warn(
73 "You are currently using TensorFlow {} and trying to load a custom op ({})."
74 "\n"
75 "TensorFlow Addons has compiled its custom ops against TensorFlow {}, "
76 "and there are no compatibility guarantees between the two versions. "
77 "\n"
78 "This means that you might get segfaults when loading the custom op, "
79 "or other kind of low-level errors.\n If you do, do not file an issue "
80 "on Github. This is a known limitation."
81 "\n\n"
82 "It might help you to fallback to pure Python "
83 "ops with TF_ADDONS_PY_OPS . To do that, see "
84 "https://github.com/tensorflow/addons#gpucpu-custom-ops "
85 "\n\n"
86 "You can also change the TensorFlow version installed on your system. "
87 "You would need a TensorFlow version equal to or above {} and strictly "
88 "below {}.\n Note that nightly versions of TensorFlow, "
89 "as well as non-pip TensorFlow like `conda install tensorflow` or compiled "
90 "from source are not supported."
91 "\n\n"
92 "The last solution is to find the TensorFlow Addons version that has "
93 "custom ops compatible with the TensorFlow installed on your "
94 "system. To do that, refer to the readme: "
95 "https://github.com/tensorflow/addons"
96 "".format(
97 tf.__version__,
98 self.relative_path,
99 MIN_TF_VERSION_FOR_ABI_COMPATIBILITY,
100 MIN_TF_VERSION_FOR_ABI_COMPATIBILITY,
101 MAX_TF_VERSION_FOR_ABI_COMPATIBILITY,
102 ),
103 UserWarning,
104 )
105 abi_warning_already_raised = True
106
107
108 def abi_is_compatible():
109 if "dev" in tf.__version__:
110 # tf-nightly
111 return False
112
113 min_version = LooseVersion(MIN_TF_VERSION_FOR_ABI_COMPATIBILITY)
114 max_version = LooseVersion(MAX_TF_VERSION_FOR_ABI_COMPATIBILITY)
115 return min_version <= LooseVersion(tf.__version__) < max_version
116
```
Path: `tensorflow_addons/utils/ensure_tf_install.py`
Content:
```
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15
16
17 # Ensure the TensorFlow version is in the right range. This
18 # needs to happen before anything else, since the imports below will try to
19 # import TensorFlow, too.
20
21 from distutils.version import LooseVersion
22 import warnings
23
24 import tensorflow as tf
25
26 MIN_TF_VERSION = "2.1.0"
27 MAX_TF_VERSION = "2.3.0"
28
29
30 def _check_tf_version():
31 """Warn the user if the version of TensorFlow used is not supported.
32
33 This is not a check for custom ops compatibility. This check only ensure that
34 we support this TensorFlow version if the user uses only Addons' Python code.
35 """
36
37 if "dev" in tf.__version__:
38 warnings.warn(
39 "You are currently using a nightly version of TensorFlow ({}). \n"
40 "TensorFlow Addons offers no support for the nightly versions of "
41 "TensorFlow. Some things might work, some other might not. \n"
42 "If you encounter a bug, do not file an issue on GitHub."
43 "".format(tf.__version__),
44 UserWarning,
45 )
46 return
47
48 min_version = LooseVersion(MIN_TF_VERSION)
49 max_version = LooseVersion(MAX_TF_VERSION)
50
51 if min_version <= LooseVersion(tf.__version__) < max_version:
52 return
53
54 warnings.warn(
55 "Tensorflow Addons supports using Python ops for all Tensorflow versions "
56 "above or equal to {} and strictly below {} (nightly versions are not "
57 "supported). \n "
58 "The versions of TensorFlow you are currently using is {} and is not "
59 "supported. \n"
60 "Some things might work, some things might not.\n"
61 "If you were to encounter a bug, do not file an issue.\n"
62 "If you want to make sure you're using a tested and supported configuration, "
63 "either change the TensorFlow version or the TensorFlow Addons's version. \n"
64 "You can find the compatibility matrix in TensorFlow Addon's readme:\n"
65 "https://github.com/tensorflow/addons".format(
66 MIN_TF_VERSION, MAX_TF_VERSION, tf.__version__
67 ),
68 UserWarning,
69 )
70
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tensorflow_addons/utils/ensure_tf_install.py b/tensorflow_addons/utils/ensure_tf_install.py
--- a/tensorflow_addons/utils/ensure_tf_install.py
+++ b/tensorflow_addons/utils/ensure_tf_install.py
@@ -23,7 +23,7 @@
import tensorflow as tf
-MIN_TF_VERSION = "2.1.0"
+MIN_TF_VERSION = "2.2.0"
MAX_TF_VERSION = "2.3.0"
diff --git a/tensorflow_addons/utils/resource_loader.py b/tensorflow_addons/utils/resource_loader.py
--- a/tensorflow_addons/utils/resource_loader.py
+++ b/tensorflow_addons/utils/resource_loader.py
@@ -20,8 +20,8 @@
import tensorflow as tf
-MIN_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.1.0"
-MAX_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.2.0"
+MIN_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.2.0"
+MAX_TF_VERSION_FOR_ABI_COMPATIBILITY = "2.3.0"
abi_warning_already_raised = False
SKIP_CUSTOM_OPS = False
|
{"golden_diff": "diff --git a/tensorflow_addons/utils/ensure_tf_install.py b/tensorflow_addons/utils/ensure_tf_install.py\n--- a/tensorflow_addons/utils/ensure_tf_install.py\n+++ b/tensorflow_addons/utils/ensure_tf_install.py\n@@ -23,7 +23,7 @@\n \n import tensorflow as tf\n \n-MIN_TF_VERSION = \"2.1.0\"\n+MIN_TF_VERSION = \"2.2.0\"\n MAX_TF_VERSION = \"2.3.0\"\n \n \ndiff --git a/tensorflow_addons/utils/resource_loader.py b/tensorflow_addons/utils/resource_loader.py\n--- a/tensorflow_addons/utils/resource_loader.py\n+++ b/tensorflow_addons/utils/resource_loader.py\n@@ -20,8 +20,8 @@\n \n import tensorflow as tf\n \n-MIN_TF_VERSION_FOR_ABI_COMPATIBILITY = \"2.1.0\"\n-MAX_TF_VERSION_FOR_ABI_COMPATIBILITY = \"2.2.0\"\n+MIN_TF_VERSION_FOR_ABI_COMPATIBILITY = \"2.2.0\"\n+MAX_TF_VERSION_FOR_ABI_COMPATIBILITY = \"2.3.0\"\n abi_warning_already_raised = False\n SKIP_CUSTOM_OPS = False\n", "issue": "Dropping support for tensorflow < 2.2.\nWe have three pull requests that need TF 2.2 and above: #1733 #1726 #905 \r\n\r\nThey are blocked until we drop support, since Addons 0.9 was released with support for both tf 2.1 and 2.2, I don't think it's going to be a very big inconvenience for users if we drop support for 2.1 now. They have a good fallback.\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Utilities similar to tf.python.platform.resource_loader.\"\"\"\n\nfrom distutils.version import LooseVersion\nimport os\nimport warnings\n\nimport tensorflow as tf\n\nMIN_TF_VERSION_FOR_ABI_COMPATIBILITY = \"2.1.0\"\nMAX_TF_VERSION_FOR_ABI_COMPATIBILITY = \"2.2.0\"\nabi_warning_already_raised = False\nSKIP_CUSTOM_OPS = False\n\n\ndef get_project_root():\n \"\"\"Returns project root folder.\"\"\"\n return os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n\ndef get_path_to_datafile(path):\n \"\"\"Get the path to the specified file in the data dependencies.\n\n The path is relative to tensorflow_addons/\n\n Args:\n path: a string resource path relative to tensorflow_addons/\n Returns:\n The path to the specified data file\n \"\"\"\n root_dir = get_project_root()\n return os.path.join(root_dir, path.replace(\"/\", os.sep))\n\n\nclass LazySO:\n def __init__(self, relative_path):\n self.relative_path = relative_path\n self._ops = None\n\n @property\n def ops(self):\n if SKIP_CUSTOM_OPS:\n import pytest\n\n pytest.skip(\n \"Skipping the test because a custom ops \"\n \"was being loaded while --skip-custom-ops was set.\"\n )\n if self._ops is None:\n self.display_warning_if_incompatible()\n self._ops = tf.load_op_library(get_path_to_datafile(self.relative_path))\n return self._ops\n\n def display_warning_if_incompatible(self):\n global abi_warning_already_raised\n if abi_is_compatible() or abi_warning_already_raised:\n return\n\n warnings.warn(\n \"You are currently using TensorFlow {} and trying to load a custom op 
({}).\"\n \"\\n\"\n \"TensorFlow Addons has compiled its custom ops against TensorFlow {}, \"\n \"and there are no compatibility guarantees between the two versions. \"\n \"\\n\"\n \"This means that you might get segfaults when loading the custom op, \"\n \"or other kind of low-level errors.\\n If you do, do not file an issue \"\n \"on Github. This is a known limitation.\"\n \"\\n\\n\"\n \"It might help you to fallback to pure Python \"\n \"ops with TF_ADDONS_PY_OPS . To do that, see \"\n \"https://github.com/tensorflow/addons#gpucpu-custom-ops \"\n \"\\n\\n\"\n \"You can also change the TensorFlow version installed on your system. \"\n \"You would need a TensorFlow version equal to or above {} and strictly \"\n \"below {}.\\n Note that nightly versions of TensorFlow, \"\n \"as well as non-pip TensorFlow like `conda install tensorflow` or compiled \"\n \"from source are not supported.\"\n \"\\n\\n\"\n \"The last solution is to find the TensorFlow Addons version that has \"\n \"custom ops compatible with the TensorFlow installed on your \"\n \"system. To do that, refer to the readme: \"\n \"https://github.com/tensorflow/addons\"\n \"\".format(\n tf.__version__,\n self.relative_path,\n MIN_TF_VERSION_FOR_ABI_COMPATIBILITY,\n MIN_TF_VERSION_FOR_ABI_COMPATIBILITY,\n MAX_TF_VERSION_FOR_ABI_COMPATIBILITY,\n ),\n UserWarning,\n )\n abi_warning_already_raised = True\n\n\ndef abi_is_compatible():\n if \"dev\" in tf.__version__:\n # tf-nightly\n return False\n\n min_version = LooseVersion(MIN_TF_VERSION_FOR_ABI_COMPATIBILITY)\n max_version = LooseVersion(MAX_TF_VERSION_FOR_ABI_COMPATIBILITY)\n return min_version <= LooseVersion(tf.__version__) < max_version\n", "path": "tensorflow_addons/utils/resource_loader.py"}, {"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\n\n# Ensure the TensorFlow version is in the right range. This\n# needs to happen before anything else, since the imports below will try to\n# import TensorFlow, too.\n\nfrom distutils.version import LooseVersion\nimport warnings\n\nimport tensorflow as tf\n\nMIN_TF_VERSION = \"2.1.0\"\nMAX_TF_VERSION = \"2.3.0\"\n\n\ndef _check_tf_version():\n \"\"\"Warn the user if the version of TensorFlow used is not supported.\n\n This is not a check for custom ops compatibility. This check only ensure that\n we support this TensorFlow version if the user uses only Addons' Python code.\n \"\"\"\n\n if \"dev\" in tf.__version__:\n warnings.warn(\n \"You are currently using a nightly version of TensorFlow ({}). \\n\"\n \"TensorFlow Addons offers no support for the nightly versions of \"\n \"TensorFlow. Some things might work, some other might not. 
\\n\"\n \"If you encounter a bug, do not file an issue on GitHub.\"\n \"\".format(tf.__version__),\n UserWarning,\n )\n return\n\n min_version = LooseVersion(MIN_TF_VERSION)\n max_version = LooseVersion(MAX_TF_VERSION)\n\n if min_version <= LooseVersion(tf.__version__) < max_version:\n return\n\n warnings.warn(\n \"Tensorflow Addons supports using Python ops for all Tensorflow versions \"\n \"above or equal to {} and strictly below {} (nightly versions are not \"\n \"supported). \\n \"\n \"The versions of TensorFlow you are currently using is {} and is not \"\n \"supported. \\n\"\n \"Some things might work, some things might not.\\n\"\n \"If you were to encounter a bug, do not file an issue.\\n\"\n \"If you want to make sure you're using a tested and supported configuration, \"\n \"either change the TensorFlow version or the TensorFlow Addons's version. \\n\"\n \"You can find the compatibility matrix in TensorFlow Addon's readme:\\n\"\n \"https://github.com/tensorflow/addons\".format(\n MIN_TF_VERSION, MAX_TF_VERSION, tf.__version__\n ),\n UserWarning,\n )\n", "path": "tensorflow_addons/utils/ensure_tf_install.py"}], "after_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Utilities similar to tf.python.platform.resource_loader.\"\"\"\n\nfrom distutils.version import LooseVersion\nimport os\nimport warnings\n\nimport tensorflow as tf\n\nMIN_TF_VERSION_FOR_ABI_COMPATIBILITY = \"2.2.0\"\nMAX_TF_VERSION_FOR_ABI_COMPATIBILITY = \"2.3.0\"\nabi_warning_already_raised = False\nSKIP_CUSTOM_OPS = False\n\n\ndef get_project_root():\n \"\"\"Returns project root folder.\"\"\"\n return os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n\ndef get_path_to_datafile(path):\n \"\"\"Get the path to the specified file in the data dependencies.\n\n The path is relative to tensorflow_addons/\n\n Args:\n path: a string resource path relative to tensorflow_addons/\n Returns:\n The path to the specified data file\n \"\"\"\n root_dir = get_project_root()\n return os.path.join(root_dir, path.replace(\"/\", os.sep))\n\n\nclass LazySO:\n def __init__(self, relative_path):\n self.relative_path = relative_path\n self._ops = None\n\n @property\n def ops(self):\n if SKIP_CUSTOM_OPS:\n import pytest\n\n pytest.skip(\n \"Skipping the test because a custom ops \"\n \"was being loaded while --skip-custom-ops was set.\"\n )\n if self._ops is None:\n self.display_warning_if_incompatible()\n self._ops = tf.load_op_library(get_path_to_datafile(self.relative_path))\n return self._ops\n\n def display_warning_if_incompatible(self):\n global abi_warning_already_raised\n if abi_is_compatible() or abi_warning_already_raised:\n return\n\n warnings.warn(\n \"You are currently using TensorFlow {} and trying to load a custom op ({}).\"\n \"\\n\"\n \"TensorFlow Addons has compiled its custom ops against TensorFlow {}, \"\n \"and there are no compatibility 
guarantees between the two versions. \"\n \"\\n\"\n \"This means that you might get segfaults when loading the custom op, \"\n \"or other kind of low-level errors.\\n If you do, do not file an issue \"\n \"on Github. This is a known limitation.\"\n \"\\n\\n\"\n \"It might help you to fallback to pure Python \"\n \"ops with TF_ADDONS_PY_OPS . To do that, see \"\n \"https://github.com/tensorflow/addons#gpucpu-custom-ops \"\n \"\\n\\n\"\n \"You can also change the TensorFlow version installed on your system. \"\n \"You would need a TensorFlow version equal to or above {} and strictly \"\n \"below {}.\\n Note that nightly versions of TensorFlow, \"\n \"as well as non-pip TensorFlow like `conda install tensorflow` or compiled \"\n \"from source are not supported.\"\n \"\\n\\n\"\n \"The last solution is to find the TensorFlow Addons version that has \"\n \"custom ops compatible with the TensorFlow installed on your \"\n \"system. To do that, refer to the readme: \"\n \"https://github.com/tensorflow/addons\"\n \"\".format(\n tf.__version__,\n self.relative_path,\n MIN_TF_VERSION_FOR_ABI_COMPATIBILITY,\n MIN_TF_VERSION_FOR_ABI_COMPATIBILITY,\n MAX_TF_VERSION_FOR_ABI_COMPATIBILITY,\n ),\n UserWarning,\n )\n abi_warning_already_raised = True\n\n\ndef abi_is_compatible():\n if \"dev\" in tf.__version__:\n # tf-nightly\n return False\n\n min_version = LooseVersion(MIN_TF_VERSION_FOR_ABI_COMPATIBILITY)\n max_version = LooseVersion(MAX_TF_VERSION_FOR_ABI_COMPATIBILITY)\n return min_version <= LooseVersion(tf.__version__) < max_version\n", "path": "tensorflow_addons/utils/resource_loader.py"}, {"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\n\n# Ensure the TensorFlow version is in the right range. This\n# needs to happen before anything else, since the imports below will try to\n# import TensorFlow, too.\n\nfrom distutils.version import LooseVersion\nimport warnings\n\nimport tensorflow as tf\n\nMIN_TF_VERSION = \"2.2.0\"\nMAX_TF_VERSION = \"2.3.0\"\n\n\ndef _check_tf_version():\n \"\"\"Warn the user if the version of TensorFlow used is not supported.\n\n This is not a check for custom ops compatibility. This check only ensure that\n we support this TensorFlow version if the user uses only Addons' Python code.\n \"\"\"\n\n if \"dev\" in tf.__version__:\n warnings.warn(\n \"You are currently using a nightly version of TensorFlow ({}). \\n\"\n \"TensorFlow Addons offers no support for the nightly versions of \"\n \"TensorFlow. Some things might work, some other might not. 
\\n\"\n \"If you encounter a bug, do not file an issue on GitHub.\"\n \"\".format(tf.__version__),\n UserWarning,\n )\n return\n\n min_version = LooseVersion(MIN_TF_VERSION)\n max_version = LooseVersion(MAX_TF_VERSION)\n\n if min_version <= LooseVersion(tf.__version__) < max_version:\n return\n\n warnings.warn(\n \"Tensorflow Addons supports using Python ops for all Tensorflow versions \"\n \"above or equal to {} and strictly below {} (nightly versions are not \"\n \"supported). \\n \"\n \"The versions of TensorFlow you are currently using is {} and is not \"\n \"supported. \\n\"\n \"Some things might work, some things might not.\\n\"\n \"If you were to encounter a bug, do not file an issue.\\n\"\n \"If you want to make sure you're using a tested and supported configuration, \"\n \"either change the TensorFlow version or the TensorFlow Addons's version. \\n\"\n \"You can find the compatibility matrix in TensorFlow Addon's readme:\\n\"\n \"https://github.com/tensorflow/addons\".format(\n MIN_TF_VERSION, MAX_TF_VERSION, tf.__version__\n ),\n UserWarning,\n )\n", "path": "tensorflow_addons/utils/ensure_tf_install.py"}]}
| 2,296 | 262 |
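The patch in the record above raises the minimum supported TensorFlow from 2.1.0 to 2.2.0 and aligns the ABI bounds with it. For illustration, here is a minimal standalone sketch of the half-open version-range check those modules rely on, using `distutils.version.LooseVersion` as the shown files do (the bounds are the ones from the patch; `packaging.version` would be the usual substitute on newer Pythons where `distutils` is deprecated):

```python
# Half-open range check mirroring the patched modules above:
# a release passes if MIN <= version < MAX, and nightlies never pass.
from distutils.version import LooseVersion

MIN_TF_VERSION = "2.2.0"   # inclusive lower bound, per the patch
MAX_TF_VERSION = "2.3.0"   # exclusive upper bound

def in_supported_range(version):
    if "dev" in version:  # tf-nightly style versions are rejected outright
        return False
    return (LooseVersion(MIN_TF_VERSION)
            <= LooseVersion(version)
            < LooseVersion(MAX_TF_VERSION))

if __name__ == "__main__":
    for v in ("2.1.0", "2.2.0", "2.2.1", "2.3.0", "2.3.0-dev20200501"):
        print(v, in_supported_range(v))
```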
gh_patches_debug_1770
|
rasdani/github-patches
|
git_diff
|
weecology__retriever-677
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Download only fails for `gwdd`
```
ethan@gandalf:~$ retriever download gwdd
=> Installing Zanne et al. Global wood density database.
Creating database GWDD...
Couldn't create database ('NoneType' object has no attribute 'execute'). Trying to continue anyway.
'table_name'
```
And the file doesn't appear.
From `try_install_all.py`:
```
('download', 'gwdd', KeyError('table_name',))
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/gwdd.py`
Content:
```
1 #retriever
2 """Retriever script for Zanne et al. Global wood density database.
3
4 """
5 from builtins import str
6 from builtins import range
7
8 import os
9 import sys
10 import xlrd
11 from retriever.lib.templates import Script
12 from retriever.lib.models import Table
13 from retriever.lib.excel import Excel
14
15
16
17 class main(Script):
18 def __init__(self, **kwargs):
19 Script.__init__(self, **kwargs)
20 self.name = "Zanne et al. Global wood density database."
21 self.shortname = "GWDD"
22 self.retriever_minimum_version = '2.0.0-dev'
23 self.script_version = '1.0'
24 self.urls = {"GWDD": "http://datadryad.org/bitstream/handle/10255/dryad.235/GlobalWoodDensityDatabase.xls?sequence=1"}
25 self.tags = ["Taxon > Plants", "Spatial Scale > Global",
26 "Data Type > Observational"]
27 self.ref = "http://datadryad.org/resource/doi:10.5061/dryad.234"
28 self.description = "A collection and collation of data on the major wood functional traits, including the largest wood density database to date (8412 taxa), mechanical strength measures and anatomical features, as well as clade-specific features such as secondary chemistry."
29 self.citation = "Chave J, Coomes DA, Jansen S, Lewis SL, Swenson NG, Zanne AE (2009) Towards a worldwide wood economics spectrum. Ecology Letters 12(4): 351-366. http://dx.doi.org/10.1111/j.1461-0248.2009.01285.x"
30 self.addendum = """ *Correspondence for updates to the database: [email protected]
31 For descriptions of the database, see Chave et al. 2009. Towards a worldwide wood economics spectrum. Ecology Letters. Identifier: http://hdl.handle.net/10255/dryad.234
32
33 Below we list the rules of use for the Global wood density database.
34 These are developed based on the rules of use for the Glopnet dataset (www.nature.com/nature/journal/v428/n6985/full/nature02403.html) and Cedar Creek LTER and Related Data (http://www.lter.umn.edu/cgi-bin/register).
35 If you would like to use the Global wood density database, we request that you:
36 1. Notify the main address of correspondence (Gaby Lopez-Gonzalo) if you plan to use the database in a publication.
37 2. Provide recognition of the efforts of this group in the assembly of the data by using the citation for the database above.
38 3. Recognize that these data were assembled by the group for various analyses and research questions. If any of these uses overlap with your interests, you recognize that group has precedence in addressing these questions."""
39
40 def download(self, engine=None, debug=False):
41 Script.download(self, engine, debug)
42
43 self.engine.download_file(self.urls["GWDD"], "GlobalWoodDensityDatabase.xls")
44 filename = os.path.basename("GlobalWoodDensityDatabase.xls")
45
46 book = xlrd.open_workbook(self.engine.format_filename(filename))
47 sh = book.sheet_by_index(1)
48 rows = sh.nrows
49
50 #Creating data table
51 lines = []
52 for i in range(1, rows):
53 row = sh.row(i)
54 if not all(Excel.empty_cell(cell) for cell in row):
55 this_line = {}
56 def format_value(s):
57 s = Excel.cell_value(s)
58 return str(s).title().replace("\\", "/").replace('"', '')
59 for num, label in enumerate(["Number", "Family", "Binomial", "Wood_Density",
60 "Region", "Reference_Number"]):
61 this_line[label] = format_value(row[num])
62 lines.append(this_line)
63
64 table = Table("data", delimiter="\t")
65 table.columns=[("Number" , ("pk-int",) ),
66 ("Family" , ("char",) ),
67 ("Binomial" , ("char",) ),
68 ("Wood_Density" , ("double",) ),
69 ("Region" , ("char",) ),
70 ("Reference_Number" , ("int",) )]
71 table.pk = 'Number'
72 table.contains_pk = True
73
74 gwdd = []
75 for line in lines:
76 gwdd_data = [line["Number"],
77 line["Family"],
78 line["Binomial"],
79 line["Wood_Density"],
80 line["Region"],
81 line["Reference_Number"]]
82 gwdd.append(gwdd_data)
83
84 data = ['\t'.join(gwdd_line) for gwdd_line in gwdd]
85 self.engine.table = table
86 self.engine.create_table()
87 self.engine.add_to_table(data)
88
89 #Creating reference table
90 lines = []
91 sh = book.sheet_by_index(2)
92 rows = sh.nrows
93 for i in range(1, rows):
94 row = sh.row(i)
95 if not all(Excel.empty_cell(cell) for cell in row):
96 this_line = {}
97 def format_value(s):
98 s = Excel.cell_value(s)
99 return str(s).title().replace("\\", "/").replace('"', '')
100 for num, label in enumerate(["Reference_Number", "Reference"]):
101 this_line[label] = format_value(row[num])
102 lines.append(this_line)
103
104 table = Table("reference", delimiter="\t")
105 table.columns=[("Reference_Number" , ("pk-int",) ),
106 ("Reference" , ("char",) )]
107 table.pk = 'Reference_Number'
108 table.contains_pk = True
109
110 gwdd = []
111 for line in lines:
112 gwdd_ref = [line["Reference_Number"],
113 line["Reference"]]
114 gwdd.append(gwdd_ref)
115
116 data = ['\t'.join(gwdd_line) for gwdd_line in gwdd]
117 self.engine.table = table
118 self.engine.create_table()
119 self.engine.add_to_table(data)
120
121 return self.engine
122
123 SCRIPT = main()
124
125
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/gwdd.py b/scripts/gwdd.py
--- a/scripts/gwdd.py
+++ b/scripts/gwdd.py
@@ -117,7 +117,8 @@
self.engine.table = table
self.engine.create_table()
self.engine.add_to_table(data)
-
+ self.engine.find_file("GlobalWoodDensityDatabase.xls")
+
return self.engine
SCRIPT = main()
|
{"golden_diff": "diff --git a/scripts/gwdd.py b/scripts/gwdd.py\n--- a/scripts/gwdd.py\n+++ b/scripts/gwdd.py\n@@ -117,7 +117,8 @@\n self.engine.table = table\n self.engine.create_table()\n self.engine.add_to_table(data)\n- \n+ self.engine.find_file(\"GlobalWoodDensityDatabase.xls\")\n+\n return self.engine\n \n SCRIPT = main()\n", "issue": "Download only fails for `gwdd`\n```\nethan@gandalf:~$ retriever download gwdd\n=> Installing Zanne et al. Global wood density database.\nCreating database GWDD...\nCouldn't create database ('NoneType' object has no attribute 'execute'). Trying to continue anyway.\n'table_name'\n```\n\nAnd the file doesn't appear.\n\nFrom `try_install_all.py`:\n\n```\n('download', 'gwdd', KeyError('table_name',))\n```\n\n", "before_files": [{"content": "#retriever\n\"\"\"Retriever script for Zanne et al. Global wood density database.\n\n\"\"\"\nfrom builtins import str\nfrom builtins import range\n\nimport os\nimport sys\nimport xlrd\nfrom retriever.lib.templates import Script\nfrom retriever.lib.models import Table\nfrom retriever.lib.excel import Excel\n\n\n\nclass main(Script):\n def __init__(self, **kwargs):\n Script.__init__(self, **kwargs)\n self.name = \"Zanne et al. Global wood density database.\"\n self.shortname = \"GWDD\"\n self.retriever_minimum_version = '2.0.0-dev'\n self.script_version = '1.0'\n self.urls = {\"GWDD\": \"http://datadryad.org/bitstream/handle/10255/dryad.235/GlobalWoodDensityDatabase.xls?sequence=1\"}\n self.tags = [\"Taxon > Plants\", \"Spatial Scale > Global\",\n \"Data Type > Observational\"]\n self.ref = \"http://datadryad.org/resource/doi:10.5061/dryad.234\"\n self.description = \"A collection and collation of data on the major wood functional traits, including the largest wood density database to date (8412 taxa), mechanical strength measures and anatomical features, as well as clade-specific features such as secondary chemistry.\"\n self.citation = \"Chave J, Coomes DA, Jansen S, Lewis SL, Swenson NG, Zanne AE (2009) Towards a worldwide wood economics spectrum. Ecology Letters 12(4): 351-366. http://dx.doi.org/10.1111/j.1461-0248.2009.01285.x\"\n self.addendum = \"\"\" *Correspondence for updates to the database: [email protected]\n For descriptions of the database, see Chave et al. 2009. Towards a worldwide wood economics spectrum. Ecology Letters. Identifier: http://hdl.handle.net/10255/dryad.234\n\n Below we list the rules of use for the Global wood density database. \n These are developed based on the rules of use for the Glopnet dataset (www.nature.com/nature/journal/v428/n6985/full/nature02403.html) and Cedar Creek LTER and Related Data (http://www.lter.umn.edu/cgi-bin/register). \n If you would like to use the Global wood density database, we request that you:\n 1. Notify the main address of correspondence (Gaby Lopez-Gonzalo) if you plan to use the database in a publication.\n 2. Provide recognition of the efforts of this group in the assembly of the data by using the citation for the database above.\n 3. Recognize that these data were assembled by the group for various analyses and research questions. 
If any of these uses overlap with your interests, you recognize that group has precedence in addressing these questions.\"\"\"\n\n def download(self, engine=None, debug=False):\n Script.download(self, engine, debug)\n\n self.engine.download_file(self.urls[\"GWDD\"], \"GlobalWoodDensityDatabase.xls\")\n filename = os.path.basename(\"GlobalWoodDensityDatabase.xls\")\n\n book = xlrd.open_workbook(self.engine.format_filename(filename))\n sh = book.sheet_by_index(1)\n rows = sh.nrows\n\n #Creating data table\n lines = []\n for i in range(1, rows):\n row = sh.row(i)\n if not all(Excel.empty_cell(cell) for cell in row):\n this_line = {}\n def format_value(s):\n s = Excel.cell_value(s)\n return str(s).title().replace(\"\\\\\", \"/\").replace('\"', '')\n for num, label in enumerate([\"Number\", \"Family\", \"Binomial\", \"Wood_Density\",\n \"Region\", \"Reference_Number\"]):\n this_line[label] = format_value(row[num])\n lines.append(this_line)\n\n table = Table(\"data\", delimiter=\"\\t\")\n table.columns=[(\"Number\" , (\"pk-int\",) ),\n (\"Family\" , (\"char\",) ),\n (\"Binomial\" , (\"char\",) ),\n (\"Wood_Density\" , (\"double\",) ),\n (\"Region\" , (\"char\",) ),\n (\"Reference_Number\" , (\"int\",) )]\n table.pk = 'Number'\n table.contains_pk = True\n\n gwdd = []\n for line in lines:\n gwdd_data = [line[\"Number\"],\n line[\"Family\"],\n line[\"Binomial\"],\n line[\"Wood_Density\"],\n line[\"Region\"],\n line[\"Reference_Number\"]]\n gwdd.append(gwdd_data)\n\n data = ['\\t'.join(gwdd_line) for gwdd_line in gwdd]\n self.engine.table = table\n self.engine.create_table()\n self.engine.add_to_table(data)\n\n #Creating reference table\n lines = []\n sh = book.sheet_by_index(2)\n rows = sh.nrows\n for i in range(1, rows):\n row = sh.row(i)\n if not all(Excel.empty_cell(cell) for cell in row):\n this_line = {}\n def format_value(s):\n s = Excel.cell_value(s)\n return str(s).title().replace(\"\\\\\", \"/\").replace('\"', '')\n for num, label in enumerate([\"Reference_Number\", \"Reference\"]):\n this_line[label] = format_value(row[num])\n lines.append(this_line)\n\n table = Table(\"reference\", delimiter=\"\\t\")\n table.columns=[(\"Reference_Number\" , (\"pk-int\",) ),\n (\"Reference\" , (\"char\",) )]\n table.pk = 'Reference_Number'\n table.contains_pk = True\n\n gwdd = []\n for line in lines:\n gwdd_ref = [line[\"Reference_Number\"],\n line[\"Reference\"]]\n gwdd.append(gwdd_ref)\n\n data = ['\\t'.join(gwdd_line) for gwdd_line in gwdd]\n self.engine.table = table\n self.engine.create_table()\n self.engine.add_to_table(data)\n \n return self.engine\n\nSCRIPT = main()\n\n", "path": "scripts/gwdd.py"}], "after_files": [{"content": "#retriever\n\"\"\"Retriever script for Zanne et al. Global wood density database.\n\n\"\"\"\nfrom builtins import str\nfrom builtins import range\n\nimport os\nimport sys\nimport xlrd\nfrom retriever.lib.templates import Script\nfrom retriever.lib.models import Table\nfrom retriever.lib.excel import Excel\n\n\n\nclass main(Script):\n def __init__(self, **kwargs):\n Script.__init__(self, **kwargs)\n self.name = \"Zanne et al. 
Global wood density database.\"\n self.shortname = \"GWDD\"\n self.retriever_minimum_version = '2.0.0-dev'\n self.script_version = '1.0'\n self.urls = {\"GWDD\": \"http://datadryad.org/bitstream/handle/10255/dryad.235/GlobalWoodDensityDatabase.xls?sequence=1\"}\n self.tags = [\"Taxon > Plants\", \"Spatial Scale > Global\",\n \"Data Type > Observational\"]\n self.ref = \"http://datadryad.org/resource/doi:10.5061/dryad.234\"\n self.description = \"A collection and collation of data on the major wood functional traits, including the largest wood density database to date (8412 taxa), mechanical strength measures and anatomical features, as well as clade-specific features such as secondary chemistry.\"\n self.citation = \"Chave J, Coomes DA, Jansen S, Lewis SL, Swenson NG, Zanne AE (2009) Towards a worldwide wood economics spectrum. Ecology Letters 12(4): 351-366. http://dx.doi.org/10.1111/j.1461-0248.2009.01285.x\"\n self.addendum = \"\"\" *Correspondence for updates to the database: [email protected]\n For descriptions of the database, see Chave et al. 2009. Towards a worldwide wood economics spectrum. Ecology Letters. Identifier: http://hdl.handle.net/10255/dryad.234\n\n Below we list the rules of use for the Global wood density database. \n These are developed based on the rules of use for the Glopnet dataset (www.nature.com/nature/journal/v428/n6985/full/nature02403.html) and Cedar Creek LTER and Related Data (http://www.lter.umn.edu/cgi-bin/register). \n If you would like to use the Global wood density database, we request that you:\n 1. Notify the main address of correspondence (Gaby Lopez-Gonzalo) if you plan to use the database in a publication.\n 2. Provide recognition of the efforts of this group in the assembly of the data by using the citation for the database above.\n 3. Recognize that these data were assembled by the group for various analyses and research questions. 
If any of these uses overlap with your interests, you recognize that group has precedence in addressing these questions.\"\"\"\n\n def download(self, engine=None, debug=False):\n Script.download(self, engine, debug)\n\n self.engine.download_file(self.urls[\"GWDD\"], \"GlobalWoodDensityDatabase.xls\")\n filename = os.path.basename(\"GlobalWoodDensityDatabase.xls\")\n\n book = xlrd.open_workbook(self.engine.format_filename(filename))\n sh = book.sheet_by_index(1)\n rows = sh.nrows\n\n #Creating data table\n lines = []\n for i in range(1, rows):\n row = sh.row(i)\n if not all(Excel.empty_cell(cell) for cell in row):\n this_line = {}\n def format_value(s):\n s = Excel.cell_value(s)\n return str(s).title().replace(\"\\\\\", \"/\").replace('\"', '')\n for num, label in enumerate([\"Number\", \"Family\", \"Binomial\", \"Wood_Density\",\n \"Region\", \"Reference_Number\"]):\n this_line[label] = format_value(row[num])\n lines.append(this_line)\n\n table = Table(\"data\", delimiter=\"\\t\")\n table.columns=[(\"Number\" , (\"pk-int\",) ),\n (\"Family\" , (\"char\",) ),\n (\"Binomial\" , (\"char\",) ),\n (\"Wood_Density\" , (\"double\",) ),\n (\"Region\" , (\"char\",) ),\n (\"Reference_Number\" , (\"int\",) )]\n table.pk = 'Number'\n table.contains_pk = True\n\n gwdd = []\n for line in lines:\n gwdd_data = [line[\"Number\"],\n line[\"Family\"],\n line[\"Binomial\"],\n line[\"Wood_Density\"],\n line[\"Region\"],\n line[\"Reference_Number\"]]\n gwdd.append(gwdd_data)\n\n data = ['\\t'.join(gwdd_line) for gwdd_line in gwdd]\n self.engine.table = table\n self.engine.create_table()\n self.engine.add_to_table(data)\n\n #Creating reference table\n lines = []\n sh = book.sheet_by_index(2)\n rows = sh.nrows\n for i in range(1, rows):\n row = sh.row(i)\n if not all(Excel.empty_cell(cell) for cell in row):\n this_line = {}\n def format_value(s):\n s = Excel.cell_value(s)\n return str(s).title().replace(\"\\\\\", \"/\").replace('\"', '')\n for num, label in enumerate([\"Reference_Number\", \"Reference\"]):\n this_line[label] = format_value(row[num])\n lines.append(this_line)\n\n table = Table(\"reference\", delimiter=\"\\t\")\n table.columns=[(\"Reference_Number\" , (\"pk-int\",) ),\n (\"Reference\" , (\"char\",) )]\n table.pk = 'Reference_Number'\n table.contains_pk = True\n\n gwdd = []\n for line in lines:\n gwdd_ref = [line[\"Reference_Number\"],\n line[\"Reference\"]]\n gwdd.append(gwdd_ref)\n\n data = ['\\t'.join(gwdd_line) for gwdd_line in gwdd]\n self.engine.table = table\n self.engine.create_table()\n self.engine.add_to_table(data)\n self.engine.find_file(\"GlobalWoodDensityDatabase.xls\")\n\n return self.engine\n\nSCRIPT = main()\n\n", "path": "scripts/gwdd.py"}]}
| 2,001 | 92 |
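The one-line fix above calls `self.engine.find_file("GlobalWoodDensityDatabase.xls")` after the tables are built, presumably so the download-only mode can locate the raw workbook it just fetched. Separately, the recipe's core loop (open the workbook with `xlrd`, walk a sheet, emit tab-separated rows) is worth seeing in isolation; the sketch below is a minimal standalone version of that pattern, with the file name and sheet index taken from the script and a plain empty-cell test standing in for retriever's own `Excel.empty_cell` helper:

```python
# Minimal sketch of the xlrd sheet-to-TSV pattern used by the gwdd recipe above.
import xlrd

def sheet_to_tsv_lines(path, sheet_index=1, skip_header=True):
    book = xlrd.open_workbook(path)
    sheet = book.sheet_by_index(sheet_index)
    first_row = 1 if skip_header else 0
    for i in range(first_row, sheet.nrows):
        row = sheet.row(i)
        # Skip rows whose cells are all empty, as the recipe does with its helper.
        if all(cell.ctype == xlrd.XL_CELL_EMPTY for cell in row):
            continue
        yield "\t".join(str(cell.value) for cell in row)

if __name__ == "__main__":
    for line in sheet_to_tsv_lines("GlobalWoodDensityDatabase.xls"):
        print(line)
```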
gh_patches_debug_19263
|
rasdani/github-patches
|
git_diff
|
ytdl-org__youtube-dl-12391
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[fox.com] unable to get season_number and episode_number
## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like that [x])
- Use *Preview* tab to see how your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.02.21*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.02.21**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [ ] Site support request (request for adding support for a new site)
- [x] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
---
### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*
---
### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:
Add `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):
```
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['--ap-mso', 'DTV', '--ap-username', 'PRIVATE', '--ap-password', 'PRIVATE', '-v', '--get-filename', '-o', '%(season)s s%(season_number)se%(episode_number)s - %(title)s.%(ext)s', 'http://www.fox.com/watch/796751939630/7684301568']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2017.02.17
[debug] Python version 3.4.4 - Windows-10-10.0.14393
[debug] exe versions: ffmpeg N-82092-g89ec4ad, ffprobe N-82092-g89ec4ad, rtmpdump 2.4
[debug] Proxy map: {}
NA sNAeNA - Chris Has Got a Date, Date, Date, Date, Date.mp4
```
---
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: http://www.fox.com/watch/868757571821/7684301568
- Single video: http://www.fox.com/watch/853172291669/7684301568
- Single video: http://www.fox.com/watch/835908163744/7684301568
- Single video: http://www.fox.com/watch/796751939630/7684301568
Note that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.
---
### Description of your *issue*, suggested solution and other information
Unable to get season, season_number, and episode_number. Output results in NA for each field. I am able to grab the title. The links I provided clearly show the season number and episode number on the web page. I think the info is contained in the following script example from the last link I provided above.
```
<script type="application/ld+json">{"@context":"http:\/\/schema.org","@id":"801550915931","@type":"TVEpisode","name":"Chris Has Got a Date, Date, Date, Date, Date","description":"Taylor Swift agrees to go to a homecoming dance with Chris after he asks her via online video.","url":"http:\/\/www.fox.com\/watch\/796751939630\/7684301568","image":"http:\/\/www.fox.com\/sites\/default\/files\/video-image-thumb\/eacx02_376_03a_0152_hires2_1280x720_801543747530.jpg","episodeNumber":"5","partOfSeason":{"@type":"TVSeason","seasonNumber":"15"},"partOfSeries":{"@type":"TVSeries","name":"Family Guy","sameAs":"http:\/\/www.imdb.com\/title\/tt0182576\/","url":"http:\/\/www.fox.com\/family-guy"},"releasedEvent":{"@type":"PublicationEvent","startDate":"2016-11-06","location":{"@type":"Country","name":"US"}},"potentialAction":[{"@type":"WatchAction","target":["http:\/\/www.fox.com\/watch\/796751939630\/7684301568?cmpid=xtra:-FOX-Goog_Watch","android-app:\/\/com.fox.now\/foxapp\/video_fullepisode\/freewheel\/796751939630?cmpid=xtra:-FOX-Goog_Watch","ios-app:\/\/571096102\/foxapp\/video_fullepisode\/freewheel\/796751939630?cmpid=xtra:-FOX-Goog_Watch"],"expectsAcceptanceOf":[{"@type":"Offer","availabilityStarts":"1969-12-31T19:00:00-05:00","availabilityEnds":"2016-11-14T03:01:00-05:00","category":"externalSubscription","eligibleRegion":[{"@type":"Country","name":"US"},{"@type":"Country","name":"AS"},{"@type":"Country","name":"GU"},{"@type":"Country","name":"PR"},{"@type":"Country","name":"VI"},{"@type":"Country","name":"UM"}]},{"@type":"Offer","availabilityStarts":"2016-11-14T03:01:00-05:00","availabilityEnds":"2017-01-09T02:59:00-05:00","category":"free","eligibleRegion":[{"@type":"Country","name":"US"},{"@type":"Country","name":"AS"},{"@type":"Country","name":"GU"},{"@type":"Country","name":"PR"},{"@type":"Country","name":"VI"},{"@type":"Country","name":"UM"}]},{"@type":"Offer","availabilityStarts":"2017-01-09T02:59:00-05:00","availabilityEnds":"2017-06-29T02:59:00-04:00","category":"externalSubscription","eligibleRegion":[{"@type":"Country","name":"US"},{"@type":"Country","name":"AS"},{"@type":"Country","name":"GU"},{"@type":"Country","name":"PR"},{"@type":"Country","name":"VI"},{"@type":"Country","name":"UM"}]}]},{"@type":"WatchAction","target":[{"@type":"EntryPoint","urlTemplate":"http:\/\/www.fox.com\/watch\/796751939630\/7684301568?cmpid=xtra:-FOX-Goog_Watch","actionPlatform":"http:\/\/schema.org\/DesktopWebPlatform","inLanguage":"en"},{"@type":"EntryPoint","urlTemplate":"android-app:\/\/com.fox.now\/foxapp\/video_fullepisode\/freewheel\/796751939630?extcmp=xtra_FOXNOW_Goog_Watch","actionPlatform":"http:\/\/schema.org\/AndroidPlatform","inLanguage":"en"},{"@type":"EntryPoint","urlTemplate":"http:\/\/www.fox.com\/watch\/796751939630\/7684301568?extcmp=xtra_FOXNOW_Goog_Watch","actionPlatform":"http:\/\/schema.org\/IOSPlatform","inLanguage":"en"}],"expectsAcceptanceOf":[{"@type":"Offer","availabilityStarts":"1969-12-31T19:00:00-05:00","availabilityEnds":"2016-11-14T03:01:00-05:00","category":"externalSubscription","eligibleRegion":[{"@type":"Country","name":"US"},{"@type":"Country","name":"AS"},{"@type":"Country","name":"GU"},{"@type":"Country","name":"PR"},{"@type":"Country","name":"VI"},{"@type":"Country","name":"UM"}]},{"@type":"Offer","availabilityStarts":"2016-11-14T03:01:00-05:00","availabilityEnds":"2017-01-09T02:59:00-05:00","category":"free","eligibleRegion":[{"@type":"Country","name":"US"},{"@type":"Country","name":"AS"},{"@type":"Country","name":"GU"},{"@type":"Country","name":"PR"},{"@type":"Co
untry","name":"VI"},{"@type":"Country","name":"UM"}]},{"@type":"Offer","availabilityStarts":"2017-01-09T02:59:00-05:00","availabilityEnds":"2017-06-29T02:59:00-04:00","category":"externalSubscription","eligibleRegion":[{"@type":"Country","name":"US"},{"@type":"Country","name":"AS"},{"@type":"Country","name":"GU"},{"@type":"Country","name":"PR"},{"@type":"Country","name":"VI"},{"@type":"Country","name":"UM"}]}]}],"duration":"PT21M36S"}</script><meta name="description" content="FULL EPISODE - "Chris Has Got a Date, Date, Date, Date, Date" - Taylor Swift agrees to go to a homecoming dance with Chris after he asks her via online video."/>
```
This issue only occurs on Fox. I do not have this issue with other major networks.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `youtube_dl/extractor/fox.py`
Content:
```
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 from .adobepass import AdobePassIE
5 from ..utils import (
6 smuggle_url,
7 update_url_query,
8 )
9
10
11 class FOXIE(AdobePassIE):
12 _VALID_URL = r'https?://(?:www\.)?fox\.com/watch/(?P<id>[0-9]+)'
13 _TEST = {
14 'url': 'http://www.fox.com/watch/255180355939/7684182528',
15 'md5': 'ebd296fcc41dd4b19f8115d8461a3165',
16 'info_dict': {
17 'id': '255180355939',
18 'ext': 'mp4',
19 'title': 'Official Trailer: Gotham',
20 'description': 'Tracing the rise of the great DC Comics Super-Villains and vigilantes, Gotham reveals an entirely new chapter that has never been told.',
21 'duration': 129,
22 'timestamp': 1400020798,
23 'upload_date': '20140513',
24 'uploader': 'NEWA-FNG-FOXCOM',
25 },
26 'add_ie': ['ThePlatform'],
27 }
28
29 def _real_extract(self, url):
30 video_id = self._match_id(url)
31 webpage = self._download_webpage(url, video_id)
32
33 settings = self._parse_json(self._search_regex(
34 r'jQuery\.extend\(Drupal\.settings\s*,\s*({.+?})\);',
35 webpage, 'drupal settings'), video_id)
36 fox_pdk_player = settings['fox_pdk_player']
37 release_url = fox_pdk_player['release_url']
38 query = {
39 'mbr': 'true',
40 'switch': 'http'
41 }
42 if fox_pdk_player.get('access') == 'locked':
43 ap_p = settings['foxAdobePassProvider']
44 rating = ap_p.get('videoRating')
45 if rating == 'n/a':
46 rating = None
47 resource = self._get_mvpd_resource('fbc-fox', None, ap_p['videoGUID'], rating)
48 query['auth'] = self._extract_mvpd_auth(url, video_id, 'fbc-fox', resource)
49
50 return {
51 '_type': 'url_transparent',
52 'ie_key': 'ThePlatform',
53 'url': smuggle_url(update_url_query(release_url, query), {'force_smil_url': True}),
54 'id': video_id,
55 }
56
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/youtube_dl/extractor/fox.py b/youtube_dl/extractor/fox.py
--- a/youtube_dl/extractor/fox.py
+++ b/youtube_dl/extractor/fox.py
@@ -3,6 +3,7 @@
from .adobepass import AdobePassIE
from ..utils import (
+ int_or_none,
smuggle_url,
update_url_query,
)
@@ -47,9 +48,12 @@
resource = self._get_mvpd_resource('fbc-fox', None, ap_p['videoGUID'], rating)
query['auth'] = self._extract_mvpd_auth(url, video_id, 'fbc-fox', resource)
- return {
+ info = self._search_json_ld(webpage, video_id, fatal=False)
+ info.update({
'_type': 'url_transparent',
'ie_key': 'ThePlatform',
'url': smuggle_url(update_url_query(release_url, query), {'force_smil_url': True}),
'id': video_id,
- }
+ })
+
+ return info
|
{"golden_diff": "diff --git a/youtube_dl/extractor/fox.py b/youtube_dl/extractor/fox.py\n--- a/youtube_dl/extractor/fox.py\n+++ b/youtube_dl/extractor/fox.py\n@@ -3,6 +3,7 @@\n \n from .adobepass import AdobePassIE\n from ..utils import (\n+ int_or_none,\n smuggle_url,\n update_url_query,\n )\n@@ -47,9 +48,12 @@\n resource = self._get_mvpd_resource('fbc-fox', None, ap_p['videoGUID'], rating)\n query['auth'] = self._extract_mvpd_auth(url, video_id, 'fbc-fox', resource)\n \n- return {\n+ info = self._search_json_ld(webpage, video_id, fatal=False)\n+ info.update({\n '_type': 'url_transparent',\n 'ie_key': 'ThePlatform',\n 'url': smuggle_url(update_url_query(release_url, query), {'force_smil_url': True}),\n 'id': video_id,\n- }\n+ })\n+\n+ return info\n", "issue": "[fox.com] unable to get season_number and episode_number\n## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like that [x])\r\n- Use *Preview* tab to see how your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.02.21*. If it's not read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.02.21**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through [README](https://github.com/rg3/youtube-dl/blob/master/README.md) and **most notably** [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [x] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:\r\n\r\nAdd `-v` flag to **your command line** you run youtube-dl with, copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```):\r\n```\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: ['--ap-mso', 'DTV', '--ap-username', 'PRIVATE', '--ap-password', 'PRIVATE', '-v', '--get-filename', '-o', '%(season)s s%(season_number)se%(episode_number)s - %(title)s.%(ext)s', 'http://www.fox.com/watch/796751939630/7684301568']\r\n[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252\r\n[debug] youtube-dl version 2017.02.17\r\n[debug] Python version 3.4.4 - Windows-10-10.0.14393\r\n[debug] exe versions: ffmpeg N-82092-g89ec4ad, ffprobe N-82092-g89ec4ad, rtmpdump 2.4\r\n[debug] Proxy map: {}\r\nNA sNAeNA - Chris Has Got a Date, Date, Date, Date, Date.mp4\r\n```\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):\r\n- Single video: http://www.fox.com/watch/868757571821/7684301568\r\n- Single video: http://www.fox.com/watch/853172291669/7684301568\r\n- Single video: http://www.fox.com/watch/835908163744/7684301568\r\n- Single video: http://www.fox.com/watch/796751939630/7684301568\r\n\r\nNote that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.\r\n\r\n\r\n\r\n---\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nUnable to get season, season_number, and episode_number. Output results in NA for each field. I am able to grab the title. The links I provided clearly show the season number and episode number on the web page. I think the info is contained in the following script example from the last link I provided above. 
\r\n\r\n```\r\n<script type=\"application/ld+json\">{\"@context\":\"http:\\/\\/schema.org\",\"@id\":\"801550915931\",\"@type\":\"TVEpisode\",\"name\":\"Chris Has Got a Date, Date, Date, Date, Date\",\"description\":\"Taylor Swift agrees to go to a homecoming dance with Chris after he asks her via online video.\",\"url\":\"http:\\/\\/www.fox.com\\/watch\\/796751939630\\/7684301568\",\"image\":\"http:\\/\\/www.fox.com\\/sites\\/default\\/files\\/video-image-thumb\\/eacx02_376_03a_0152_hires2_1280x720_801543747530.jpg\",\"episodeNumber\":\"5\",\"partOfSeason\":{\"@type\":\"TVSeason\",\"seasonNumber\":\"15\"},\"partOfSeries\":{\"@type\":\"TVSeries\",\"name\":\"Family Guy\",\"sameAs\":\"http:\\/\\/www.imdb.com\\/title\\/tt0182576\\/\",\"url\":\"http:\\/\\/www.fox.com\\/family-guy\"},\"releasedEvent\":{\"@type\":\"PublicationEvent\",\"startDate\":\"2016-11-06\",\"location\":{\"@type\":\"Country\",\"name\":\"US\"}},\"potentialAction\":[{\"@type\":\"WatchAction\",\"target\":[\"http:\\/\\/www.fox.com\\/watch\\/796751939630\\/7684301568?cmpid=xtra:-FOX-Goog_Watch\",\"android-app:\\/\\/com.fox.now\\/foxapp\\/video_fullepisode\\/freewheel\\/796751939630?cmpid=xtra:-FOX-Goog_Watch\",\"ios-app:\\/\\/571096102\\/foxapp\\/video_fullepisode\\/freewheel\\/796751939630?cmpid=xtra:-FOX-Goog_Watch\"],\"expectsAcceptanceOf\":[{\"@type\":\"Offer\",\"availabilityStarts\":\"1969-12-31T19:00:00-05:00\",\"availabilityEnds\":\"2016-11-14T03:01:00-05:00\",\"category\":\"externalSubscription\",\"eligibleRegion\":[{\"@type\":\"Country\",\"name\":\"US\"},{\"@type\":\"Country\",\"name\":\"AS\"},{\"@type\":\"Country\",\"name\":\"GU\"},{\"@type\":\"Country\",\"name\":\"PR\"},{\"@type\":\"Country\",\"name\":\"VI\"},{\"@type\":\"Country\",\"name\":\"UM\"}]},{\"@type\":\"Offer\",\"availabilityStarts\":\"2016-11-14T03:01:00-05:00\",\"availabilityEnds\":\"2017-01-09T02:59:00-05:00\",\"category\":\"free\",\"eligibleRegion\":[{\"@type\":\"Country\",\"name\":\"US\"},{\"@type\":\"Country\",\"name\":\"AS\"},{\"@type\":\"Country\",\"name\":\"GU\"},{\"@type\":\"Country\",\"name\":\"PR\"},{\"@type\":\"Country\",\"name\":\"VI\"},{\"@type\":\"Country\",\"name\":\"UM\"}]},{\"@type\":\"Offer\",\"availabilityStarts\":\"2017-01-09T02:59:00-05:00\",\"availabilityEnds\":\"2017-06-29T02:59:00-04:00\",\"category\":\"externalSubscription\",\"eligibleRegion\":[{\"@type\":\"Country\",\"name\":\"US\"},{\"@type\":\"Country\",\"name\":\"AS\"},{\"@type\":\"Country\",\"name\":\"GU\"},{\"@type\":\"Country\",\"name\":\"PR\"},{\"@type\":\"Country\",\"name\":\"VI\"},{\"@type\":\"Country\",\"name\":\"UM\"}]}]},{\"@type\":\"WatchAction\",\"target\":[{\"@type\":\"EntryPoint\",\"urlTemplate\":\"http:\\/\\/www.fox.com\\/watch\\/796751939630\\/7684301568?cmpid=xtra:-FOX-Goog_Watch\",\"actionPlatform\":\"http:\\/\\/schema.org\\/DesktopWebPlatform\",\"inLanguage\":\"en\"},{\"@type\":\"EntryPoint\",\"urlTemplate\":\"android-app:\\/\\/com.fox.now\\/foxapp\\/video_fullepisode\\/freewheel\\/796751939630?extcmp=xtra_FOXNOW_Goog_Watch\",\"actionPlatform\":\"http:\\/\\/schema.org\\/AndroidPlatform\",\"inLanguage\":\"en\"},{\"@type\":\"EntryPoint\",\"urlTemplate\":\"http:\\/\\/www.fox.com\\/watch\\/796751939630\\/7684301568?extcmp=xtra_FOXNOW_Goog_Watch\",\"actionPlatform\":\"http:\\/\\/schema.org\\/IOSPlatform\",\"inLanguage\":\"en\"}],\"expectsAcceptanceOf\":[{\"@type\":\"Offer\",\"availabilityStarts\":\"1969-12-31T19:00:00-05:00\",\"availabilityEnds\":\"2016-11-14T03:01:00-05:00\",\"category\":\"externalSubscription\",\"eligibleRegion\":[{\"@type\":\"Country\",\"name\":\"
US\"},{\"@type\":\"Country\",\"name\":\"AS\"},{\"@type\":\"Country\",\"name\":\"GU\"},{\"@type\":\"Country\",\"name\":\"PR\"},{\"@type\":\"Country\",\"name\":\"VI\"},{\"@type\":\"Country\",\"name\":\"UM\"}]},{\"@type\":\"Offer\",\"availabilityStarts\":\"2016-11-14T03:01:00-05:00\",\"availabilityEnds\":\"2017-01-09T02:59:00-05:00\",\"category\":\"free\",\"eligibleRegion\":[{\"@type\":\"Country\",\"name\":\"US\"},{\"@type\":\"Country\",\"name\":\"AS\"},{\"@type\":\"Country\",\"name\":\"GU\"},{\"@type\":\"Country\",\"name\":\"PR\"},{\"@type\":\"Country\",\"name\":\"VI\"},{\"@type\":\"Country\",\"name\":\"UM\"}]},{\"@type\":\"Offer\",\"availabilityStarts\":\"2017-01-09T02:59:00-05:00\",\"availabilityEnds\":\"2017-06-29T02:59:00-04:00\",\"category\":\"externalSubscription\",\"eligibleRegion\":[{\"@type\":\"Country\",\"name\":\"US\"},{\"@type\":\"Country\",\"name\":\"AS\"},{\"@type\":\"Country\",\"name\":\"GU\"},{\"@type\":\"Country\",\"name\":\"PR\"},{\"@type\":\"Country\",\"name\":\"VI\"},{\"@type\":\"Country\",\"name\":\"UM\"}]}]}],\"duration\":\"PT21M36S\"}</script><meta name=\"description\" content=\"FULL EPISODE - "Chris Has Got a Date, Date, Date, Date, Date" - Taylor Swift agrees to go to a homecoming dance with Chris after he asks her via online video.\"/>\r\n```\r\n\r\nThis issue only occurs on Fox. I do not have this issue with other major networks. \n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nfrom .adobepass import AdobePassIE\nfrom ..utils import (\n smuggle_url,\n update_url_query,\n)\n\n\nclass FOXIE(AdobePassIE):\n _VALID_URL = r'https?://(?:www\\.)?fox\\.com/watch/(?P<id>[0-9]+)'\n _TEST = {\n 'url': 'http://www.fox.com/watch/255180355939/7684182528',\n 'md5': 'ebd296fcc41dd4b19f8115d8461a3165',\n 'info_dict': {\n 'id': '255180355939',\n 'ext': 'mp4',\n 'title': 'Official Trailer: Gotham',\n 'description': 'Tracing the rise of the great DC Comics Super-Villains and vigilantes, Gotham reveals an entirely new chapter that has never been told.',\n 'duration': 129,\n 'timestamp': 1400020798,\n 'upload_date': '20140513',\n 'uploader': 'NEWA-FNG-FOXCOM',\n },\n 'add_ie': ['ThePlatform'],\n }\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n webpage = self._download_webpage(url, video_id)\n\n settings = self._parse_json(self._search_regex(\n r'jQuery\\.extend\\(Drupal\\.settings\\s*,\\s*({.+?})\\);',\n webpage, 'drupal settings'), video_id)\n fox_pdk_player = settings['fox_pdk_player']\n release_url = fox_pdk_player['release_url']\n query = {\n 'mbr': 'true',\n 'switch': 'http'\n }\n if fox_pdk_player.get('access') == 'locked':\n ap_p = settings['foxAdobePassProvider']\n rating = ap_p.get('videoRating')\n if rating == 'n/a':\n rating = None\n resource = self._get_mvpd_resource('fbc-fox', None, ap_p['videoGUID'], rating)\n query['auth'] = self._extract_mvpd_auth(url, video_id, 'fbc-fox', resource)\n\n return {\n '_type': 'url_transparent',\n 'ie_key': 'ThePlatform',\n 'url': smuggle_url(update_url_query(release_url, query), {'force_smil_url': True}),\n 'id': video_id,\n }\n", "path": "youtube_dl/extractor/fox.py"}], "after_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nfrom .adobepass import AdobePassIE\nfrom ..utils import (\n int_or_none,\n smuggle_url,\n update_url_query,\n)\n\n\nclass FOXIE(AdobePassIE):\n _VALID_URL = r'https?://(?:www\\.)?fox\\.com/watch/(?P<id>[0-9]+)'\n _TEST = {\n 'url': 'http://www.fox.com/watch/255180355939/7684182528',\n 'md5': 
'ebd296fcc41dd4b19f8115d8461a3165',\n 'info_dict': {\n 'id': '255180355939',\n 'ext': 'mp4',\n 'title': 'Official Trailer: Gotham',\n 'description': 'Tracing the rise of the great DC Comics Super-Villains and vigilantes, Gotham reveals an entirely new chapter that has never been told.',\n 'duration': 129,\n 'timestamp': 1400020798,\n 'upload_date': '20140513',\n 'uploader': 'NEWA-FNG-FOXCOM',\n },\n 'add_ie': ['ThePlatform'],\n }\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n webpage = self._download_webpage(url, video_id)\n\n settings = self._parse_json(self._search_regex(\n r'jQuery\\.extend\\(Drupal\\.settings\\s*,\\s*({.+?})\\);',\n webpage, 'drupal settings'), video_id)\n fox_pdk_player = settings['fox_pdk_player']\n release_url = fox_pdk_player['release_url']\n query = {\n 'mbr': 'true',\n 'switch': 'http'\n }\n if fox_pdk_player.get('access') == 'locked':\n ap_p = settings['foxAdobePassProvider']\n rating = ap_p.get('videoRating')\n if rating == 'n/a':\n rating = None\n resource = self._get_mvpd_resource('fbc-fox', None, ap_p['videoGUID'], rating)\n query['auth'] = self._extract_mvpd_auth(url, video_id, 'fbc-fox', resource)\n\n info = self._search_json_ld(webpage, video_id, fatal=False)\n info.update({\n '_type': 'url_transparent',\n 'ie_key': 'ThePlatform',\n 'url': smuggle_url(update_url_query(release_url, query), {'force_smil_url': True}),\n 'id': video_id,\n })\n\n return info\n", "path": "youtube_dl/extractor/fox.py"}]}
| 3,597 | 246 |
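The patch above fixes the missing fields by letting the extractor read the page's JSON-LD metadata via `_search_json_ld`. As a rough standalone illustration of what that lookup amounts to, the sketch below pulls the episode and season numbers out of a schema.org `TVEpisode` payload with the standard library only; the sample is trimmed from the blob quoted in the issue, and youtube-dl's real helper does much more (locating the `<script type="application/ld+json">` tag, handling lists of objects, mapping onto its own field names):

```python
# Rough sketch: extract season/episode numbers from a schema.org TVEpisode
# JSON-LD payload like the one quoted in the fox.com issue above.
import json

SAMPLE_JSON_LD = """
{"@context": "http://schema.org", "@type": "TVEpisode",
 "name": "Chris Has Got a Date, Date, Date, Date, Date",
 "episodeNumber": "5",
 "partOfSeason": {"@type": "TVSeason", "seasonNumber": "15"},
 "partOfSeries": {"@type": "TVSeries", "name": "Family Guy"}}
"""

def episode_info(json_ld_text):
    data = json.loads(json_ld_text)
    if data.get("@type") != "TVEpisode":
        return {}
    season = (data.get("partOfSeason") or {}).get("seasonNumber")
    return {
        "episode": data.get("name"),
        "episode_number": int(data["episodeNumber"]) if data.get("episodeNumber") else None,
        "season_number": int(season) if season else None,
        "series": (data.get("partOfSeries") or {}).get("name"),
    }

if __name__ == "__main__":
    print(episode_info(SAMPLE_JSON_LD))
```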
gh_patches_debug_2584
|
rasdani/github-patches
|
git_diff
|
kivy__python-for-android-2180
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issues introduced by PR #2113 (SDL2)
As said on Discord #dev channel yesterday, PR #2113 introduces a lot of blocking issues.
These are the results of the tests done by me, @AndreMiras and @opacam :
- `sdl2==2.0.10` has issues that have been solved by the SDL2 team, so it needs to be bumped to `2.0.12`.
- `sdl2==2.0.12` works but create freezes during runtime.
- These freezes are definitely related to the new `SDL_LockMutex` / `SDL_UnlockMutex` mechanism they added for concurrency issues.
- Commenting `SDL_LockMutex` on `Touch` related events fixes the freeze issue for non-fullscreen apps.
- On fullscreen apps, the patch is also needed on `Resize`, etc. events.
I'm providing an attached patch that fixes the issues on top of `2.0.12`; that said, this does not seem like a good idea, so it needs some more investigation:
[disable_mutex.txt](https://github.com/kivy/python-for-android/files/4569870/disable_mutex.txt)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pythonforandroid/recipes/sdl2/__init__.py`
Content:
```
1 from pythonforandroid.recipe import BootstrapNDKRecipe
2 from pythonforandroid.toolchain import current_directory, shprint
3 import sh
4
5
6 class LibSDL2Recipe(BootstrapNDKRecipe):
7 version = "2.0.10"
8 url = "https://www.libsdl.org/release/SDL2-{version}.zip"
9 md5sum = "6b2e9a4a2faba4ff277062cf669724f4"
10
11 dir_name = 'SDL'
12
13 depends = ['sdl2_image', 'sdl2_mixer', 'sdl2_ttf']
14
15 def get_recipe_env(self, arch=None, with_flags_in_cc=True, with_python=True):
16 env = super().get_recipe_env(
17 arch=arch, with_flags_in_cc=with_flags_in_cc, with_python=with_python)
18 env['APP_ALLOW_MISSING_DEPS'] = 'true'
19 return env
20
21 def build_arch(self, arch):
22 env = self.get_recipe_env(arch)
23
24 with current_directory(self.get_jni_dir()):
25 shprint(
26 sh.ndk_build,
27 "V=1",
28 "NDK_DEBUG=" + ("1" if self.ctx.build_as_debuggable else "0"),
29 _env=env
30 )
31
32
33 recipe = LibSDL2Recipe()
34
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pythonforandroid/recipes/sdl2/__init__.py b/pythonforandroid/recipes/sdl2/__init__.py
--- a/pythonforandroid/recipes/sdl2/__init__.py
+++ b/pythonforandroid/recipes/sdl2/__init__.py
@@ -4,9 +4,9 @@
class LibSDL2Recipe(BootstrapNDKRecipe):
- version = "2.0.10"
- url = "https://www.libsdl.org/release/SDL2-{version}.zip"
- md5sum = "6b2e9a4a2faba4ff277062cf669724f4"
+ version = "2.0.9"
+ url = "https://www.libsdl.org/release/SDL2-{version}.tar.gz"
+ md5sum = 'f2ecfba915c54f7200f504d8b48a5dfe'
dir_name = 'SDL'
|
{"golden_diff": "diff --git a/pythonforandroid/recipes/sdl2/__init__.py b/pythonforandroid/recipes/sdl2/__init__.py\n--- a/pythonforandroid/recipes/sdl2/__init__.py\n+++ b/pythonforandroid/recipes/sdl2/__init__.py\n@@ -4,9 +4,9 @@\n \n \n class LibSDL2Recipe(BootstrapNDKRecipe):\n- version = \"2.0.10\"\n- url = \"https://www.libsdl.org/release/SDL2-{version}.zip\"\n- md5sum = \"6b2e9a4a2faba4ff277062cf669724f4\"\n+ version = \"2.0.9\"\n+ url = \"https://www.libsdl.org/release/SDL2-{version}.tar.gz\"\n+ md5sum = 'f2ecfba915c54f7200f504d8b48a5dfe'\n \n dir_name = 'SDL'\n", "issue": "Issues introduced by PR #2113 (SDL2)\nAs said on Discord #dev channel yesterday, PR #2113 introduces a lot of blocking issues.\r\n\r\nThese are the results of the tests done by me, @AndreMiras and @opacam :\r\n\r\n- `sdl2==2.0.10` have issues that have been solved by the SDL2 team, so it needs to be bumped to `2.0.12`.\r\n\r\n- `sdl2==2.0.12` works but create freezes during runtime.\r\n\r\n- These freezes are definitely related to the new `SDL_LockMutex` / `SDL_UnlockMutex` mechanism they added for concurrency issues.\r\n\r\n- Commenting `SDL_LockMutex` on `Touch` related events fixes the freeze issue for non-fullscreen apps.\r\n\r\n- On fullscreen apps, the patch it's also needed on `Resize, .. etc` events.\r\n\r\nI'm providing an attached patch that fixes the issues on top of `2.0.12`, btw seems not a good idea to do that, so it needs some more investigation:\r\n\r\n[disable_mutex.txt](https://github.com/kivy/python-for-android/files/4569870/disable_mutex.txt)\r\n\r\n\n", "before_files": [{"content": "from pythonforandroid.recipe import BootstrapNDKRecipe\nfrom pythonforandroid.toolchain import current_directory, shprint\nimport sh\n\n\nclass LibSDL2Recipe(BootstrapNDKRecipe):\n version = \"2.0.10\"\n url = \"https://www.libsdl.org/release/SDL2-{version}.zip\"\n md5sum = \"6b2e9a4a2faba4ff277062cf669724f4\"\n\n dir_name = 'SDL'\n\n depends = ['sdl2_image', 'sdl2_mixer', 'sdl2_ttf']\n\n def get_recipe_env(self, arch=None, with_flags_in_cc=True, with_python=True):\n env = super().get_recipe_env(\n arch=arch, with_flags_in_cc=with_flags_in_cc, with_python=with_python)\n env['APP_ALLOW_MISSING_DEPS'] = 'true'\n return env\n\n def build_arch(self, arch):\n env = self.get_recipe_env(arch)\n\n with current_directory(self.get_jni_dir()):\n shprint(\n sh.ndk_build,\n \"V=1\",\n \"NDK_DEBUG=\" + (\"1\" if self.ctx.build_as_debuggable else \"0\"),\n _env=env\n )\n\n\nrecipe = LibSDL2Recipe()\n", "path": "pythonforandroid/recipes/sdl2/__init__.py"}], "after_files": [{"content": "from pythonforandroid.recipe import BootstrapNDKRecipe\nfrom pythonforandroid.toolchain import current_directory, shprint\nimport sh\n\n\nclass LibSDL2Recipe(BootstrapNDKRecipe):\n version = \"2.0.9\"\n url = \"https://www.libsdl.org/release/SDL2-{version}.tar.gz\"\n md5sum = 'f2ecfba915c54f7200f504d8b48a5dfe'\n\n dir_name = 'SDL'\n\n depends = ['sdl2_image', 'sdl2_mixer', 'sdl2_ttf']\n\n def get_recipe_env(self, arch=None, with_flags_in_cc=True, with_python=True):\n env = super().get_recipe_env(\n arch=arch, with_flags_in_cc=with_flags_in_cc, with_python=with_python)\n env['APP_ALLOW_MISSING_DEPS'] = 'true'\n return env\n\n def build_arch(self, arch):\n env = self.get_recipe_env(arch)\n\n with current_directory(self.get_jni_dir()):\n shprint(\n sh.ndk_build,\n \"V=1\",\n \"NDK_DEBUG=\" + (\"1\" if self.ctx.build_as_debuggable else \"0\"),\n _env=env\n )\n\n\nrecipe = LibSDL2Recipe()\n", "path": "pythonforandroid/recipes/sdl2/__init__.py"}]}
| 873 | 227 |
gh_patches_debug_1952
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-1709
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
.dockerignore does not work with patterns begin with slash
docker version:
```
docker -v
Docker version 17.03.1-ce, build c6d412e
```
reproduce:
```
mkdir app
cd app
mkdir foo
touch foo/bar
echo '/foo/bar' > .dockerignore
printf 'FROM alpine:3.1\nWORKDIR /app\nCOPY . .\n' > Dockerfile
docker build -t app .
docker run --rm app find foo
```
output:
```
foo
foo/bar
```
It seems the statement from [the official document](https://docs.docker.com/engine/reference/builder/#dockerignore-file) below is not correct:
> For example, the patterns `/foo/bar` and `foo/bar` both exclude a file or directory named `bar` in the `foo` subdirectory of `PATH` or in the root of the git repository located at `URL`.
We should either amend the document or fix the bug.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docker/utils/build.py`
Content:
```
1 import os
2
3 from ..constants import IS_WINDOWS_PLATFORM
4 from .fnmatch import fnmatch
5 from .utils import create_archive
6
7
8 def tar(path, exclude=None, dockerfile=None, fileobj=None, gzip=False):
9 root = os.path.abspath(path)
10 exclude = exclude or []
11
12 return create_archive(
13 files=sorted(exclude_paths(root, exclude, dockerfile=dockerfile)),
14 root=root, fileobj=fileobj, gzip=gzip
15 )
16
17
18 def exclude_paths(root, patterns, dockerfile=None):
19 """
20 Given a root directory path and a list of .dockerignore patterns, return
21 an iterator of all paths (both regular files and directories) in the root
22 directory that do *not* match any of the patterns.
23
24 All paths returned are relative to the root.
25 """
26 if dockerfile is None:
27 dockerfile = 'Dockerfile'
28
29 exceptions = [p for p in patterns if p.startswith('!')]
30
31 include_patterns = [p[1:] for p in exceptions]
32 include_patterns += [dockerfile, '.dockerignore']
33
34 exclude_patterns = list(set(patterns) - set(exceptions))
35
36 paths = get_paths(root, exclude_patterns, include_patterns,
37 has_exceptions=len(exceptions) > 0)
38
39 return set(paths).union(
40 # If the Dockerfile is in a subdirectory that is excluded, get_paths
41 # will not descend into it and the file will be skipped. This ensures
42 # it doesn't happen.
43 set([dockerfile.replace('/', os.path.sep)])
44 if os.path.exists(os.path.join(root, dockerfile)) else set()
45 )
46
47
48 def should_include(path, exclude_patterns, include_patterns):
49 """
50 Given a path, a list of exclude patterns, and a list of inclusion patterns:
51
52 1. Returns True if the path doesn't match any exclusion pattern
53 2. Returns False if the path matches an exclusion pattern and doesn't match
54 an inclusion pattern
55 3. Returns true if the path matches an exclusion pattern and matches an
56 inclusion pattern
57 """
58 for pattern in exclude_patterns:
59 if match_path(path, pattern):
60 for pattern in include_patterns:
61 if match_path(path, pattern):
62 return True
63 return False
64 return True
65
66
67 def should_check_directory(directory_path, exclude_patterns, include_patterns):
68 """
69 Given a directory path, a list of exclude patterns, and a list of inclusion
70 patterns:
71
72 1. Returns True if the directory path should be included according to
73 should_include.
74 2. Returns True if the directory path is the prefix for an inclusion
75 pattern
76 3. Returns False otherwise
77 """
78
79 # To account for exception rules, check directories if their path is a
80 # a prefix to an inclusion pattern. This logic conforms with the current
81 # docker logic (2016-10-27):
82 # https://github.com/docker/docker/blob/bc52939b0455116ab8e0da67869ec81c1a1c3e2c/pkg/archive/archive.go#L640-L671
83
84 def normalize_path(path):
85 return path.replace(os.path.sep, '/')
86
87 path_with_slash = normalize_path(directory_path) + '/'
88 possible_child_patterns = [
89 pattern for pattern in map(normalize_path, include_patterns)
90 if (pattern + '/').startswith(path_with_slash)
91 ]
92 directory_included = should_include(
93 directory_path, exclude_patterns, include_patterns
94 )
95 return directory_included or len(possible_child_patterns) > 0
96
97
98 def get_paths(root, exclude_patterns, include_patterns, has_exceptions=False):
99 paths = []
100
101 for parent, dirs, files in os.walk(root, topdown=True, followlinks=False):
102 parent = os.path.relpath(parent, root)
103 if parent == '.':
104 parent = ''
105
106 # Remove excluded patterns from the list of directories to traverse
107 # by mutating the dirs we're iterating over.
108 # This looks strange, but is considered the correct way to skip
109 # traversal. See https://docs.python.org/2/library/os.html#os.walk
110 dirs[:] = [
111 d for d in dirs if should_check_directory(
112 os.path.join(parent, d), exclude_patterns, include_patterns
113 )
114 ]
115
116 for path in dirs:
117 if should_include(os.path.join(parent, path),
118 exclude_patterns, include_patterns):
119 paths.append(os.path.join(parent, path))
120
121 for path in files:
122 if should_include(os.path.join(parent, path),
123 exclude_patterns, include_patterns):
124 paths.append(os.path.join(parent, path))
125
126 return paths
127
128
129 def match_path(path, pattern):
130 pattern = pattern.rstrip('/' + os.path.sep)
131 if pattern:
132 pattern = os.path.relpath(pattern)
133
134 pattern_components = pattern.split(os.path.sep)
135 if len(pattern_components) == 1 and IS_WINDOWS_PLATFORM:
136 pattern_components = pattern.split('/')
137
138 if '**' not in pattern:
139 path_components = path.split(os.path.sep)[:len(pattern_components)]
140 else:
141 path_components = path.split(os.path.sep)
142 return fnmatch('/'.join(path_components), '/'.join(pattern_components))
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docker/utils/build.py b/docker/utils/build.py
--- a/docker/utils/build.py
+++ b/docker/utils/build.py
@@ -26,6 +26,7 @@
if dockerfile is None:
dockerfile = 'Dockerfile'
+ patterns = [p.lstrip('/') for p in patterns]
exceptions = [p for p in patterns if p.startswith('!')]
include_patterns = [p[1:] for p in exceptions]
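For illustration, a minimal standalone sketch of why stripping the leading slash is sufficient (this is not docker-py code; the stdlib `fnmatch` stands in for docker-py's own `fnmatch` helper, and the full `match_path` component logic is omitted). Paths produced by `get_paths` are relative to the build context root and never begin with `/`, so a pattern such as `/foo/bar` can only match once it is normalized to `foo/bar`:

```python
# Hypothetical sketch, not part of docker-py: demonstrates the effect of the
# one-line fix `patterns = [p.lstrip('/') for p in patterns]`.
from fnmatch import fnmatch  # stand-in for docker.utils.fnmatch.fnmatch

relative_path = "foo/bar"  # as yielded by os.walk(), relative to the context root

for raw in ("/foo/bar", "foo/bar"):
    stripped = raw.lstrip("/")  # the normalization added by the patch
    print(
        f"pattern {raw!r}: matches without fix = {fnmatch(relative_path, raw)}, "
        f"with fix = {fnmatch(relative_path, stripped)}"
    )
# pattern '/foo/bar': matches without fix = False, with fix = True
# pattern 'foo/bar' : matches without fix = True,  with fix = True
```

With the normalization in place, both spellings exclude `foo/bar`, which is the behaviour the .dockerignore documentation describes.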
|
{"golden_diff": "diff --git a/docker/utils/build.py b/docker/utils/build.py\n--- a/docker/utils/build.py\n+++ b/docker/utils/build.py\n@@ -26,6 +26,7 @@\n if dockerfile is None:\n dockerfile = 'Dockerfile'\n \n+ patterns = [p.lstrip('/') for p in patterns]\n exceptions = [p for p in patterns if p.startswith('!')]\n \n include_patterns = [p[1:] for p in exceptions]\n", "issue": ".dockerignore does not work with patterns begin with slash\ndocker version:\r\n\r\n```\r\ndocker -v\r\nDocker version 17.03.1-ce, build c6d412e\r\n```\r\nreproduce:\r\n\r\n```\r\nmkdir app\r\ncd app\r\nmkdir foo\r\ntouch foo/bar\r\necho '/foo/bar' > .dockerignore\r\nprintf 'FROM alpine:3.1\\nWORKDIR /app\\nCOPY . .\\n' > Dockerfile\r\ndocker build -t app .\r\ndocker run --rm app find foo\r\n```\r\n\r\noutput:\r\n\r\n```\r\nfoo\r\nfoo/bar\r\n```\r\n\r\nIt seems the statement from [the official document](https://docs.docker.com/engine/reference/builder/#dockerignore-file) below is not correct:\r\n\r\n> For example, the patterns `/foo/bar` and `foo/bar` both exclude a file or directory named `bar` in the `foo` subdirectory of `PATH` or in the root of the git repository located at `URL`.\r\n\r\nWe should either amend the document or fix the bug.\r\n\n", "before_files": [{"content": "import os\n\nfrom ..constants import IS_WINDOWS_PLATFORM\nfrom .fnmatch import fnmatch\nfrom .utils import create_archive\n\n\ndef tar(path, exclude=None, dockerfile=None, fileobj=None, gzip=False):\n root = os.path.abspath(path)\n exclude = exclude or []\n\n return create_archive(\n files=sorted(exclude_paths(root, exclude, dockerfile=dockerfile)),\n root=root, fileobj=fileobj, gzip=gzip\n )\n\n\ndef exclude_paths(root, patterns, dockerfile=None):\n \"\"\"\n Given a root directory path and a list of .dockerignore patterns, return\n an iterator of all paths (both regular files and directories) in the root\n directory that do *not* match any of the patterns.\n\n All paths returned are relative to the root.\n \"\"\"\n if dockerfile is None:\n dockerfile = 'Dockerfile'\n\n exceptions = [p for p in patterns if p.startswith('!')]\n\n include_patterns = [p[1:] for p in exceptions]\n include_patterns += [dockerfile, '.dockerignore']\n\n exclude_patterns = list(set(patterns) - set(exceptions))\n\n paths = get_paths(root, exclude_patterns, include_patterns,\n has_exceptions=len(exceptions) > 0)\n\n return set(paths).union(\n # If the Dockerfile is in a subdirectory that is excluded, get_paths\n # will not descend into it and the file will be skipped. This ensures\n # it doesn't happen.\n set([dockerfile.replace('/', os.path.sep)])\n if os.path.exists(os.path.join(root, dockerfile)) else set()\n )\n\n\ndef should_include(path, exclude_patterns, include_patterns):\n \"\"\"\n Given a path, a list of exclude patterns, and a list of inclusion patterns:\n\n 1. Returns True if the path doesn't match any exclusion pattern\n 2. Returns False if the path matches an exclusion pattern and doesn't match\n an inclusion pattern\n 3. Returns true if the path matches an exclusion pattern and matches an\n inclusion pattern\n \"\"\"\n for pattern in exclude_patterns:\n if match_path(path, pattern):\n for pattern in include_patterns:\n if match_path(path, pattern):\n return True\n return False\n return True\n\n\ndef should_check_directory(directory_path, exclude_patterns, include_patterns):\n \"\"\"\n Given a directory path, a list of exclude patterns, and a list of inclusion\n patterns:\n\n 1. 
Returns True if the directory path should be included according to\n should_include.\n 2. Returns True if the directory path is the prefix for an inclusion\n pattern\n 3. Returns False otherwise\n \"\"\"\n\n # To account for exception rules, check directories if their path is a\n # a prefix to an inclusion pattern. This logic conforms with the current\n # docker logic (2016-10-27):\n # https://github.com/docker/docker/blob/bc52939b0455116ab8e0da67869ec81c1a1c3e2c/pkg/archive/archive.go#L640-L671\n\n def normalize_path(path):\n return path.replace(os.path.sep, '/')\n\n path_with_slash = normalize_path(directory_path) + '/'\n possible_child_patterns = [\n pattern for pattern in map(normalize_path, include_patterns)\n if (pattern + '/').startswith(path_with_slash)\n ]\n directory_included = should_include(\n directory_path, exclude_patterns, include_patterns\n )\n return directory_included or len(possible_child_patterns) > 0\n\n\ndef get_paths(root, exclude_patterns, include_patterns, has_exceptions=False):\n paths = []\n\n for parent, dirs, files in os.walk(root, topdown=True, followlinks=False):\n parent = os.path.relpath(parent, root)\n if parent == '.':\n parent = ''\n\n # Remove excluded patterns from the list of directories to traverse\n # by mutating the dirs we're iterating over.\n # This looks strange, but is considered the correct way to skip\n # traversal. See https://docs.python.org/2/library/os.html#os.walk\n dirs[:] = [\n d for d in dirs if should_check_directory(\n os.path.join(parent, d), exclude_patterns, include_patterns\n )\n ]\n\n for path in dirs:\n if should_include(os.path.join(parent, path),\n exclude_patterns, include_patterns):\n paths.append(os.path.join(parent, path))\n\n for path in files:\n if should_include(os.path.join(parent, path),\n exclude_patterns, include_patterns):\n paths.append(os.path.join(parent, path))\n\n return paths\n\n\ndef match_path(path, pattern):\n pattern = pattern.rstrip('/' + os.path.sep)\n if pattern:\n pattern = os.path.relpath(pattern)\n\n pattern_components = pattern.split(os.path.sep)\n if len(pattern_components) == 1 and IS_WINDOWS_PLATFORM:\n pattern_components = pattern.split('/')\n\n if '**' not in pattern:\n path_components = path.split(os.path.sep)[:len(pattern_components)]\n else:\n path_components = path.split(os.path.sep)\n return fnmatch('/'.join(path_components), '/'.join(pattern_components))\n", "path": "docker/utils/build.py"}], "after_files": [{"content": "import os\n\nfrom ..constants import IS_WINDOWS_PLATFORM\nfrom .fnmatch import fnmatch\nfrom .utils import create_archive\n\n\ndef tar(path, exclude=None, dockerfile=None, fileobj=None, gzip=False):\n root = os.path.abspath(path)\n exclude = exclude or []\n\n return create_archive(\n files=sorted(exclude_paths(root, exclude, dockerfile=dockerfile)),\n root=root, fileobj=fileobj, gzip=gzip\n )\n\n\ndef exclude_paths(root, patterns, dockerfile=None):\n \"\"\"\n Given a root directory path and a list of .dockerignore patterns, return\n an iterator of all paths (both regular files and directories) in the root\n directory that do *not* match any of the patterns.\n\n All paths returned are relative to the root.\n \"\"\"\n if dockerfile is None:\n dockerfile = 'Dockerfile'\n\n patterns = [p.lstrip('/') for p in patterns]\n exceptions = [p for p in patterns if p.startswith('!')]\n\n include_patterns = [p[1:] for p in exceptions]\n include_patterns += [dockerfile, '.dockerignore']\n\n exclude_patterns = list(set(patterns) - set(exceptions))\n\n paths = get_paths(root, 
exclude_patterns, include_patterns,\n has_exceptions=len(exceptions) > 0)\n\n return set(paths).union(\n # If the Dockerfile is in a subdirectory that is excluded, get_paths\n # will not descend into it and the file will be skipped. This ensures\n # it doesn't happen.\n set([dockerfile.replace('/', os.path.sep)])\n if os.path.exists(os.path.join(root, dockerfile)) else set()\n )\n\n\ndef should_include(path, exclude_patterns, include_patterns):\n \"\"\"\n Given a path, a list of exclude patterns, and a list of inclusion patterns:\n\n 1. Returns True if the path doesn't match any exclusion pattern\n 2. Returns False if the path matches an exclusion pattern and doesn't match\n an inclusion pattern\n 3. Returns true if the path matches an exclusion pattern and matches an\n inclusion pattern\n \"\"\"\n for pattern in exclude_patterns:\n if match_path(path, pattern):\n for pattern in include_patterns:\n if match_path(path, pattern):\n return True\n return False\n return True\n\n\ndef should_check_directory(directory_path, exclude_patterns, include_patterns):\n \"\"\"\n Given a directory path, a list of exclude patterns, and a list of inclusion\n patterns:\n\n 1. Returns True if the directory path should be included according to\n should_include.\n 2. Returns True if the directory path is the prefix for an inclusion\n pattern\n 3. Returns False otherwise\n \"\"\"\n\n # To account for exception rules, check directories if their path is a\n # a prefix to an inclusion pattern. This logic conforms with the current\n # docker logic (2016-10-27):\n # https://github.com/docker/docker/blob/bc52939b0455116ab8e0da67869ec81c1a1c3e2c/pkg/archive/archive.go#L640-L671\n\n def normalize_path(path):\n return path.replace(os.path.sep, '/')\n\n path_with_slash = normalize_path(directory_path) + '/'\n possible_child_patterns = [\n pattern for pattern in map(normalize_path, include_patterns)\n if (pattern + '/').startswith(path_with_slash)\n ]\n directory_included = should_include(\n directory_path, exclude_patterns, include_patterns\n )\n return directory_included or len(possible_child_patterns) > 0\n\n\ndef get_paths(root, exclude_patterns, include_patterns, has_exceptions=False):\n paths = []\n\n for parent, dirs, files in os.walk(root, topdown=True, followlinks=False):\n parent = os.path.relpath(parent, root)\n if parent == '.':\n parent = ''\n\n # Remove excluded patterns from the list of directories to traverse\n # by mutating the dirs we're iterating over.\n # This looks strange, but is considered the correct way to skip\n # traversal. 
See https://docs.python.org/2/library/os.html#os.walk\n dirs[:] = [\n d for d in dirs if should_check_directory(\n os.path.join(parent, d), exclude_patterns, include_patterns\n )\n ]\n\n for path in dirs:\n if should_include(os.path.join(parent, path),\n exclude_patterns, include_patterns):\n paths.append(os.path.join(parent, path))\n\n for path in files:\n if should_include(os.path.join(parent, path),\n exclude_patterns, include_patterns):\n paths.append(os.path.join(parent, path))\n\n return paths\n\n\ndef match_path(path, pattern):\n pattern = pattern.rstrip('/' + os.path.sep)\n if pattern:\n pattern = os.path.relpath(pattern)\n\n pattern_components = pattern.split(os.path.sep)\n if len(pattern_components) == 1 and IS_WINDOWS_PLATFORM:\n pattern_components = pattern.split('/')\n\n if '**' not in pattern:\n path_components = path.split(os.path.sep)[:len(pattern_components)]\n else:\n path_components = path.split(os.path.sep)\n return fnmatch('/'.join(path_components), '/'.join(pattern_components))\n", "path": "docker/utils/build.py"}]}
| 1,933 | 101 |
gh_patches_debug_16086
|
rasdani/github-patches
|
git_diff
|
nvaccess__nvda-13382
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Recent alpha versions crash Notepad++ versions 8.2.x and below
<!-- Please read the text in this edit field before filling it in.
Please thoroughly read NVDA's wiki article on how to fill in this template, including how to provide the required files.
Issues may be closed if the required information is not present.
https://github.com/nvaccess/nvda/blob/master/devDocs/githubIssueTemplateExplanationAndExamples.md
Please also note that the NVDA project has a Citizen and Contributor Code of Conduct which can be found at https://github.com/nvaccess/nvda/blob/master/CODE_OF_CONDUCT.md. NV Access expects that all contributors and other community members read and abide by the rules set out in this document while participating or contributing to this project. This includes creating or commenting on issues and pull requests.
Each of the questions and sections below start with multiple hash symbols (#). Place your answers and information on the blank line below each question.
-->
### Steps to reproduce:
1. Use NVDA alpha version nvda_snapshot_alpha-24771,a1a6598b.exe or above.
2. Open Notepad++ v8.1.x or v8.2.x 64-bit.
3. Press up/down a few times.
### Actual behavior:
Notepad++ crashes. No error message is printed in the NVDA log.
### Expected behavior:
Notepad++ should not crash.
### System configuration
#### NVDA installed/portable/running from source:
Portable alpha nvda_snapshot_alpha-24771,a1a6598b.exe
#### NVDA version:
nvda_snapshot_alpha-24771,a1a6598b.exe
#### Windows version:
Reproduced on both Windows 10 and 11 64-bit.
#### Name and version of other software in use when reproducing the issue:
Reproduced using either of these two versions (64-bit in both cases):
- Notepad++ v8.1.9.3: https://github.com/notepad-plus-plus/notepad-plus-plus/releases/download/v8.1.9.3/npp.8.1.9.3.portable.x64.zip
- Notepad++ v8.2.1: https://github.com/notepad-plus-plus/notepad-plus-plus/releases/download/v8.2.1/npp.8.2.1.portable.x64.zip
#### Other information about your system:
N/A
### Other questions
#### Does the issue still occur after restarting your computer?
Yes
#### Have you tried any other versions of NVDA? If so, please report their behaviors.
Identified that commit a1a6598b19f71bd108bef64373dd35c89c14cd4a is causing this, therefore cc: @lukaszgo1.
#### If NVDA add-ons are disabled, is your problem still occurring?
Yes
#### Does the issue still occur after you run the COM Registration Fixing Tool in NVDA's tools menu?
Yes
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `source/appModules/notepadPlusPlus.py`
Content:
```
1 # A part of NonVisual Desktop Access (NVDA)
2 # Copyright (C) 2022 NV Access Limited, Łukasz Golonka
3 # This file may be used under the terms of the GNU General Public License, version 2 or later.
4 # For more details see: https://www.gnu.org/licenses/gpl-2.0.html
5
6 """AppModule for Notepad++.
7 Do not rename! The executable file for Notepad++ is named `notepad++` and `+` is not a valid character
8 in Python's import statements.
9 This module is mapped to the right binary separately
10 and the current name makes it possible to expose it from `nvdaBuiltin` for add-on developers.
11 """
12
13 import ctypes
14
15 import appModuleHandler
16 import NVDAObjects.window.scintilla as ScintillaBase
17
18
19 class CharacterRangeStructLongLong(ctypes.Structure):
20 """By default character ranges in Scintilla are represented by longs.
21 However long is not big enough for files over 2 GB,
22 therefore in 64-bit builds of Notepad++ 8.3 and later
23 these ranges are represented by longlong.
24 """
25 _fields_ = [
26 ('cpMin', ctypes.c_longlong),
27 ('cpMax', ctypes.c_longlong),
28 ]
29
30
31 class ScintillaTextInfoNpp83(ScintillaBase.ScintillaTextInfo):
32 """Text info for 64-bit builds of Notepad++ 8.3 and later.
33 """
34
35 class TextRangeStruct(ctypes.Structure):
36 _fields_ = [
37 ('chrg', CharacterRangeStructLongLong),
38 ('lpstrText', ctypes.c_char_p),
39 ]
40
41
42 class NppEdit(ScintillaBase.Scintilla):
43
44 name = None # The name of the editor is not useful.
45
46 def _get_TextInfo(self):
47 if self.appModule.is64BitProcess:
48 appVerMajor, appVerMinor, *__ = self.appModule.productVersion.split(".")
49 if int(appVerMajor) >= 8 and int(appVerMinor) >= 3:
50 return ScintillaTextInfoNpp83
51 return super().TextInfo
52
53
54 class AppModule(appModuleHandler.AppModule):
55
56 def chooseNVDAObjectOverlayClasses(self, obj, clsList):
57 if obj.windowClassName == "Scintilla" and obj.windowControlID == 0:
58 clsList.insert(0, NppEdit)
59
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/source/appModules/notepadPlusPlus.py b/source/appModules/notepadPlusPlus.py
--- a/source/appModules/notepadPlusPlus.py
+++ b/source/appModules/notepadPlusPlus.py
@@ -46,7 +46,12 @@
def _get_TextInfo(self):
if self.appModule.is64BitProcess:
appVerMajor, appVerMinor, *__ = self.appModule.productVersion.split(".")
- if int(appVerMajor) >= 8 and int(appVerMinor) >= 3:
+ # When retrieving the version, Notepad++ concatenates
+ # minor, patch, build in major.minor.patch.build to the form of major.minor
+ # https://github.com/notepad-plus-plus/npp-usermanual/blob/master/content/docs/plugin-communication.md#nppm_getnppversion
+ # e.g. '8.3' for '8.3', '8.21' for '8.2.1' and '8.192' for '8.1.9.2'.
+ # Therefore, only use the first digit of the minor version to match against version 8.3 or later.
+ if int(appVerMajor) >= 8 and int(appVerMinor[0]) >= 3:
return ScintillaTextInfoNpp83
return super().TextInfo
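As a side note, a small standalone sketch of the version check the patch relies on (a hypothetical helper, not NVDA code): because Notepad++ concatenates the minor, patch and build digits into the reported minor component, only its first digit is meaningful when testing for 8.3 or later.

```python
# Hypothetical stand-alone helper, not NVDA code: mirrors the patched check.
def needs_longlong_textinfo(product_version: str, is_64_bit: bool = True) -> bool:
    """Return True when a 64-bit Notepad++ build reports version 8.3 or later."""
    if not is_64_bit:
        return False
    app_ver_major, app_ver_minor, *_ = product_version.split(".")
    # "8.21" means 8.2.1 and "8.192" means 8.1.9.2, so only the first digit
    # of the minor component is compared against 3.
    return int(app_ver_major) >= 8 and int(app_ver_minor[0]) >= 3

for reported in ("8.21", "8.192", "8.3", "8.31"):
    print(reported, "->", needs_longlong_textinfo(reported))
# 8.21 -> False, 8.192 -> False, 8.3 -> True, 8.31 -> True
```

Versions below 8.3 therefore fall back to the default Scintilla text info, which is what stops the crash on 8.1.x and 8.2.x builds.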
|
{"golden_diff": "diff --git a/source/appModules/notepadPlusPlus.py b/source/appModules/notepadPlusPlus.py\n--- a/source/appModules/notepadPlusPlus.py\n+++ b/source/appModules/notepadPlusPlus.py\n@@ -46,7 +46,12 @@\n \tdef _get_TextInfo(self):\r\n \t\tif self.appModule.is64BitProcess:\r\n \t\t\tappVerMajor, appVerMinor, *__ = self.appModule.productVersion.split(\".\")\r\n-\t\t\tif int(appVerMajor) >= 8 and int(appVerMinor) >= 3:\r\n+\t\t\t# When retrieving the version, Notepad++ concatenates\r\n+\t\t\t# minor, patch, build in major.minor.patch.build to the form of major.minor\r\n+\t\t\t# https://github.com/notepad-plus-plus/npp-usermanual/blob/master/content/docs/plugin-communication.md#nppm_getnppversion\r\n+\t\t\t# e.g. '8.3' for '8.3', '8.21' for '8.2.1' and '8.192' for '8.1.9.2'.\r\n+\t\t\t# Therefore, only use the first digit of the minor version to match against version 8.3 or later.\r\n+\t\t\tif int(appVerMajor) >= 8 and int(appVerMinor[0]) >= 3:\r\n \t\t\t\treturn ScintillaTextInfoNpp83\r\n \t\treturn super().TextInfo\n", "issue": "Recent alpha versions crash Notepad++ versions 8.2.x and below\n<!-- Please read the text in this edit field before filling it in.\r\nPlease thoroughly read NVDA's wiki article on how to fill in this template, including how to provide the required files.\r\nIssues may be closed if the required information is not present.\r\nhttps://github.com/nvaccess/nvda/blob/master/devDocs/githubIssueTemplateExplanationAndExamples.md\r\nPlease also note that the NVDA project has a Citizen and Contributor Code of Conduct which can be found at https://github.com/nvaccess/nvda/blob/master/CODE_OF_CONDUCT.md. NV Access expects that all contributors and other community members read and abide by the rules set out in this document while participating or contributing to this project. This includes creating or commenting on issues and pull requests. \r\n\r\nEach of the questions and sections below start with multiple hash symbols (#). Place your answers and information on the blank line below each question.\r\n-->\r\n\r\n### Steps to reproduce:\r\n1. Use NVDA alpha version nvda_snapshot_alpha-24771,a1a6598b.exe or above.\r\n2. Open Notepad++ v8.1.x or v8.2.x 64-bit.\r\n3.Press up/down a few times.\r\n\r\n### Actual behavior:\r\nNotepad++ crashes. No error message is printed in NVDA log.\r\n### Expected behavior:\r\nNotepad++ should not crash.\r\n### System configuration\r\n#### NVDA installed/portable/running from source:\r\nPortable alpha nvda_snapshot_alpha-24771,a1a6598b.exe\r\n#### NVDA version:\r\nnvda_snapshot_alpha-24771,a1a6598b.exe\r\n\r\n#### Windows version:\r\nReproduced on both Windows 10 and 11 64-bit.\r\n#### Name and version of other software in use when reproducing the issue:\r\nReproduced using either of these two versions (64-bit in both cases):\r\n- Notepad++ v8.1.9.3: https://github.com/notepad-plus-plus/notepad-plus-plus/releases/download/v8.1.9.3/npp.8.1.9.3.portable.x64.zip\r\n- Notepad++ v8.2.1: https://github.com/notepad-plus-plus/notepad-plus-plus/releases/download/v8.2.1/npp.8.2.1.portable.x64.zip\r\n#### Other information about your system:\r\nN/A\r\n### Other questions\r\n#### Does the issue still occur after restarting your computer?\r\nYes\r\n#### Have you tried any other versions of NVDA? 
If so, please report their behaviors.\r\nIdentified that commit a1a6598b19f71bd108bef64373dd35c89c14cd4a is causing this, therefore cc: @lukaszgo1.\r\n#### If NVDA add-ons are disabled, is your problem still occurring?\r\nYes\r\n#### Does the issue still occur after you run the COM Registration Fixing Tool in NVDA's tools menu?\r\nYes\n", "before_files": [{"content": "# A part of NonVisual Desktop Access (NVDA)\r\n# Copyright (C) 2022 NV Access Limited, \u0141ukasz Golonka\r\n# This file may be used under the terms of the GNU General Public License, version 2 or later.\r\n# For more details see: https://www.gnu.org/licenses/gpl-2.0.html\r\n\r\n\"\"\"AppModule for Notepad++.\r\nDo not rename! The executable file for Notepad++ is named `notepad++` and `+` is not a valid character\r\nin Python's import statements.\r\nThis module is mapped to the right binary separately\r\nand the current name makes it possible to expose it from `nvdaBuiltin` for add-on developers.\r\n\"\"\"\r\n\r\nimport ctypes\r\n\r\nimport appModuleHandler\r\nimport NVDAObjects.window.scintilla as ScintillaBase\r\n\r\n\r\nclass CharacterRangeStructLongLong(ctypes.Structure):\r\n\t\"\"\"By default character ranges in Scintilla are represented by longs.\r\n\tHowever long is not big enough for files over 2 GB,\r\n\ttherefore in 64-bit builds of Notepad++ 8.3 and later\r\n\tthese ranges are represented by longlong.\r\n\t\"\"\"\r\n\t_fields_ = [\r\n\t\t('cpMin', ctypes.c_longlong),\r\n\t\t('cpMax', ctypes.c_longlong),\r\n\t]\r\n\r\n\r\nclass ScintillaTextInfoNpp83(ScintillaBase.ScintillaTextInfo):\r\n\t\"\"\"Text info for 64-bit builds of Notepad++ 8.3 and later.\r\n\t\"\"\"\r\n\r\n\tclass TextRangeStruct(ctypes.Structure):\r\n\t\t_fields_ = [\r\n\t\t\t('chrg', CharacterRangeStructLongLong),\r\n\t\t\t('lpstrText', ctypes.c_char_p),\r\n\t\t]\r\n\r\n\r\nclass NppEdit(ScintillaBase.Scintilla):\r\n\r\n\tname = None # The name of the editor is not useful.\r\n\r\n\tdef _get_TextInfo(self):\r\n\t\tif self.appModule.is64BitProcess:\r\n\t\t\tappVerMajor, appVerMinor, *__ = self.appModule.productVersion.split(\".\")\r\n\t\t\tif int(appVerMajor) >= 8 and int(appVerMinor) >= 3:\r\n\t\t\t\treturn ScintillaTextInfoNpp83\r\n\t\treturn super().TextInfo\r\n\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\tdef chooseNVDAObjectOverlayClasses(self, obj, clsList):\r\n\t\tif obj.windowClassName == \"Scintilla\" and obj.windowControlID == 0:\r\n\t\t\tclsList.insert(0, NppEdit)\r\n", "path": "source/appModules/notepadPlusPlus.py"}], "after_files": [{"content": "# A part of NonVisual Desktop Access (NVDA)\r\n# Copyright (C) 2022 NV Access Limited, \u0141ukasz Golonka\r\n# This file may be used under the terms of the GNU General Public License, version 2 or later.\r\n# For more details see: https://www.gnu.org/licenses/gpl-2.0.html\r\n\r\n\"\"\"AppModule for Notepad++.\r\nDo not rename! 
The executable file for Notepad++ is named `notepad++` and `+` is not a valid character\r\nin Python's import statements.\r\nThis module is mapped to the right binary separately\r\nand the current name makes it possible to expose it from `nvdaBuiltin` for add-on developers.\r\n\"\"\"\r\n\r\nimport ctypes\r\n\r\nimport appModuleHandler\r\nimport NVDAObjects.window.scintilla as ScintillaBase\r\n\r\n\r\nclass CharacterRangeStructLongLong(ctypes.Structure):\r\n\t\"\"\"By default character ranges in Scintilla are represented by longs.\r\n\tHowever long is not big enough for files over 2 GB,\r\n\ttherefore in 64-bit builds of Notepad++ 8.3 and later\r\n\tthese ranges are represented by longlong.\r\n\t\"\"\"\r\n\t_fields_ = [\r\n\t\t('cpMin', ctypes.c_longlong),\r\n\t\t('cpMax', ctypes.c_longlong),\r\n\t]\r\n\r\n\r\nclass ScintillaTextInfoNpp83(ScintillaBase.ScintillaTextInfo):\r\n\t\"\"\"Text info for 64-bit builds of Notepad++ 8.3 and later.\r\n\t\"\"\"\r\n\r\n\tclass TextRangeStruct(ctypes.Structure):\r\n\t\t_fields_ = [\r\n\t\t\t('chrg', CharacterRangeStructLongLong),\r\n\t\t\t('lpstrText', ctypes.c_char_p),\r\n\t\t]\r\n\r\n\r\nclass NppEdit(ScintillaBase.Scintilla):\r\n\r\n\tname = None # The name of the editor is not useful.\r\n\r\n\tdef _get_TextInfo(self):\r\n\t\tif self.appModule.is64BitProcess:\r\n\t\t\tappVerMajor, appVerMinor, *__ = self.appModule.productVersion.split(\".\")\r\n\t\t\t# When retrieving the version, Notepad++ concatenates\r\n\t\t\t# minor, patch, build in major.minor.patch.build to the form of major.minor\r\n\t\t\t# https://github.com/notepad-plus-plus/npp-usermanual/blob/master/content/docs/plugin-communication.md#nppm_getnppversion\r\n\t\t\t# e.g. '8.3' for '8.3', '8.21' for '8.2.1' and '8.192' for '8.1.9.2'.\r\n\t\t\t# Therefore, only use the first digit of the minor version to match against version 8.3 or later.\r\n\t\t\tif int(appVerMajor) >= 8 and int(appVerMinor[0]) >= 3:\r\n\t\t\t\treturn ScintillaTextInfoNpp83\r\n\t\treturn super().TextInfo\r\n\r\n\r\nclass AppModule(appModuleHandler.AppModule):\r\n\r\n\tdef chooseNVDAObjectOverlayClasses(self, obj, clsList):\r\n\t\tif obj.windowClassName == \"Scintilla\" and obj.windowControlID == 0:\r\n\t\t\tclsList.insert(0, NppEdit)\r\n", "path": "source/appModules/notepadPlusPlus.py"}]}
| 1,561 | 306 |
gh_patches_debug_29842
|
rasdani/github-patches
|
git_diff
|
adap__flower-1024
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Secure gRPC: Logging still reporing insecure connection
### Discussed in https://github.com/adap/flower/discussions/1020
<div type='discussions-op-text'>
<sup>Originally posted by **NathanSchot** January 21, 2022</sup>
Hey,
My team and I have enjoyed working with flower for a few months now. We are very excited with the recent implementation of secure gRPC connections, as security is one of our main focus points.
I realise I may be a bit early (and potentially nitpicky) with this question as secure gRPC is still under construction, but we noticed that the logging of the FL process does not take the existence of root certificates into account, and therefore prints incorrect information. The two lines where this happens are:
https://github.com/adap/flower/blob/cb66854dcbf2e0c5bf14ee48149053a84be8a3be/src/py/flwr/client/app.py#L89
and
https://github.com/adap/flower/blob/cb66854dcbf2e0c5bf14ee48149053a84be8a3be/src/py/flwr/client/grpc_client/connection.py#L114
Thanks in advance.</div>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/py/flwr/client/app.py`
Content:
```
1 # Copyright 2020 Adap GmbH. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Flower client app."""
16
17
18 import time
19 from logging import INFO
20 from typing import Optional
21
22 from flwr.common import GRPC_MAX_MESSAGE_LENGTH
23 from flwr.common.logger import log
24
25 from .client import Client
26 from .grpc_client.connection import grpc_connection
27 from .grpc_client.message_handler import handle
28 from .keras_client import KerasClient, KerasClientWrapper
29 from .numpy_client import NumPyClient, NumPyClientWrapper
30
31
32 def start_client(
33 server_address: str,
34 client: Client,
35 grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,
36 root_certificates: Optional[bytes] = None,
37 ) -> None:
38 """Start a Flower Client which connects to a gRPC server.
39
40 Parameters
41 ----------
42 server_address: str. The IPv6 address of the server. If the Flower
43 server runs on the same machine on port 8080, then `server_address`
44 would be `"[::]:8080"`.
45 client: flwr.client.Client. An implementation of the abstract base
46 class `flwr.client.Client`.
47 grpc_max_message_length: int (default: 536_870_912, this equals 512MB).
48 The maximum length of gRPC messages that can be exchanged with the
49 Flower server. The default should be sufficient for most models.
50 Users who train very large models might need to increase this
51 value. Note that the Flower server needs to be started with the
52 same value (see `flwr.server.start_server`), otherwise it will not
53 know about the increased limit and block larger messages.
54 root_certificates: bytes (default: None)
55 The PEM-encoded root certificates as a byte string. If provided, a secure
56 connection using the certificates will be established to a
57 SSL-enabled Flower server.
58
59 Returns
60 -------
61 None
62
63 Examples
64 --------
65 Starting a client with insecure server connection:
66
67 >>> start_client(
68 >>> server_address=localhost:8080,
69 >>> client=FlowerClient(),
70 >>> )
71
72 Starting a SSL-enabled client:
73
74 >>> from pathlib import Path
75 >>> start_client(
76 >>> server_address=localhost:8080,
77 >>> client=FlowerClient(),
78 >>> root_certificates=Path("/crts/root.pem").read_bytes(),
79 >>> )
80 """
81 while True:
82 sleep_duration: int = 0
83 with grpc_connection(
84 server_address,
85 max_message_length=grpc_max_message_length,
86 root_certificates=root_certificates,
87 ) as conn:
88 receive, send = conn
89 log(INFO, "Opened (insecure) gRPC connection")
90
91 while True:
92 server_message = receive()
93 client_message, sleep_duration, keep_going = handle(
94 client, server_message
95 )
96 send(client_message)
97 if not keep_going:
98 break
99 if sleep_duration == 0:
100 log(INFO, "Disconnect and shut down")
101 break
102 # Sleep and reconnect afterwards
103 log(
104 INFO,
105 "Disconnect, then re-establish connection after %s second(s)",
106 sleep_duration,
107 )
108 time.sleep(sleep_duration)
109
110
111 def start_numpy_client(
112 server_address: str,
113 client: NumPyClient,
114 grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,
115 root_certificates: Optional[bytes] = None,
116 ) -> None:
117 """Start a Flower NumPyClient which connects to a gRPC server.
118
119 Parameters
120 ----------
121 server_address: str. The IPv6 address of the server. If the Flower
122 server runs on the same machine on port 8080, then `server_address`
123 would be `"[::]:8080"`.
124 client: flwr.client.NumPyClient. An implementation of the abstract base
125 class `flwr.client.NumPyClient`.
126 grpc_max_message_length: int (default: 536_870_912, this equals 512MB).
127 The maximum length of gRPC messages that can be exchanged with the
128 Flower server. The default should be sufficient for most models.
129 Users who train very large models might need to increase this
130 value. Note that the Flower server needs to be started with the
131 same value (see `flwr.server.start_server`), otherwise it will not
132 know about the increased limit and block larger messages.
133 root_certificates: bytes (default: None)
134 The PEM-encoded root certificates a byte string. If provided, a secure
135 connection using the certificates will be established to a
136 SSL-enabled Flower server.
137
138 Returns
139 -------
140 None
141
142 Examples
143 --------
144 Starting a client with an insecure server connection:
145
146 >>> start_client(
147 >>> server_address=localhost:8080,
148 >>> client=FlowerClient(),
149 >>> )
150
151 Starting a SSL-enabled client:
152
153 >>> from pathlib import Path
154 >>> start_client(
155 >>> server_address=localhost:8080,
156 >>> client=FlowerClient(),
157 >>> root_certificates=Path("/crts/root.pem").read_bytes(),
158 >>> )
159 """
160
161 # Wrap the NumPyClient
162 flower_client = NumPyClientWrapper(client)
163
164 # Start
165 start_client(
166 server_address=server_address,
167 client=flower_client,
168 grpc_max_message_length=grpc_max_message_length,
169 root_certificates=root_certificates,
170 )
171
172
173 def start_keras_client(
174 server_address: str,
175 client: KerasClient,
176 grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,
177 ) -> None:
178 """Start a Flower KerasClient which connects to a gRPC server.
179
180 Arguments:
181 server_address: str. The IPv6 address of the server. If the Flower
182 server runs on the same machine on port 8080, then `server_address`
183 would be `"[::]:8080"`.
184 client: flwr.client.KerasClient. An implementation of the abstract base
185 class `flwr.client.KerasClient`.
186 grpc_max_message_length: int (default: 536_870_912, this equals 512MB).
187 The maximum length of gRPC messages that can be exchanged with the
188 Flower server. The default should be sufficient for most models.
189 Users who train very large models might need to increase this
190 value. Note that the Flower server needs to be started with the
191 same value (see `flwr.server.start_server`), otherwise it will not
192 know about the increased limit and block larger messages.
193
194 Returns:
195 None.
196 """
197
198 # Deprecation warning
199 warning = """
200 DEPRECATION WARNING: KerasClient is deprecated, migrate to NumPyClient.
201
202 KerasClient will be removed in a future release, please migrate to either
203 NumPyClient (recommended) or Client. NumPyClient is recommended because it
204 is conceptually very similar to KerasClient.
205 """
206 print(warning)
207
208 # Wrap the Keras client
209 flower_client = KerasClientWrapper(client)
210
211 # Start
212 start_client(
213 server_address=server_address,
214 client=flower_client,
215 grpc_max_message_length=grpc_max_message_length,
216 )
217
```
Path: `src/py/flwr/client/grpc_client/connection.py`
Content:
```
1 # Copyright 2020 Adap GmbH. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Provides contextmanager which manages a gRPC channel to connect to the
16 server."""
17 from contextlib import contextmanager
18 from logging import DEBUG
19 from queue import Queue
20 from typing import Callable, Iterator, Optional, Tuple
21
22 import grpc
23
24 from flwr.common import GRPC_MAX_MESSAGE_LENGTH
25 from flwr.common.logger import log
26 from flwr.proto.transport_pb2 import ClientMessage, ServerMessage
27 from flwr.proto.transport_pb2_grpc import FlowerServiceStub
28
29 # Uncomment these flags in case you are debugging
30 # os.environ["GRPC_VERBOSITY"] = "debug"
31 # os.environ["GRPC_TRACE"] = "connectivity_state"
32
33
34 def on_channel_state_change(channel_connectivity: str) -> None:
35 """Log channel connectivity."""
36 log(DEBUG, channel_connectivity)
37
38
39 @contextmanager
40 def grpc_connection(
41 server_address: str,
42 max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,
43 root_certificates: Optional[bytes] = None,
44 ) -> Iterator[Tuple[Callable[[], ServerMessage], Callable[[ClientMessage], None]]]:
45 """Establish an insecure gRPC connection to a gRPC server.
46
47 Parameters
48 ----------
49 server_address : str
50 The IPv6 address of the server. If the Flower server runs on the same machine
51 on port 8080, then `server_address` would be `"[::]:8080"`.
52 grpc_max_message_length : int
53 The maximum length of gRPC messages that can be exchanged with the Flower
54 server. The default should be sufficient for most models. Users who train
55 very large models might need to increase this value. Note that the Flower
56 server needs to be started with the same value
57 (see `flwr.server.start_server`), otherwise it will not know about the
58 increased limit and block larger messages.
59 (default: 536_870_912, this equals 512MB)
60 root_certificates : Optional[bytes] (default: None)
61 The PEM-encoded root certificates as a byte string. If provided, a secure
62 connection using the certificates will be established to a SSL-enabled
63 Flower server.
64
65 Returns
66 -------
67 receive, send : Callable, Callable
68
69 Examples
70 --------
71 Establishing a SSL-enabled connection to the server:
72
73 >>> from pathlib import Path
74 >>> with grpc_connection(
75 >>> server_address,
76 >>> max_message_length=grpc_max_message_length,
77 >>> root_certificates=Path("/crts/root.pem").read_bytes(),
78 >>> ) as conn:
79 >>> receive, send = conn
80 >>> server_message = receive()
81 >>> # do something here
82 >>> send(client_message)
83 """
84 channel_options = [
85 ("grpc.max_send_message_length", max_message_length),
86 ("grpc.max_receive_message_length", max_message_length),
87 ]
88
89 if root_certificates is not None:
90 ssl_channel_credentials = grpc.ssl_channel_credentials(root_certificates)
91 channel = grpc.secure_channel(
92 server_address, ssl_channel_credentials, options=channel_options
93 )
94 else:
95 channel = grpc.insecure_channel(server_address, options=channel_options)
96
97 channel.subscribe(on_channel_state_change)
98
99 queue: Queue[ClientMessage] = Queue( # pylint: disable=unsubscriptable-object
100 maxsize=1
101 )
102 stub = FlowerServiceStub(channel)
103
104 server_message_iterator: Iterator[ServerMessage] = stub.Join(iter(queue.get, None))
105
106 receive: Callable[[], ServerMessage] = lambda: next(server_message_iterator)
107 send: Callable[[ClientMessage], None] = lambda msg: queue.put(msg, block=False)
108
109 try:
110 yield (receive, send)
111 finally:
112 # Make sure to have a final
113 channel.close()
114 log(DEBUG, "Insecure gRPC channel closed")
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/py/flwr/client/app.py b/src/py/flwr/client/app.py
--- a/src/py/flwr/client/app.py
+++ b/src/py/flwr/client/app.py
@@ -86,7 +86,6 @@
root_certificates=root_certificates,
) as conn:
receive, send = conn
- log(INFO, "Opened (insecure) gRPC connection")
while True:
server_message = receive()
diff --git a/src/py/flwr/client/grpc_client/connection.py b/src/py/flwr/client/grpc_client/connection.py
--- a/src/py/flwr/client/grpc_client/connection.py
+++ b/src/py/flwr/client/grpc_client/connection.py
@@ -15,7 +15,7 @@
"""Provides contextmanager which manages a gRPC channel to connect to the
server."""
from contextlib import contextmanager
-from logging import DEBUG
+from logging import DEBUG, INFO
from queue import Queue
from typing import Callable, Iterator, Optional, Tuple
@@ -91,8 +91,10 @@
channel = grpc.secure_channel(
server_address, ssl_channel_credentials, options=channel_options
)
+ log(INFO, "Opened secure gRPC connection using certificates")
else:
channel = grpc.insecure_channel(server_address, options=channel_options)
+ log(INFO, "Opened insecure gRPC connection (no certificates were passed)")
channel.subscribe(on_channel_state_change)
@@ -111,4 +113,4 @@
finally:
# Make sure to have a final
channel.close()
- log(DEBUG, "Insecure gRPC channel closed")
+ log(DEBUG, "gRPC channel closed")
|
{"golden_diff": "diff --git a/src/py/flwr/client/app.py b/src/py/flwr/client/app.py\n--- a/src/py/flwr/client/app.py\n+++ b/src/py/flwr/client/app.py\n@@ -86,7 +86,6 @@\n root_certificates=root_certificates,\n ) as conn:\n receive, send = conn\n- log(INFO, \"Opened (insecure) gRPC connection\")\n \n while True:\n server_message = receive()\ndiff --git a/src/py/flwr/client/grpc_client/connection.py b/src/py/flwr/client/grpc_client/connection.py\n--- a/src/py/flwr/client/grpc_client/connection.py\n+++ b/src/py/flwr/client/grpc_client/connection.py\n@@ -15,7 +15,7 @@\n \"\"\"Provides contextmanager which manages a gRPC channel to connect to the\n server.\"\"\"\n from contextlib import contextmanager\n-from logging import DEBUG\n+from logging import DEBUG, INFO\n from queue import Queue\n from typing import Callable, Iterator, Optional, Tuple\n \n@@ -91,8 +91,10 @@\n channel = grpc.secure_channel(\n server_address, ssl_channel_credentials, options=channel_options\n )\n+ log(INFO, \"Opened secure gRPC connection using certificates\")\n else:\n channel = grpc.insecure_channel(server_address, options=channel_options)\n+ log(INFO, \"Opened insecure gRPC connection (no certificates were passed)\")\n \n channel.subscribe(on_channel_state_change)\n \n@@ -111,4 +113,4 @@\n finally:\n # Make sure to have a final\n channel.close()\n- log(DEBUG, \"Insecure gRPC channel closed\")\n+ log(DEBUG, \"gRPC channel closed\")\n", "issue": "Secure gRPC: Logging still reporing insecure connection\n### Discussed in https://github.com/adap/flower/discussions/1020\r\n\r\n<div type='discussions-op-text'>\r\n\r\n<sup>Originally posted by **NathanSchot** January 21, 2022</sup>\r\nHey,\r\n\r\nMy team and I have enjoyed working with flower for a few months now. We are very excited with the recent implementation of secure gRPC connections, as security is one of our main focus points.\r\n\r\nI realise I may be a bit early (and potentially nitpicky) with this question as secure gRPC is still under construction, but we noticed that the logging of the FL process does not take the existence of root certificates in account, therefore printing incorrect information. The two lines where this happens are:\r\n\r\nhttps://github.com/adap/flower/blob/cb66854dcbf2e0c5bf14ee48149053a84be8a3be/src/py/flwr/client/app.py#L89\r\nand \r\nhttps://github.com/adap/flower/blob/cb66854dcbf2e0c5bf14ee48149053a84be8a3be/src/py/flwr/client/grpc_client/connection.py#L114\r\n\r\nThanks in advance.</div>\n", "before_files": [{"content": "# Copyright 2020 Adap GmbH. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Flower client app.\"\"\"\n\n\nimport time\nfrom logging import INFO\nfrom typing import Optional\n\nfrom flwr.common import GRPC_MAX_MESSAGE_LENGTH\nfrom flwr.common.logger import log\n\nfrom .client import Client\nfrom .grpc_client.connection import grpc_connection\nfrom .grpc_client.message_handler import handle\nfrom .keras_client import KerasClient, KerasClientWrapper\nfrom .numpy_client import NumPyClient, NumPyClientWrapper\n\n\ndef start_client(\n server_address: str,\n client: Client,\n grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n root_certificates: Optional[bytes] = None,\n) -> None:\n \"\"\"Start a Flower Client which connects to a gRPC server.\n\n Parameters\n ----------\n server_address: str. The IPv6 address of the server. If the Flower\n server runs on the same machine on port 8080, then `server_address`\n would be `\"[::]:8080\"`.\n client: flwr.client.Client. An implementation of the abstract base\n class `flwr.client.Client`.\n grpc_max_message_length: int (default: 536_870_912, this equals 512MB).\n The maximum length of gRPC messages that can be exchanged with the\n Flower server. The default should be sufficient for most models.\n Users who train very large models might need to increase this\n value. Note that the Flower server needs to be started with the\n same value (see `flwr.server.start_server`), otherwise it will not\n know about the increased limit and block larger messages.\n root_certificates: bytes (default: None)\n The PEM-encoded root certificates as a byte string. 
If provided, a secure\n connection using the certificates will be established to a\n SSL-enabled Flower server.\n\n Returns\n -------\n None\n\n Examples\n --------\n Starting a client with insecure server connection:\n\n >>> start_client(\n >>> server_address=localhost:8080,\n >>> client=FlowerClient(),\n >>> )\n\n Starting a SSL-enabled client:\n\n >>> from pathlib import Path\n >>> start_client(\n >>> server_address=localhost:8080,\n >>> client=FlowerClient(),\n >>> root_certificates=Path(\"/crts/root.pem\").read_bytes(),\n >>> )\n \"\"\"\n while True:\n sleep_duration: int = 0\n with grpc_connection(\n server_address,\n max_message_length=grpc_max_message_length,\n root_certificates=root_certificates,\n ) as conn:\n receive, send = conn\n log(INFO, \"Opened (insecure) gRPC connection\")\n\n while True:\n server_message = receive()\n client_message, sleep_duration, keep_going = handle(\n client, server_message\n )\n send(client_message)\n if not keep_going:\n break\n if sleep_duration == 0:\n log(INFO, \"Disconnect and shut down\")\n break\n # Sleep and reconnect afterwards\n log(\n INFO,\n \"Disconnect, then re-establish connection after %s second(s)\",\n sleep_duration,\n )\n time.sleep(sleep_duration)\n\n\ndef start_numpy_client(\n server_address: str,\n client: NumPyClient,\n grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n root_certificates: Optional[bytes] = None,\n) -> None:\n \"\"\"Start a Flower NumPyClient which connects to a gRPC server.\n\n Parameters\n ----------\n server_address: str. The IPv6 address of the server. If the Flower\n server runs on the same machine on port 8080, then `server_address`\n would be `\"[::]:8080\"`.\n client: flwr.client.NumPyClient. An implementation of the abstract base\n class `flwr.client.NumPyClient`.\n grpc_max_message_length: int (default: 536_870_912, this equals 512MB).\n The maximum length of gRPC messages that can be exchanged with the\n Flower server. The default should be sufficient for most models.\n Users who train very large models might need to increase this\n value. Note that the Flower server needs to be started with the\n same value (see `flwr.server.start_server`), otherwise it will not\n know about the increased limit and block larger messages.\n root_certificates: bytes (default: None)\n The PEM-encoded root certificates a byte string. If provided, a secure\n connection using the certificates will be established to a\n SSL-enabled Flower server.\n\n Returns\n -------\n None\n\n Examples\n --------\n Starting a client with an insecure server connection:\n\n >>> start_client(\n >>> server_address=localhost:8080,\n >>> client=FlowerClient(),\n >>> )\n\n Starting a SSL-enabled client:\n\n >>> from pathlib import Path\n >>> start_client(\n >>> server_address=localhost:8080,\n >>> client=FlowerClient(),\n >>> root_certificates=Path(\"/crts/root.pem\").read_bytes(),\n >>> )\n \"\"\"\n\n # Wrap the NumPyClient\n flower_client = NumPyClientWrapper(client)\n\n # Start\n start_client(\n server_address=server_address,\n client=flower_client,\n grpc_max_message_length=grpc_max_message_length,\n root_certificates=root_certificates,\n )\n\n\ndef start_keras_client(\n server_address: str,\n client: KerasClient,\n grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n) -> None:\n \"\"\"Start a Flower KerasClient which connects to a gRPC server.\n\n Arguments:\n server_address: str. The IPv6 address of the server. 
If the Flower\n server runs on the same machine on port 8080, then `server_address`\n would be `\"[::]:8080\"`.\n client: flwr.client.KerasClient. An implementation of the abstract base\n class `flwr.client.KerasClient`.\n grpc_max_message_length: int (default: 536_870_912, this equals 512MB).\n The maximum length of gRPC messages that can be exchanged with the\n Flower server. The default should be sufficient for most models.\n Users who train very large models might need to increase this\n value. Note that the Flower server needs to be started with the\n same value (see `flwr.server.start_server`), otherwise it will not\n know about the increased limit and block larger messages.\n\n Returns:\n None.\n \"\"\"\n\n # Deprecation warning\n warning = \"\"\"\n DEPRECATION WARNING: KerasClient is deprecated, migrate to NumPyClient.\n\n KerasClient will be removed in a future release, please migrate to either\n NumPyClient (recommended) or Client. NumPyClient is recommended because it\n is conceptually very similar to KerasClient.\n \"\"\"\n print(warning)\n\n # Wrap the Keras client\n flower_client = KerasClientWrapper(client)\n\n # Start\n start_client(\n server_address=server_address,\n client=flower_client,\n grpc_max_message_length=grpc_max_message_length,\n )\n", "path": "src/py/flwr/client/app.py"}, {"content": "# Copyright 2020 Adap GmbH. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Provides contextmanager which manages a gRPC channel to connect to the\nserver.\"\"\"\nfrom contextlib import contextmanager\nfrom logging import DEBUG\nfrom queue import Queue\nfrom typing import Callable, Iterator, Optional, Tuple\n\nimport grpc\n\nfrom flwr.common import GRPC_MAX_MESSAGE_LENGTH\nfrom flwr.common.logger import log\nfrom flwr.proto.transport_pb2 import ClientMessage, ServerMessage\nfrom flwr.proto.transport_pb2_grpc import FlowerServiceStub\n\n# Uncomment these flags in case you are debugging\n# os.environ[\"GRPC_VERBOSITY\"] = \"debug\"\n# os.environ[\"GRPC_TRACE\"] = \"connectivity_state\"\n\n\ndef on_channel_state_change(channel_connectivity: str) -> None:\n \"\"\"Log channel connectivity.\"\"\"\n log(DEBUG, channel_connectivity)\n\n\n@contextmanager\ndef grpc_connection(\n server_address: str,\n max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n root_certificates: Optional[bytes] = None,\n) -> Iterator[Tuple[Callable[[], ServerMessage], Callable[[ClientMessage], None]]]:\n \"\"\"Establish an insecure gRPC connection to a gRPC server.\n\n Parameters\n ----------\n server_address : str\n The IPv6 address of the server. If the Flower server runs on the same machine\n on port 8080, then `server_address` would be `\"[::]:8080\"`.\n grpc_max_message_length : int\n The maximum length of gRPC messages that can be exchanged with the Flower\n server. The default should be sufficient for most models. Users who train\n very large models might need to increase this value. 
Note that the Flower\n server needs to be started with the same value\n (see `flwr.server.start_server`), otherwise it will not know about the\n increased limit and block larger messages.\n (default: 536_870_912, this equals 512MB)\n root_certificates : Optional[bytes] (default: None)\n The PEM-encoded root certificates as a byte string. If provided, a secure\n connection using the certificates will be established to a SSL-enabled\n Flower server.\n\n Returns\n -------\n receive, send : Callable, Callable\n\n Examples\n --------\n Establishing a SSL-enabled connection to the server:\n\n >>> from pathlib import Path\n >>> with grpc_connection(\n >>> server_address,\n >>> max_message_length=grpc_max_message_length,\n >>> root_certificates=Path(\"/crts/root.pem\").read_bytes(),\n >>> ) as conn:\n >>> receive, send = conn\n >>> server_message = receive()\n >>> # do something here\n >>> send(client_message)\n \"\"\"\n channel_options = [\n (\"grpc.max_send_message_length\", max_message_length),\n (\"grpc.max_receive_message_length\", max_message_length),\n ]\n\n if root_certificates is not None:\n ssl_channel_credentials = grpc.ssl_channel_credentials(root_certificates)\n channel = grpc.secure_channel(\n server_address, ssl_channel_credentials, options=channel_options\n )\n else:\n channel = grpc.insecure_channel(server_address, options=channel_options)\n\n channel.subscribe(on_channel_state_change)\n\n queue: Queue[ClientMessage] = Queue( # pylint: disable=unsubscriptable-object\n maxsize=1\n )\n stub = FlowerServiceStub(channel)\n\n server_message_iterator: Iterator[ServerMessage] = stub.Join(iter(queue.get, None))\n\n receive: Callable[[], ServerMessage] = lambda: next(server_message_iterator)\n send: Callable[[ClientMessage], None] = lambda msg: queue.put(msg, block=False)\n\n try:\n yield (receive, send)\n finally:\n # Make sure to have a final\n channel.close()\n log(DEBUG, \"Insecure gRPC channel closed\")\n", "path": "src/py/flwr/client/grpc_client/connection.py"}], "after_files": [{"content": "# Copyright 2020 Adap GmbH. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Flower client app.\"\"\"\n\n\nimport time\nfrom logging import INFO\nfrom typing import Optional\n\nfrom flwr.common import GRPC_MAX_MESSAGE_LENGTH\nfrom flwr.common.logger import log\n\nfrom .client import Client\nfrom .grpc_client.connection import grpc_connection\nfrom .grpc_client.message_handler import handle\nfrom .keras_client import KerasClient, KerasClientWrapper\nfrom .numpy_client import NumPyClient, NumPyClientWrapper\n\n\ndef start_client(\n server_address: str,\n client: Client,\n grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n root_certificates: Optional[bytes] = None,\n) -> None:\n \"\"\"Start a Flower Client which connects to a gRPC server.\n\n Parameters\n ----------\n server_address: str. The IPv6 address of the server. 
If the Flower\n server runs on the same machine on port 8080, then `server_address`\n would be `\"[::]:8080\"`.\n client: flwr.client.Client. An implementation of the abstract base\n class `flwr.client.Client`.\n grpc_max_message_length: int (default: 536_870_912, this equals 512MB).\n The maximum length of gRPC messages that can be exchanged with the\n Flower server. The default should be sufficient for most models.\n Users who train very large models might need to increase this\n value. Note that the Flower server needs to be started with the\n same value (see `flwr.server.start_server`), otherwise it will not\n know about the increased limit and block larger messages.\n root_certificates: bytes (default: None)\n The PEM-encoded root certificates as a byte string. If provided, a secure\n connection using the certificates will be established to a\n SSL-enabled Flower server.\n\n Returns\n -------\n None\n\n Examples\n --------\n Starting a client with insecure server connection:\n\n >>> start_client(\n >>> server_address=localhost:8080,\n >>> client=FlowerClient(),\n >>> )\n\n Starting a SSL-enabled client:\n\n >>> from pathlib import Path\n >>> start_client(\n >>> server_address=localhost:8080,\n >>> client=FlowerClient(),\n >>> root_certificates=Path(\"/crts/root.pem\").read_bytes(),\n >>> )\n \"\"\"\n while True:\n sleep_duration: int = 0\n with grpc_connection(\n server_address,\n max_message_length=grpc_max_message_length,\n root_certificates=root_certificates,\n ) as conn:\n receive, send = conn\n\n while True:\n server_message = receive()\n client_message, sleep_duration, keep_going = handle(\n client, server_message\n )\n send(client_message)\n if not keep_going:\n break\n if sleep_duration == 0:\n log(INFO, \"Disconnect and shut down\")\n break\n # Sleep and reconnect afterwards\n log(\n INFO,\n \"Disconnect, then re-establish connection after %s second(s)\",\n sleep_duration,\n )\n time.sleep(sleep_duration)\n\n\ndef start_numpy_client(\n server_address: str,\n client: NumPyClient,\n grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n root_certificates: Optional[bytes] = None,\n) -> None:\n \"\"\"Start a Flower NumPyClient which connects to a gRPC server.\n\n Parameters\n ----------\n server_address: str. The IPv6 address of the server. If the Flower\n server runs on the same machine on port 8080, then `server_address`\n would be `\"[::]:8080\"`.\n client: flwr.client.NumPyClient. An implementation of the abstract base\n class `flwr.client.NumPyClient`.\n grpc_max_message_length: int (default: 536_870_912, this equals 512MB).\n The maximum length of gRPC messages that can be exchanged with the\n Flower server. The default should be sufficient for most models.\n Users who train very large models might need to increase this\n value. Note that the Flower server needs to be started with the\n same value (see `flwr.server.start_server`), otherwise it will not\n know about the increased limit and block larger messages.\n root_certificates: bytes (default: None)\n The PEM-encoded root certificates a byte string. 
If provided, a secure\n connection using the certificates will be established to a\n SSL-enabled Flower server.\n\n Returns\n -------\n None\n\n Examples\n --------\n Starting a client with an insecure server connection:\n\n >>> start_client(\n >>> server_address=localhost:8080,\n >>> client=FlowerClient(),\n >>> )\n\n Starting a SSL-enabled client:\n\n >>> from pathlib import Path\n >>> start_client(\n >>> server_address=localhost:8080,\n >>> client=FlowerClient(),\n >>> root_certificates=Path(\"/crts/root.pem\").read_bytes(),\n >>> )\n \"\"\"\n\n # Wrap the NumPyClient\n flower_client = NumPyClientWrapper(client)\n\n # Start\n start_client(\n server_address=server_address,\n client=flower_client,\n grpc_max_message_length=grpc_max_message_length,\n root_certificates=root_certificates,\n )\n\n\ndef start_keras_client(\n server_address: str,\n client: KerasClient,\n grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n) -> None:\n \"\"\"Start a Flower KerasClient which connects to a gRPC server.\n\n Arguments:\n server_address: str. The IPv6 address of the server. If the Flower\n server runs on the same machine on port 8080, then `server_address`\n would be `\"[::]:8080\"`.\n client: flwr.client.KerasClient. An implementation of the abstract base\n class `flwr.client.KerasClient`.\n grpc_max_message_length: int (default: 536_870_912, this equals 512MB).\n The maximum length of gRPC messages that can be exchanged with the\n Flower server. The default should be sufficient for most models.\n Users who train very large models might need to increase this\n value. Note that the Flower server needs to be started with the\n same value (see `flwr.server.start_server`), otherwise it will not\n know about the increased limit and block larger messages.\n\n Returns:\n None.\n \"\"\"\n\n # Deprecation warning\n warning = \"\"\"\n DEPRECATION WARNING: KerasClient is deprecated, migrate to NumPyClient.\n\n KerasClient will be removed in a future release, please migrate to either\n NumPyClient (recommended) or Client. NumPyClient is recommended because it\n is conceptually very similar to KerasClient.\n \"\"\"\n print(warning)\n\n # Wrap the Keras client\n flower_client = KerasClientWrapper(client)\n\n # Start\n start_client(\n server_address=server_address,\n client=flower_client,\n grpc_max_message_length=grpc_max_message_length,\n )\n", "path": "src/py/flwr/client/app.py"}, {"content": "# Copyright 2020 Adap GmbH. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Provides contextmanager which manages a gRPC channel to connect to the\nserver.\"\"\"\nfrom contextlib import contextmanager\nfrom logging import DEBUG, INFO\nfrom queue import Queue\nfrom typing import Callable, Iterator, Optional, Tuple\n\nimport grpc\n\nfrom flwr.common import GRPC_MAX_MESSAGE_LENGTH\nfrom flwr.common.logger import log\nfrom flwr.proto.transport_pb2 import ClientMessage, ServerMessage\nfrom flwr.proto.transport_pb2_grpc import FlowerServiceStub\n\n# Uncomment these flags in case you are debugging\n# os.environ[\"GRPC_VERBOSITY\"] = \"debug\"\n# os.environ[\"GRPC_TRACE\"] = \"connectivity_state\"\n\n\ndef on_channel_state_change(channel_connectivity: str) -> None:\n \"\"\"Log channel connectivity.\"\"\"\n log(DEBUG, channel_connectivity)\n\n\n@contextmanager\ndef grpc_connection(\n server_address: str,\n max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n root_certificates: Optional[bytes] = None,\n) -> Iterator[Tuple[Callable[[], ServerMessage], Callable[[ClientMessage], None]]]:\n \"\"\"Establish an insecure gRPC connection to a gRPC server.\n\n Parameters\n ----------\n server_address : str\n The IPv6 address of the server. If the Flower server runs on the same machine\n on port 8080, then `server_address` would be `\"[::]:8080\"`.\n grpc_max_message_length : int\n The maximum length of gRPC messages that can be exchanged with the Flower\n server. The default should be sufficient for most models. Users who train\n very large models might need to increase this value. Note that the Flower\n server needs to be started with the same value\n (see `flwr.server.start_server`), otherwise it will not know about the\n increased limit and block larger messages.\n (default: 536_870_912, this equals 512MB)\n root_certificates : Optional[bytes] (default: None)\n The PEM-encoded root certificates as a byte string. 
If provided, a secure\n connection using the certificates will be established to a SSL-enabled\n Flower server.\n\n Returns\n -------\n receive, send : Callable, Callable\n\n Examples\n --------\n Establishing a SSL-enabled connection to the server:\n\n >>> from pathlib import Path\n >>> with grpc_connection(\n >>> server_address,\n >>> max_message_length=grpc_max_message_length,\n >>> root_certificates=Path(\"/crts/root.pem\").read_bytes(),\n >>> ) as conn:\n >>> receive, send = conn\n >>> server_message = receive()\n >>> # do something here\n >>> send(client_message)\n \"\"\"\n channel_options = [\n (\"grpc.max_send_message_length\", max_message_length),\n (\"grpc.max_receive_message_length\", max_message_length),\n ]\n\n if root_certificates is not None:\n ssl_channel_credentials = grpc.ssl_channel_credentials(root_certificates)\n channel = grpc.secure_channel(\n server_address, ssl_channel_credentials, options=channel_options\n )\n log(INFO, \"Opened secure gRPC connection using certificates\")\n else:\n channel = grpc.insecure_channel(server_address, options=channel_options)\n log(INFO, \"Opened insecure gRPC connection (no certificates were passed)\")\n\n channel.subscribe(on_channel_state_change)\n\n queue: Queue[ClientMessage] = Queue( # pylint: disable=unsubscriptable-object\n maxsize=1\n )\n stub = FlowerServiceStub(channel)\n\n server_message_iterator: Iterator[ServerMessage] = stub.Join(iter(queue.get, None))\n\n receive: Callable[[], ServerMessage] = lambda: next(server_message_iterator)\n send: Callable[[ClientMessage], None] = lambda msg: queue.put(msg, block=False)\n\n try:\n yield (receive, send)\n finally:\n # Make sure to have a final\n channel.close()\n log(DEBUG, \"gRPC channel closed\")\n", "path": "src/py/flwr/client/grpc_client/connection.py"}]}
| 4,036 | 360 |
gh_patches_debug_31171 | rasdani/github-patches | git_diff | spack__spack-36657 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Invalid flag with apple m2 cpu and apple-clang (clang-1400.0.29.202)
On a system as described in the title, the following flag is not recognized and causes compilation failures:
https://github.com/spack/spack/blob/b2c806f6fcd73ae624582c84a177e465a7877cf7/lib/spack/external/archspec/json/cpu/microarchitectures.json#L2794
Replacing with `-mcpu=apple-m1` seems to fix the issue. There seems to be no flag specific to m2 cpu.
--- END ISSUE ---
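For illustration only: the sketch below is not taken from the issue or the repository. It shows how one might check which flag archspec derives from `microarchitectures.json` for the host CPU, assuming archspec's `Microarchitecture.optimization_flags(compiler, version)` API and the `apple-clang` compiler name; the version string is an assumption as well.
```python
# Hedged sketch: print the flags archspec would emit for the detected host
# target. On an Apple M2 with apple-clang 14 this is where an unrecognized
# flag (as opposed to the working "-mcpu=apple-m1") would surface.
import archspec.cpu

host = archspec.cpu.host()
print(host.name)
print(host.optimization_flags("apple-clang", "14.0.0"))
```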
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/spack/external/archspec/cli.py`
Content:
```
1 # Copyright 2019-2020 Lawrence Livermore National Security, LLC and other
2 # Archspec Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5 """
6 archspec command line interface
7 """
8
9 import click
10
11 import archspec
12 import archspec.cpu
13
14
15 @click.group(name="archspec")
16 @click.version_option(version=archspec.__version__)
17 def main():
18 """archspec command line interface"""
19
20
21 @main.command()
22 def cpu():
23 """archspec command line interface for CPU"""
24 click.echo(archspec.cpu.host())
25
```
Path: `lib/spack/external/__init__.py`
Content:
```
1 # Copyright 2013-2023 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 """This module contains the following external, potentially separately
7 licensed, packages that are included in Spack:
8
9 altgraph
10 --------
11
12 * Homepage: https://altgraph.readthedocs.io/en/latest/index.html
13 * Usage: dependency of macholib
14 * Version: 0.17.3
15
16 archspec
17 --------
18
19 * Homepage: https://pypi.python.org/pypi/archspec
20 * Usage: Labeling, comparison and detection of microarchitectures
21 * Version: 0.2.0 (commit e44bad9c7b6defac73696f64078b2fe634719b62)
22
23 astunparse
24 ----------------
25
26 * Homepage: https://github.com/simonpercivall/astunparse
27 * Usage: Unparsing Python ASTs for package hashes in Spack
28 * Version: 1.6.3 (plus modifications)
29 * Note: This is in ``spack.util.unparse`` because it's very heavily
30 modified, and we want to track coverage for it.
31 Specifically, we have modified this library to generate consistent unparsed ASTs
32 regardless of the Python version. It is based on:
33 1. The original ``astunparse`` library;
34 2. Modifications for consistency;
35 3. Backports from the ``ast.unparse`` function in Python 3.9 and later
36 The unparsing is now mostly consistent with upstream ``ast.unparse``, so if
37 we ever require Python 3.9 or higher, we can drop this external package.
38
39 attrs
40 ----------------
41
42 * Homepage: https://github.com/python-attrs/attrs
43 * Usage: Needed by jsonschema.
44 * Version: 22.1.0
45
46 ctest_log_parser
47 ----------------
48
49 * Homepage: https://github.com/Kitware/CMake/blob/master/Source/CTest/cmCTestBuildHandler.cxx
50 * Usage: Functions to parse build logs and extract error messages.
51 * Version: Unversioned
52 * Note: This is a homemade port of Kitware's CTest build handler.
53
54 distro
55 ------
56
57 * Homepage: https://pypi.python.org/pypi/distro
58 * Usage: Provides a more stable linux distribution detection.
59 * Version: 1.8.0
60
61 jinja2
62 ------
63
64 * Homepage: https://pypi.python.org/pypi/Jinja2
65 * Usage: A modern and designer-friendly templating language for Python.
66 * Version: 3.0.3 (last version supporting Python 3.6)
67
68 jsonschema
69 ----------
70
71 * Homepage: https://pypi.python.org/pypi/jsonschema
72 * Usage: An implementation of JSON Schema for Python.
73 * Version: 3.2.0 (last version before 2.7 and 3.6 support was dropped)
74 * Note: We don't include tests or benchmarks; just what Spack needs.
75
76 macholib
77 --------
78
79 * Homepage: https://macholib.readthedocs.io/en/latest/index.html#
80 * Usage: Manipulation of Mach-o binaries for relocating macOS buildcaches on Linux
81 * Version: 1.16.2
82
83 markupsafe
84 ----------
85
86 * Homepage: https://pypi.python.org/pypi/MarkupSafe
87 * Usage: Implements a XML/HTML/XHTML Markup safe string for Python.
88 * Version: 2.0.1 (last version supporting Python 3.6)
89
90 pyrsistent
91 ----------
92
93 * Homepage: http://github.com/tobgu/pyrsistent/
94 * Usage: Needed by `jsonschema`
95 * Version: 0.18.0
96
97 ruamel.yaml
98 ------
99
100 * Homepage: https://yaml.readthedocs.io/
101 * Usage: Used for config files. Ruamel is based on PyYAML but is more
102 actively maintained and has more features, including round-tripping
103 comments read from config files.
104 * Version: 0.11.15 (last version supporting Python 2.6)
105 * Note: This package has been slightly modified to improve Python 2.6
106 compatibility -- some ``{}`` format strings were replaced, and the
107 import for ``OrderedDict`` was tweaked.
108
109 six
110 ---
111
112 * Homepage: https://pypi.python.org/pypi/six
113 * Usage: Python 2 and 3 compatibility utilities.
114 * Version: 1.16.0
115
116 """
117
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/lib/spack/external/__init__.py b/lib/spack/external/__init__.py
--- a/lib/spack/external/__init__.py
+++ b/lib/spack/external/__init__.py
@@ -18,7 +18,7 @@
* Homepage: https://pypi.python.org/pypi/archspec
* Usage: Labeling, comparison and detection of microarchitectures
-* Version: 0.2.0 (commit e44bad9c7b6defac73696f64078b2fe634719b62)
+* Version: 0.2.0-dev (commit f3667f95030c6573842fb5f6df0d647285597509)
astunparse
----------------
diff --git a/lib/spack/external/archspec/cli.py b/lib/spack/external/archspec/cli.py
--- a/lib/spack/external/archspec/cli.py
+++ b/lib/spack/external/archspec/cli.py
@@ -6,19 +6,61 @@
archspec command line interface
"""
-import click
+import argparse
+import typing
import archspec
import archspec.cpu
[email protected](name="archspec")
[email protected]_option(version=archspec.__version__)
-def main():
- """archspec command line interface"""
+def _make_parser() -> argparse.ArgumentParser:
+ parser = argparse.ArgumentParser(
+ "archspec",
+ description="archspec command line interface",
+ add_help=False,
+ )
+ parser.add_argument(
+ "--version",
+ "-V",
+ help="Show the version and exit.",
+ action="version",
+ version=f"archspec, version {archspec.__version__}",
+ )
+ parser.add_argument("--help", "-h", help="Show the help and exit.", action="help")
+ subcommands = parser.add_subparsers(
+ title="command",
+ metavar="COMMAND",
+ dest="command",
+ )
[email protected]()
-def cpu():
- """archspec command line interface for CPU"""
- click.echo(archspec.cpu.host())
+ cpu_command = subcommands.add_parser(
+ "cpu",
+ help="archspec command line interface for CPU",
+ description="archspec command line interface for CPU",
+ )
+ cpu_command.set_defaults(run=cpu)
+
+ return parser
+
+
+def cpu() -> int:
+ """Run the `archspec cpu` subcommand."""
+ print(archspec.cpu.host())
+ return 0
+
+
+def main(argv: typing.Optional[typing.List[str]] = None) -> int:
+ """Run the `archspec` command line interface."""
+ parser = _make_parser()
+
+ try:
+ args = parser.parse_args(argv)
+ except SystemExit as err:
+ return err.code
+
+ if args.command is None:
+ parser.print_help()
+ return 0
+
+ return args.run()
|
{"golden_diff": "diff --git a/lib/spack/external/__init__.py b/lib/spack/external/__init__.py\n--- a/lib/spack/external/__init__.py\n+++ b/lib/spack/external/__init__.py\n@@ -18,7 +18,7 @@\n \n * Homepage: https://pypi.python.org/pypi/archspec\n * Usage: Labeling, comparison and detection of microarchitectures\n-* Version: 0.2.0 (commit e44bad9c7b6defac73696f64078b2fe634719b62)\n+* Version: 0.2.0-dev (commit f3667f95030c6573842fb5f6df0d647285597509)\n \n astunparse\n ----------------\ndiff --git a/lib/spack/external/archspec/cli.py b/lib/spack/external/archspec/cli.py\n--- a/lib/spack/external/archspec/cli.py\n+++ b/lib/spack/external/archspec/cli.py\n@@ -6,19 +6,61 @@\n archspec command line interface\n \"\"\"\n \n-import click\n+import argparse\n+import typing\n \n import archspec\n import archspec.cpu\n \n \[email protected](name=\"archspec\")\[email protected]_option(version=archspec.__version__)\n-def main():\n- \"\"\"archspec command line interface\"\"\"\n+def _make_parser() -> argparse.ArgumentParser:\n+ parser = argparse.ArgumentParser(\n+ \"archspec\",\n+ description=\"archspec command line interface\",\n+ add_help=False,\n+ )\n+ parser.add_argument(\n+ \"--version\",\n+ \"-V\",\n+ help=\"Show the version and exit.\",\n+ action=\"version\",\n+ version=f\"archspec, version {archspec.__version__}\",\n+ )\n+ parser.add_argument(\"--help\", \"-h\", help=\"Show the help and exit.\", action=\"help\")\n \n+ subcommands = parser.add_subparsers(\n+ title=\"command\",\n+ metavar=\"COMMAND\",\n+ dest=\"command\",\n+ )\n \[email protected]()\n-def cpu():\n- \"\"\"archspec command line interface for CPU\"\"\"\n- click.echo(archspec.cpu.host())\n+ cpu_command = subcommands.add_parser(\n+ \"cpu\",\n+ help=\"archspec command line interface for CPU\",\n+ description=\"archspec command line interface for CPU\",\n+ )\n+ cpu_command.set_defaults(run=cpu)\n+\n+ return parser\n+\n+\n+def cpu() -> int:\n+ \"\"\"Run the `archspec cpu` subcommand.\"\"\"\n+ print(archspec.cpu.host())\n+ return 0\n+\n+\n+def main(argv: typing.Optional[typing.List[str]] = None) -> int:\n+ \"\"\"Run the `archspec` command line interface.\"\"\"\n+ parser = _make_parser()\n+\n+ try:\n+ args = parser.parse_args(argv)\n+ except SystemExit as err:\n+ return err.code\n+\n+ if args.command is None:\n+ parser.print_help()\n+ return 0\n+\n+ return args.run()\n", "issue": "Invalid flag with apple m2 cpu and apple-clang (clang-1400.0.29.202)\nOn a system as described in the title, the following flag in not recognized and causes compilation failures:\r\n\r\nhttps://github.com/spack/spack/blob/b2c806f6fcd73ae624582c84a177e465a7877cf7/lib/spack/external/archspec/json/cpu/microarchitectures.json#L2794\r\n\r\nReplacing with `-mcpu=apple-m1` seems to fix the issue. There seems to be no flag specific to m2 cpu.\r\n\n", "before_files": [{"content": "# Copyright 2019-2020 Lawrence Livermore National Security, LLC and other\n# Archspec Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\"\"\"\narchspec command line interface\n\"\"\"\n\nimport click\n\nimport archspec\nimport archspec.cpu\n\n\[email protected](name=\"archspec\")\[email protected]_option(version=archspec.__version__)\ndef main():\n \"\"\"archspec command line interface\"\"\"\n\n\[email protected]()\ndef cpu():\n \"\"\"archspec command line interface for CPU\"\"\"\n click.echo(archspec.cpu.host())\n", "path": "lib/spack/external/archspec/cli.py"}, {"content": "# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\n\"\"\"This module contains the following external, potentially separately\nlicensed, packages that are included in Spack:\n\naltgraph\n--------\n\n* Homepage: https://altgraph.readthedocs.io/en/latest/index.html\n* Usage: dependency of macholib\n* Version: 0.17.3\n\narchspec\n--------\n\n* Homepage: https://pypi.python.org/pypi/archspec\n* Usage: Labeling, comparison and detection of microarchitectures\n* Version: 0.2.0 (commit e44bad9c7b6defac73696f64078b2fe634719b62)\n\nastunparse\n----------------\n\n* Homepage: https://github.com/simonpercivall/astunparse\n* Usage: Unparsing Python ASTs for package hashes in Spack\n* Version: 1.6.3 (plus modifications)\n* Note: This is in ``spack.util.unparse`` because it's very heavily\n modified, and we want to track coverage for it.\n Specifically, we have modified this library to generate consistent unparsed ASTs\n regardless of the Python version. It is based on:\n 1. The original ``astunparse`` library;\n 2. Modifications for consistency;\n 3. Backports from the ``ast.unparse`` function in Python 3.9 and later\n The unparsing is now mostly consistent with upstream ``ast.unparse``, so if\n we ever require Python 3.9 or higher, we can drop this external package.\n\nattrs\n----------------\n\n* Homepage: https://github.com/python-attrs/attrs\n* Usage: Needed by jsonschema.\n* Version: 22.1.0\n\nctest_log_parser\n----------------\n\n* Homepage: https://github.com/Kitware/CMake/blob/master/Source/CTest/cmCTestBuildHandler.cxx\n* Usage: Functions to parse build logs and extract error messages.\n* Version: Unversioned\n* Note: This is a homemade port of Kitware's CTest build handler.\n\ndistro\n------\n\n* Homepage: https://pypi.python.org/pypi/distro\n* Usage: Provides a more stable linux distribution detection.\n* Version: 1.8.0\n\njinja2\n------\n\n* Homepage: https://pypi.python.org/pypi/Jinja2\n* Usage: A modern and designer-friendly templating language for Python.\n* Version: 3.0.3 (last version supporting Python 3.6)\n\njsonschema\n----------\n\n* Homepage: https://pypi.python.org/pypi/jsonschema\n* Usage: An implementation of JSON Schema for Python.\n* Version: 3.2.0 (last version before 2.7 and 3.6 support was dropped)\n* Note: We don't include tests or benchmarks; just what Spack needs.\n\nmacholib\n--------\n\n* Homepage: https://macholib.readthedocs.io/en/latest/index.html#\n* Usage: Manipulation of Mach-o binaries for relocating macOS buildcaches on Linux\n* Version: 1.16.2\n\nmarkupsafe\n----------\n\n* Homepage: https://pypi.python.org/pypi/MarkupSafe\n* Usage: Implements a XML/HTML/XHTML Markup safe string for Python.\n* Version: 2.0.1 (last version supporting Python 3.6)\n\npyrsistent\n----------\n\n* Homepage: http://github.com/tobgu/pyrsistent/\n* Usage: Needed by `jsonschema`\n* Version: 
0.18.0\n\nruamel.yaml\n------\n\n* Homepage: https://yaml.readthedocs.io/\n* Usage: Used for config files. Ruamel is based on PyYAML but is more\n actively maintained and has more features, including round-tripping\n comments read from config files.\n* Version: 0.11.15 (last version supporting Python 2.6)\n* Note: This package has been slightly modified to improve Python 2.6\n compatibility -- some ``{}`` format strings were replaced, and the\n import for ``OrderedDict`` was tweaked.\n\nsix\n---\n\n* Homepage: https://pypi.python.org/pypi/six\n* Usage: Python 2 and 3 compatibility utilities.\n* Version: 1.16.0\n\n\"\"\"\n", "path": "lib/spack/external/__init__.py"}], "after_files": [{"content": "# Copyright 2019-2020 Lawrence Livermore National Security, LLC and other\n# Archspec Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\"\"\"\narchspec command line interface\n\"\"\"\n\nimport argparse\nimport typing\n\nimport archspec\nimport archspec.cpu\n\n\ndef _make_parser() -> argparse.ArgumentParser:\n parser = argparse.ArgumentParser(\n \"archspec\",\n description=\"archspec command line interface\",\n add_help=False,\n )\n parser.add_argument(\n \"--version\",\n \"-V\",\n help=\"Show the version and exit.\",\n action=\"version\",\n version=f\"archspec, version {archspec.__version__}\",\n )\n parser.add_argument(\"--help\", \"-h\", help=\"Show the help and exit.\", action=\"help\")\n\n subcommands = parser.add_subparsers(\n title=\"command\",\n metavar=\"COMMAND\",\n dest=\"command\",\n )\n\n cpu_command = subcommands.add_parser(\n \"cpu\",\n help=\"archspec command line interface for CPU\",\n description=\"archspec command line interface for CPU\",\n )\n cpu_command.set_defaults(run=cpu)\n\n return parser\n\n\ndef cpu() -> int:\n \"\"\"Run the `archspec cpu` subcommand.\"\"\"\n print(archspec.cpu.host())\n return 0\n\n\ndef main(argv: typing.Optional[typing.List[str]] = None) -> int:\n \"\"\"Run the `archspec` command line interface.\"\"\"\n parser = _make_parser()\n\n try:\n args = parser.parse_args(argv)\n except SystemExit as err:\n return err.code\n\n if args.command is None:\n parser.print_help()\n return 0\n\n return args.run()\n", "path": "lib/spack/external/archspec/cli.py"}, {"content": "# Copyright 2013-2023 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\n\"\"\"This module contains the following external, potentially separately\nlicensed, packages that are included in Spack:\n\naltgraph\n--------\n\n* Homepage: https://altgraph.readthedocs.io/en/latest/index.html\n* Usage: dependency of macholib\n* Version: 0.17.3\n\narchspec\n--------\n\n* Homepage: https://pypi.python.org/pypi/archspec\n* Usage: Labeling, comparison and detection of microarchitectures\n* Version: 0.2.0-dev (commit f3667f95030c6573842fb5f6df0d647285597509)\n\nastunparse\n----------------\n\n* Homepage: https://github.com/simonpercivall/astunparse\n* Usage: Unparsing Python ASTs for package hashes in Spack\n* Version: 1.6.3 (plus modifications)\n* Note: This is in ``spack.util.unparse`` because it's very heavily\n modified, and we want to track coverage for it.\n Specifically, we have modified this library to generate consistent unparsed ASTs\n regardless of the Python version. It is based on:\n 1. The original ``astunparse`` library;\n 2. Modifications for consistency;\n 3. 
Backports from the ``ast.unparse`` function in Python 3.9 and later\n The unparsing is now mostly consistent with upstream ``ast.unparse``, so if\n we ever require Python 3.9 or higher, we can drop this external package.\n\nattrs\n----------------\n\n* Homepage: https://github.com/python-attrs/attrs\n* Usage: Needed by jsonschema.\n* Version: 22.1.0\n\nctest_log_parser\n----------------\n\n* Homepage: https://github.com/Kitware/CMake/blob/master/Source/CTest/cmCTestBuildHandler.cxx\n* Usage: Functions to parse build logs and extract error messages.\n* Version: Unversioned\n* Note: This is a homemade port of Kitware's CTest build handler.\n\ndistro\n------\n\n* Homepage: https://pypi.python.org/pypi/distro\n* Usage: Provides a more stable linux distribution detection.\n* Version: 1.8.0\n\njinja2\n------\n\n* Homepage: https://pypi.python.org/pypi/Jinja2\n* Usage: A modern and designer-friendly templating language for Python.\n* Version: 3.0.3 (last version supporting Python 3.6)\n\njsonschema\n----------\n\n* Homepage: https://pypi.python.org/pypi/jsonschema\n* Usage: An implementation of JSON Schema for Python.\n* Version: 3.2.0 (last version before 2.7 and 3.6 support was dropped)\n* Note: We don't include tests or benchmarks; just what Spack needs.\n\nmacholib\n--------\n\n* Homepage: https://macholib.readthedocs.io/en/latest/index.html#\n* Usage: Manipulation of Mach-o binaries for relocating macOS buildcaches on Linux\n* Version: 1.16.2\n\nmarkupsafe\n----------\n\n* Homepage: https://pypi.python.org/pypi/MarkupSafe\n* Usage: Implements a XML/HTML/XHTML Markup safe string for Python.\n* Version: 2.0.1 (last version supporting Python 3.6)\n\npyrsistent\n----------\n\n* Homepage: http://github.com/tobgu/pyrsistent/\n* Usage: Needed by `jsonschema`\n* Version: 0.18.0\n\nruamel.yaml\n------\n\n* Homepage: https://yaml.readthedocs.io/\n* Usage: Used for config files. Ruamel is based on PyYAML but is more\n actively maintained and has more features, including round-tripping\n comments read from config files.\n* Version: 0.11.15 (last version supporting Python 2.6)\n* Note: This package has been slightly modified to improve Python 2.6\n compatibility -- some ``{}`` format strings were replaced, and the\n import for ``OrderedDict`` was tweaked.\n\nsix\n---\n\n* Homepage: https://pypi.python.org/pypi/six\n* Usage: Python 2 and 3 compatibility utilities.\n* Version: 1.16.0\n\n\"\"\"\n", "path": "lib/spack/external/__init__.py"}]}
| 1,835 | 687 |
gh_patches_debug_38047 | rasdani/github-patches | git_diff | getsentry__sentry-3216 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Better avatar uploads caching
Right now, we cache resizes on the server, but these responses are unable to be cached by any browser or proxy.
To fix this we need to:
- [ ] Send a proper `Cache-Control` header that tells the browser it's allowed to cache it. Something like `Cache-Control: max-age=315360000`
- [ ] Urls that don't change, so they _can_ be cached. Right now, uploaded avatars reuse the same `UserAvatar.ident`, which means we get the same url even if we change the file. So we can't cache.
- [ ] Remove `Set-Cookie` header on response (this may be tricky but @mattrobenolt can help). `Set-Cookie` header kills all caching.
- [ ] Remove `Vary` header. We don't need to Vary on anything, and right now it's `Vary: Accept-Encoding, Cookie`, which basically makes caching useless. (this may be tricky as well, but @mattrobenolt can help)
@macqueen
--- END ISSUE ---
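For illustration only: the sketch below is not taken from the issue or from Sentry's code. It is a minimal rendering of the checklist above, assuming Django's `HttpResponse` header assignment; the constant name, the helper names, and the uuid-based ident rotation are assumptions.
```python
# Hedged sketch: (a) a far-future Cache-Control header on the photo response,
# (b) a fresh ident per upload so the avatar URL changes whenever the file does.
from uuid import uuid4

from django.http import HttpResponse

TEN_YEARS = 315360000  # seconds, matching the max-age suggested above


def avatar_photo_response(photo_file):
    res = HttpResponse(photo_file, content_type='image/png')
    res['Cache-Control'] = 'max-age=%d' % TEN_YEARS
    return res


def rotate_avatar_ident(avatar):
    # A new ident yields a new URL, so previously cached copies never go stale.
    avatar.ident = uuid4().hex
    avatar.save()
```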
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/middleware/locale.py`
Content:
```
1 """
2 sentry.middleware.locale
3 ~~~~~~~~~~~~~~~~~~~~~~~~
4
5 :copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.
6 :license: BSD, see LICENSE for more details.
7 """
8
9 from __future__ import absolute_import
10
11 import pytz
12
13 from django.middleware.locale import LocaleMiddleware
14
15 from sentry.models import UserOption
16 from sentry.utils.safe import safe_execute
17
18
19 class SentryLocaleMiddleware(LocaleMiddleware):
20 def process_request(self, request):
21 # No locale for static media
22 # This avoids touching user session, which means we avoid
23 # setting `Vary: Cookie` as a response header which will
24 # break HTTP caching entirely.
25 self.__is_static = request.path_info[:9] == '/_static/'
26 if self.__is_static:
27 return
28
29 safe_execute(self.load_user_conf, request,
30 _with_transaction=False)
31
32 super(SentryLocaleMiddleware, self).process_request(request)
33
34 def load_user_conf(self, request):
35 if not request.user.is_authenticated():
36 return
37
38 language = UserOption.objects.get_value(
39 user=request.user, project=None, key='language', default=None)
40 if language:
41 request.session['django_language'] = language
42
43 timezone = UserOption.objects.get_value(
44 user=request.user, project=None, key='timezone', default=None)
45 if timezone:
46 request.timezone = pytz.timezone(timezone)
47
48 def process_response(self, request, response):
49 # If static bound, we don't want to run the normal process_response since this
50 # adds an extra `Vary: Accept-Language`. Static files don't need this and is
51 # less effective for caching.
52 try:
53 if self.__is_static:
54 return response
55 except AttributeError:
56 # catch ourselves in case __is_static never got set.
57 # It's possible that process_request never ran.
58 pass
59 return super(SentryLocaleMiddleware, self).process_response(request, response)
60
```
Path: `src/sentry/api/endpoints/useravatar.py`
Content:
```
1 from __future__ import absolute_import
2
3 from PIL import Image
4
5 from django.conf import settings
6
7 from rest_framework import status
8 from rest_framework.response import Response
9
10 from sentry.api.bases.user import UserEndpoint
11 from sentry.api.serializers import serialize
12 from sentry.models import UserAvatar, File
13 from sentry.utils.compat import StringIO
14
15
16 MIN_DIMENSION = 256
17
18 MAX_DIMENSION = 1024
19
20
21 class UserAvatarEndpoint(UserEndpoint):
22 FILE_TYPE = 'avatar.file'
23
24 def get(self, request, user):
25 return Response(serialize(user, request.user))
26
27 def is_valid_size(self, width, height):
28 if width != height:
29 return False
30 if width < MIN_DIMENSION:
31 return False
32 if width > MAX_DIMENSION:
33 return False
34 return True
35
36 def put(self, request, user):
37 if user != request.user:
38 return Response(status=status.HTTP_403_FORBIDDEN)
39
40 photo_string = request.DATA.get('avatar_photo')
41 photo = None
42 if photo_string:
43 photo_string = photo_string.decode('base64')
44 if len(photo_string) > settings.SENTRY_MAX_AVATAR_SIZE:
45 return Response(status=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE)
46 with Image.open(StringIO(photo_string)) as img:
47 width, height = img.size
48 if not self.is_valid_size(width, height):
49 return Response(status=status.HTTP_400_BAD_REQUEST)
50 file_name = '%s.png' % user.id
51 photo = File.objects.create(name=file_name, type=self.FILE_TYPE)
52 photo.putfile(StringIO(photo_string))
53
54 avatar, _ = UserAvatar.objects.get_or_create(user=user)
55 if avatar.file and photo:
56 avatar.file.delete()
57 avatar.clear_cached_photos()
58 if photo:
59 avatar.file = photo
60
61 avatar_type = request.DATA.get('avatar_type')
62
63 if not avatar.file and avatar_type == 'upload':
64 return Response(status=status.HTTP_400_BAD_REQUEST)
65
66 if avatar_type:
67 try:
68 avatar.avatar_type = [i for i, n in UserAvatar.AVATAR_TYPES if n == avatar_type][0]
69 except IndexError:
70 return Response(status=status.HTTP_400_BAD_REQUEST)
71
72 avatar.save()
73 return Response(serialize(user, request.user))
74
```
Path: `src/sentry/web/frontend/user_avatar.py`
Content:
```
1 from __future__ import absolute_import
2
3 from django.http import HttpResponse, HttpResponseBadRequest, HttpResponseNotFound
4 from django.views.generic import View
5
6 from sentry.models import UserAvatar
7
8
9 class UserAvatarPhotoView(View):
10
11 def get_file_name(self, user):
12 return '%s.png' % user.id
13
14 def get(self, request, *args, **kwargs):
15 avatar_id = kwargs['avatar_id']
16 try:
17 avatar = UserAvatar.objects.get(ident=avatar_id)
18 except UserAvatar.DoesNotExist:
19 return HttpResponseNotFound()
20
21 photo = avatar.file
22 if not photo:
23 return HttpResponseNotFound()
24
25 size = request.GET.get('s')
26 photo_file = photo.getfile()
27 if size:
28 try:
29 size = int(size)
30 except ValueError:
31 return HttpResponseBadRequest()
32 else:
33 photo_file = avatar.get_cached_photo(size)
34 return HttpResponse(photo_file, content_type='image/png')
35
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/api/endpoints/useravatar.py b/src/sentry/api/endpoints/useravatar.py
--- a/src/sentry/api/endpoints/useravatar.py
+++ b/src/sentry/api/endpoints/useravatar.py
@@ -1,5 +1,7 @@
from __future__ import absolute_import
+from uuid import uuid4
+
from PIL import Image
from django.conf import settings
@@ -57,6 +59,7 @@
avatar.clear_cached_photos()
if photo:
avatar.file = photo
+ avatar.ident = uuid4().hex
avatar_type = request.DATA.get('avatar_type')
diff --git a/src/sentry/middleware/locale.py b/src/sentry/middleware/locale.py
--- a/src/sentry/middleware/locale.py
+++ b/src/sentry/middleware/locale.py
@@ -22,8 +22,9 @@
# This avoids touching user session, which means we avoid
# setting `Vary: Cookie` as a response header which will
# break HTTP caching entirely.
- self.__is_static = request.path_info[:9] == '/_static/'
- if self.__is_static:
+ self.__skip_caching = (request.path_info[:9] == '/_static/' or
+ request.path_info[:8] == '/avatar/')
+ if self.__skip_caching:
return
safe_execute(self.load_user_conf, request,
@@ -50,10 +51,10 @@
# adds an extra `Vary: Accept-Language`. Static files don't need this and is
# less effective for caching.
try:
- if self.__is_static:
+ if self.__skip_caching:
return response
except AttributeError:
- # catch ourselves in case __is_static never got set.
+ # catch ourselves in case __skip_caching never got set.
# It's possible that process_request never ran.
pass
return super(SentryLocaleMiddleware, self).process_response(request, response)
diff --git a/src/sentry/web/frontend/user_avatar.py b/src/sentry/web/frontend/user_avatar.py
--- a/src/sentry/web/frontend/user_avatar.py
+++ b/src/sentry/web/frontend/user_avatar.py
@@ -4,6 +4,7 @@
from django.views.generic import View
from sentry.models import UserAvatar
+from sentry.web.frontend.generic import FOREVER_CACHE
class UserAvatarPhotoView(View):
@@ -31,4 +32,7 @@
return HttpResponseBadRequest()
else:
photo_file = avatar.get_cached_photo(size)
- return HttpResponse(photo_file, content_type='image/png')
+
+ res = HttpResponse(photo_file, content_type='image/png')
+ res['Cache-Control'] = FOREVER_CACHE
+ return res
|
{"golden_diff": "diff --git a/src/sentry/api/endpoints/useravatar.py b/src/sentry/api/endpoints/useravatar.py\n--- a/src/sentry/api/endpoints/useravatar.py\n+++ b/src/sentry/api/endpoints/useravatar.py\n@@ -1,5 +1,7 @@\n from __future__ import absolute_import\n \n+from uuid import uuid4\n+\n from PIL import Image\n \n from django.conf import settings\n@@ -57,6 +59,7 @@\n avatar.clear_cached_photos()\n if photo:\n avatar.file = photo\n+ avatar.ident = uuid4().hex\n \n avatar_type = request.DATA.get('avatar_type')\n \ndiff --git a/src/sentry/middleware/locale.py b/src/sentry/middleware/locale.py\n--- a/src/sentry/middleware/locale.py\n+++ b/src/sentry/middleware/locale.py\n@@ -22,8 +22,9 @@\n # This avoids touching user session, which means we avoid\n # setting `Vary: Cookie` as a response header which will\n # break HTTP caching entirely.\n- self.__is_static = request.path_info[:9] == '/_static/'\n- if self.__is_static:\n+ self.__skip_caching = (request.path_info[:9] == '/_static/' or\n+ request.path_info[:8] == '/avatar/')\n+ if self.__skip_caching:\n return\n \n safe_execute(self.load_user_conf, request,\n@@ -50,10 +51,10 @@\n # adds an extra `Vary: Accept-Language`. Static files don't need this and is\n # less effective for caching.\n try:\n- if self.__is_static:\n+ if self.__skip_caching:\n return response\n except AttributeError:\n- # catch ourselves in case __is_static never got set.\n+ # catch ourselves in case __skip_caching never got set.\n # It's possible that process_request never ran.\n pass\n return super(SentryLocaleMiddleware, self).process_response(request, response)\ndiff --git a/src/sentry/web/frontend/user_avatar.py b/src/sentry/web/frontend/user_avatar.py\n--- a/src/sentry/web/frontend/user_avatar.py\n+++ b/src/sentry/web/frontend/user_avatar.py\n@@ -4,6 +4,7 @@\n from django.views.generic import View\n \n from sentry.models import UserAvatar\n+from sentry.web.frontend.generic import FOREVER_CACHE\n \n \n class UserAvatarPhotoView(View):\n@@ -31,4 +32,7 @@\n return HttpResponseBadRequest()\n else:\n photo_file = avatar.get_cached_photo(size)\n- return HttpResponse(photo_file, content_type='image/png')\n+\n+ res = HttpResponse(photo_file, content_type='image/png')\n+ res['Cache-Control'] = FOREVER_CACHE\n+ return res\n", "issue": "Better avatar uploads caching\nRight now, we cache resizes on the server, but these responses are unable to be cached by any browser or proxy.\n\nTo fix this we need to:\n- [ ] Send a proper `Cache-Control` header that tells the browser it's allowed to cache it. Something like `Cache-Control: max-age=315360000`\n- [ ] Urls that don't change, so they _can_ be cached. Right now, uploaded avatars reuse the same `UserAvatar.ident`, which means we get the same url even if we change the file. So we can't cache.\n- [ ] Remove `Set-Cookie` header on response (this may be tricky but @mattrobenolt can help). `Set-Cookie` header kills all caching.\n- [ ] Remove `Vary` header. We don't need to Vary on anything, and right now it's `Vary: Accept-Encoding, Cookie`, which basically makes caching useless. 
(this may be tricky as well, but @mattrobenolt can help)\n\n@macqueen \n\n", "before_files": [{"content": "\"\"\"\nsentry.middleware.locale\n~~~~~~~~~~~~~~~~~~~~~~~~\n\n:copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport pytz\n\nfrom django.middleware.locale import LocaleMiddleware\n\nfrom sentry.models import UserOption\nfrom sentry.utils.safe import safe_execute\n\n\nclass SentryLocaleMiddleware(LocaleMiddleware):\n def process_request(self, request):\n # No locale for static media\n # This avoids touching user session, which means we avoid\n # setting `Vary: Cookie` as a response header which will\n # break HTTP caching entirely.\n self.__is_static = request.path_info[:9] == '/_static/'\n if self.__is_static:\n return\n\n safe_execute(self.load_user_conf, request,\n _with_transaction=False)\n\n super(SentryLocaleMiddleware, self).process_request(request)\n\n def load_user_conf(self, request):\n if not request.user.is_authenticated():\n return\n\n language = UserOption.objects.get_value(\n user=request.user, project=None, key='language', default=None)\n if language:\n request.session['django_language'] = language\n\n timezone = UserOption.objects.get_value(\n user=request.user, project=None, key='timezone', default=None)\n if timezone:\n request.timezone = pytz.timezone(timezone)\n\n def process_response(self, request, response):\n # If static bound, we don't want to run the normal process_response since this\n # adds an extra `Vary: Accept-Language`. Static files don't need this and is\n # less effective for caching.\n try:\n if self.__is_static:\n return response\n except AttributeError:\n # catch ourselves in case __is_static never got set.\n # It's possible that process_request never ran.\n pass\n return super(SentryLocaleMiddleware, self).process_response(request, response)\n", "path": "src/sentry/middleware/locale.py"}, {"content": "from __future__ import absolute_import\n\nfrom PIL import Image\n\nfrom django.conf import settings\n\nfrom rest_framework import status\nfrom rest_framework.response import Response\n\nfrom sentry.api.bases.user import UserEndpoint\nfrom sentry.api.serializers import serialize\nfrom sentry.models import UserAvatar, File\nfrom sentry.utils.compat import StringIO\n\n\nMIN_DIMENSION = 256\n\nMAX_DIMENSION = 1024\n\n\nclass UserAvatarEndpoint(UserEndpoint):\n FILE_TYPE = 'avatar.file'\n\n def get(self, request, user):\n return Response(serialize(user, request.user))\n\n def is_valid_size(self, width, height):\n if width != height:\n return False\n if width < MIN_DIMENSION:\n return False\n if width > MAX_DIMENSION:\n return False\n return True\n\n def put(self, request, user):\n if user != request.user:\n return Response(status=status.HTTP_403_FORBIDDEN)\n\n photo_string = request.DATA.get('avatar_photo')\n photo = None\n if photo_string:\n photo_string = photo_string.decode('base64')\n if len(photo_string) > settings.SENTRY_MAX_AVATAR_SIZE:\n return Response(status=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE)\n with Image.open(StringIO(photo_string)) as img:\n width, height = img.size\n if not self.is_valid_size(width, height):\n return Response(status=status.HTTP_400_BAD_REQUEST)\n file_name = '%s.png' % user.id\n photo = File.objects.create(name=file_name, type=self.FILE_TYPE)\n photo.putfile(StringIO(photo_string))\n\n avatar, _ = UserAvatar.objects.get_or_create(user=user)\n if avatar.file and photo:\n avatar.file.delete()\n 
avatar.clear_cached_photos()\n if photo:\n avatar.file = photo\n\n avatar_type = request.DATA.get('avatar_type')\n\n if not avatar.file and avatar_type == 'upload':\n return Response(status=status.HTTP_400_BAD_REQUEST)\n\n if avatar_type:\n try:\n avatar.avatar_type = [i for i, n in UserAvatar.AVATAR_TYPES if n == avatar_type][0]\n except IndexError:\n return Response(status=status.HTTP_400_BAD_REQUEST)\n\n avatar.save()\n return Response(serialize(user, request.user))\n", "path": "src/sentry/api/endpoints/useravatar.py"}, {"content": "from __future__ import absolute_import\n\nfrom django.http import HttpResponse, HttpResponseBadRequest, HttpResponseNotFound\nfrom django.views.generic import View\n\nfrom sentry.models import UserAvatar\n\n\nclass UserAvatarPhotoView(View):\n\n def get_file_name(self, user):\n return '%s.png' % user.id\n\n def get(self, request, *args, **kwargs):\n avatar_id = kwargs['avatar_id']\n try:\n avatar = UserAvatar.objects.get(ident=avatar_id)\n except UserAvatar.DoesNotExist:\n return HttpResponseNotFound()\n\n photo = avatar.file\n if not photo:\n return HttpResponseNotFound()\n\n size = request.GET.get('s')\n photo_file = photo.getfile()\n if size:\n try:\n size = int(size)\n except ValueError:\n return HttpResponseBadRequest()\n else:\n photo_file = avatar.get_cached_photo(size)\n return HttpResponse(photo_file, content_type='image/png')\n", "path": "src/sentry/web/frontend/user_avatar.py"}], "after_files": [{"content": "\"\"\"\nsentry.middleware.locale\n~~~~~~~~~~~~~~~~~~~~~~~~\n\n:copyright: (c) 2010-2014 by the Sentry Team, see AUTHORS for more details.\n:license: BSD, see LICENSE for more details.\n\"\"\"\n\nfrom __future__ import absolute_import\n\nimport pytz\n\nfrom django.middleware.locale import LocaleMiddleware\n\nfrom sentry.models import UserOption\nfrom sentry.utils.safe import safe_execute\n\n\nclass SentryLocaleMiddleware(LocaleMiddleware):\n def process_request(self, request):\n # No locale for static media\n # This avoids touching user session, which means we avoid\n # setting `Vary: Cookie` as a response header which will\n # break HTTP caching entirely.\n self.__skip_caching = (request.path_info[:9] == '/_static/' or\n request.path_info[:8] == '/avatar/')\n if self.__skip_caching:\n return\n\n safe_execute(self.load_user_conf, request,\n _with_transaction=False)\n\n super(SentryLocaleMiddleware, self).process_request(request)\n\n def load_user_conf(self, request):\n if not request.user.is_authenticated():\n return\n\n language = UserOption.objects.get_value(\n user=request.user, project=None, key='language', default=None)\n if language:\n request.session['django_language'] = language\n\n timezone = UserOption.objects.get_value(\n user=request.user, project=None, key='timezone', default=None)\n if timezone:\n request.timezone = pytz.timezone(timezone)\n\n def process_response(self, request, response):\n # If static bound, we don't want to run the normal process_response since this\n # adds an extra `Vary: Accept-Language`. 
Static files don't need this and is\n # less effective for caching.\n try:\n if self.__skip_caching:\n return response\n except AttributeError:\n # catch ourselves in case __skip_caching never got set.\n # It's possible that process_request never ran.\n pass\n return super(SentryLocaleMiddleware, self).process_response(request, response)\n", "path": "src/sentry/middleware/locale.py"}, {"content": "from __future__ import absolute_import\n\nfrom uuid import uuid4\n\nfrom PIL import Image\n\nfrom django.conf import settings\n\nfrom rest_framework import status\nfrom rest_framework.response import Response\n\nfrom sentry.api.bases.user import UserEndpoint\nfrom sentry.api.serializers import serialize\nfrom sentry.models import UserAvatar, File\nfrom sentry.utils.compat import StringIO\n\n\nMIN_DIMENSION = 256\n\nMAX_DIMENSION = 1024\n\n\nclass UserAvatarEndpoint(UserEndpoint):\n FILE_TYPE = 'avatar.file'\n\n def get(self, request, user):\n return Response(serialize(user, request.user))\n\n def is_valid_size(self, width, height):\n if width != height:\n return False\n if width < MIN_DIMENSION:\n return False\n if width > MAX_DIMENSION:\n return False\n return True\n\n def put(self, request, user):\n if user != request.user:\n return Response(status=status.HTTP_403_FORBIDDEN)\n\n photo_string = request.DATA.get('avatar_photo')\n photo = None\n if photo_string:\n photo_string = photo_string.decode('base64')\n if len(photo_string) > settings.SENTRY_MAX_AVATAR_SIZE:\n return Response(status=status.HTTP_413_REQUEST_ENTITY_TOO_LARGE)\n with Image.open(StringIO(photo_string)) as img:\n width, height = img.size\n if not self.is_valid_size(width, height):\n return Response(status=status.HTTP_400_BAD_REQUEST)\n file_name = '%s.png' % user.id\n photo = File.objects.create(name=file_name, type=self.FILE_TYPE)\n photo.putfile(StringIO(photo_string))\n\n avatar, _ = UserAvatar.objects.get_or_create(user=user)\n if avatar.file and photo:\n avatar.file.delete()\n avatar.clear_cached_photos()\n if photo:\n avatar.file = photo\n avatar.ident = uuid4().hex\n\n avatar_type = request.DATA.get('avatar_type')\n\n if not avatar.file and avatar_type == 'upload':\n return Response(status=status.HTTP_400_BAD_REQUEST)\n\n if avatar_type:\n try:\n avatar.avatar_type = [i for i, n in UserAvatar.AVATAR_TYPES if n == avatar_type][0]\n except IndexError:\n return Response(status=status.HTTP_400_BAD_REQUEST)\n\n avatar.save()\n return Response(serialize(user, request.user))\n", "path": "src/sentry/api/endpoints/useravatar.py"}, {"content": "from __future__ import absolute_import\n\nfrom django.http import HttpResponse, HttpResponseBadRequest, HttpResponseNotFound\nfrom django.views.generic import View\n\nfrom sentry.models import UserAvatar\nfrom sentry.web.frontend.generic import FOREVER_CACHE\n\n\nclass UserAvatarPhotoView(View):\n\n def get_file_name(self, user):\n return '%s.png' % user.id\n\n def get(self, request, *args, **kwargs):\n avatar_id = kwargs['avatar_id']\n try:\n avatar = UserAvatar.objects.get(ident=avatar_id)\n except UserAvatar.DoesNotExist:\n return HttpResponseNotFound()\n\n photo = avatar.file\n if not photo:\n return HttpResponseNotFound()\n\n size = request.GET.get('s')\n photo_file = photo.getfile()\n if size:\n try:\n size = int(size)\n except ValueError:\n return HttpResponseBadRequest()\n else:\n photo_file = avatar.get_cached_photo(size)\n\n res = HttpResponse(photo_file, content_type='image/png')\n res['Cache-Control'] = FOREVER_CACHE\n return res\n", "path": 
"src/sentry/web/frontend/user_avatar.py"}]}
| 1,970 | 599 |
gh_patches_debug_3407 | rasdani/github-patches | git_diff | vllm-project__vllm-1855 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
benchmark_latency.py will hang when --batchsize=1 and --n=2
benchmark_latency.py will hang if setting --batchsize=1 and --n=2, 4, or 8.
Should max_num_seqs be set as args.batch_size * args.n?
https://github.com/vllm-project/vllm/blob/main/benchmarks/benchmark_latency.py#L23
--- END ISSUE ---
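For illustration only: the sketch below is not taken from the issue or from vLLM; it writes out the reporter's suggestion as code, and whether this is the accepted fix is not established here. The keyword arguments mirror the ones already used in benchmark_latency.py.
```python
# Hedged sketch of the suggestion: size max_num_seqs to the number of
# sequences actually generated (batch_size prompts * n samples per prompt)
# rather than batch_size alone.
from vllm import LLM


def build_llm(args):
    return LLM(
        model=args.model,
        tokenizer=args.tokenizer,
        tensor_parallel_size=args.tensor_parallel_size,
        max_num_seqs=args.batch_size * args.n,  # was: args.batch_size
        trust_remote_code=args.trust_remote_code,
        dtype=args.dtype,
    )
```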
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `benchmarks/benchmark_latency.py`
Content:
```
1 """Benchmark the latency of processing a single batch of requests."""
2 import argparse
3 import time
4
5 import numpy as np
6 import torch
7 from tqdm import tqdm
8
9 from vllm import LLM, SamplingParams
10
11
12 def main(args: argparse.Namespace):
13 print(args)
14
15 # NOTE(woosuk): If the request cannot be processed in a single batch,
16 # the engine will automatically process the request in multiple batches.
17 llm = LLM(
18 model=args.model,
19 tokenizer=args.tokenizer,
20 quantization=args.quantization,
21 tensor_parallel_size=args.tensor_parallel_size,
22 max_num_seqs=args.batch_size,
23 trust_remote_code=args.trust_remote_code,
24 dtype=args.dtype,
25 )
26
27 sampling_params = SamplingParams(
28 n=args.n,
29 temperature=0.0 if args.use_beam_search else 1.0,
30 top_p=1.0,
31 use_beam_search=args.use_beam_search,
32 ignore_eos=True,
33 max_tokens=args.output_len,
34 )
35 print(sampling_params)
36 dummy_prompt_token_ids = [[0] * args.input_len] * args.batch_size
37
38 def run_to_completion(profile: bool = False):
39 if profile:
40 with torch.profiler.profile(activities=[
41 torch.profiler.ProfilerActivity.CPU,
42 torch.profiler.ProfilerActivity.CUDA,
43 ]) as p:
44 llm.generate(prompt_token_ids=dummy_prompt_token_ids,
45 sampling_params=sampling_params,
46 use_tqdm=False)
47 print(p.key_averages())
48 else:
49 start_time = time.perf_counter()
50 llm.generate(prompt_token_ids=dummy_prompt_token_ids,
51 sampling_params=sampling_params,
52 use_tqdm=False)
53 end_time = time.perf_counter()
54 latency = end_time - start_time
55 return latency
56
57 print("Warming up...")
58 run_to_completion(profile=False)
59
60 if args.profile:
61 print("Profiling...")
62 run_to_completion(profile=True)
63 return
64
65 # Benchmark.
66 latencies = []
67 for _ in tqdm(range(args.num_iters), desc="Profiling iterations"):
68 latencies.append(run_to_completion(profile=False))
69 print(f'Avg latency: {np.mean(latencies)} seconds')
70
71
72 if __name__ == '__main__':
73 parser = argparse.ArgumentParser(
74 description='Benchmark the latency of processing a single batch of '
75 'requests till completion.')
76 parser.add_argument('--model', type=str, default='facebook/opt-125m')
77 parser.add_argument('--tokenizer', type=str, default=None)
78 parser.add_argument('--quantization',
79 '-q',
80 choices=['awq', 'squeezellm', None],
81 default=None)
82 parser.add_argument('--tensor-parallel-size', '-tp', type=int, default=1)
83 parser.add_argument('--input-len', type=int, default=32)
84 parser.add_argument('--output-len', type=int, default=128)
85 parser.add_argument('--batch-size', type=int, default=8)
86 parser.add_argument('--n',
87 type=int,
88 default=1,
89 help='Number of generated sequences per prompt.')
90 parser.add_argument('--use-beam-search', action='store_true')
91 parser.add_argument('--num-iters',
92 type=int,
93 default=3,
94 help='Number of iterations to run.')
95 parser.add_argument('--trust-remote-code',
96 action='store_true',
97 help='trust remote code from huggingface')
98 parser.add_argument(
99 '--dtype',
100 type=str,
101 default='auto',
102 choices=['auto', 'half', 'float16', 'bfloat16', 'float', 'float32'],
103 help='data type for model weights and activations. '
104 'The "auto" option will use FP16 precision '
105 'for FP32 and FP16 models, and BF16 precision '
106 'for BF16 models.')
107 parser.add_argument(
108 '--profile',
109 action='store_true',
110 help='profile the generation process of a single batch')
111 args = parser.parse_args()
112 main(args)
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/benchmarks/benchmark_latency.py b/benchmarks/benchmark_latency.py
--- a/benchmarks/benchmark_latency.py
+++ b/benchmarks/benchmark_latency.py
@@ -19,7 +19,6 @@
tokenizer=args.tokenizer,
quantization=args.quantization,
tensor_parallel_size=args.tensor_parallel_size,
- max_num_seqs=args.batch_size,
trust_remote_code=args.trust_remote_code,
dtype=args.dtype,
)
|
{"golden_diff": "diff --git a/benchmarks/benchmark_latency.py b/benchmarks/benchmark_latency.py\n--- a/benchmarks/benchmark_latency.py\n+++ b/benchmarks/benchmark_latency.py\n@@ -19,7 +19,6 @@\n tokenizer=args.tokenizer,\n quantization=args.quantization,\n tensor_parallel_size=args.tensor_parallel_size,\n- max_num_seqs=args.batch_size,\n trust_remote_code=args.trust_remote_code,\n dtype=args.dtype,\n )\n", "issue": "benchmark_latency.py will hang when --batchsize=1 and --n=2\nbenchmark_latency.py will hang if setting --batchsize=1 and --n=2or4or8.\r\nshould max_num_seqs been set as args.batch_size * args.n \uff1f\r\nhttps://github.com/vllm-project/vllm/blob/main/benchmarks/benchmark_latency.py#L23\n", "before_files": [{"content": "\"\"\"Benchmark the latency of processing a single batch of requests.\"\"\"\nimport argparse\nimport time\n\nimport numpy as np\nimport torch\nfrom tqdm import tqdm\n\nfrom vllm import LLM, SamplingParams\n\n\ndef main(args: argparse.Namespace):\n print(args)\n\n # NOTE(woosuk): If the request cannot be processed in a single batch,\n # the engine will automatically process the request in multiple batches.\n llm = LLM(\n model=args.model,\n tokenizer=args.tokenizer,\n quantization=args.quantization,\n tensor_parallel_size=args.tensor_parallel_size,\n max_num_seqs=args.batch_size,\n trust_remote_code=args.trust_remote_code,\n dtype=args.dtype,\n )\n\n sampling_params = SamplingParams(\n n=args.n,\n temperature=0.0 if args.use_beam_search else 1.0,\n top_p=1.0,\n use_beam_search=args.use_beam_search,\n ignore_eos=True,\n max_tokens=args.output_len,\n )\n print(sampling_params)\n dummy_prompt_token_ids = [[0] * args.input_len] * args.batch_size\n\n def run_to_completion(profile: bool = False):\n if profile:\n with torch.profiler.profile(activities=[\n torch.profiler.ProfilerActivity.CPU,\n torch.profiler.ProfilerActivity.CUDA,\n ]) as p:\n llm.generate(prompt_token_ids=dummy_prompt_token_ids,\n sampling_params=sampling_params,\n use_tqdm=False)\n print(p.key_averages())\n else:\n start_time = time.perf_counter()\n llm.generate(prompt_token_ids=dummy_prompt_token_ids,\n sampling_params=sampling_params,\n use_tqdm=False)\n end_time = time.perf_counter()\n latency = end_time - start_time\n return latency\n\n print(\"Warming up...\")\n run_to_completion(profile=False)\n\n if args.profile:\n print(\"Profiling...\")\n run_to_completion(profile=True)\n return\n\n # Benchmark.\n latencies = []\n for _ in tqdm(range(args.num_iters), desc=\"Profiling iterations\"):\n latencies.append(run_to_completion(profile=False))\n print(f'Avg latency: {np.mean(latencies)} seconds')\n\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser(\n description='Benchmark the latency of processing a single batch of '\n 'requests till completion.')\n parser.add_argument('--model', type=str, default='facebook/opt-125m')\n parser.add_argument('--tokenizer', type=str, default=None)\n parser.add_argument('--quantization',\n '-q',\n choices=['awq', 'squeezellm', None],\n default=None)\n parser.add_argument('--tensor-parallel-size', '-tp', type=int, default=1)\n parser.add_argument('--input-len', type=int, default=32)\n parser.add_argument('--output-len', type=int, default=128)\n parser.add_argument('--batch-size', type=int, default=8)\n parser.add_argument('--n',\n type=int,\n default=1,\n help='Number of generated sequences per prompt.')\n parser.add_argument('--use-beam-search', action='store_true')\n parser.add_argument('--num-iters',\n type=int,\n default=3,\n help='Number of iterations to 
run.')\n parser.add_argument('--trust-remote-code',\n action='store_true',\n help='trust remote code from huggingface')\n parser.add_argument(\n '--dtype',\n type=str,\n default='auto',\n choices=['auto', 'half', 'float16', 'bfloat16', 'float', 'float32'],\n help='data type for model weights and activations. '\n 'The \"auto\" option will use FP16 precision '\n 'for FP32 and FP16 models, and BF16 precision '\n 'for BF16 models.')\n parser.add_argument(\n '--profile',\n action='store_true',\n help='profile the generation process of a single batch')\n args = parser.parse_args()\n main(args)\n", "path": "benchmarks/benchmark_latency.py"}], "after_files": [{"content": "\"\"\"Benchmark the latency of processing a single batch of requests.\"\"\"\nimport argparse\nimport time\n\nimport numpy as np\nimport torch\nfrom tqdm import tqdm\n\nfrom vllm import LLM, SamplingParams\n\n\ndef main(args: argparse.Namespace):\n print(args)\n\n # NOTE(woosuk): If the request cannot be processed in a single batch,\n # the engine will automatically process the request in multiple batches.\n llm = LLM(\n model=args.model,\n tokenizer=args.tokenizer,\n quantization=args.quantization,\n tensor_parallel_size=args.tensor_parallel_size,\n trust_remote_code=args.trust_remote_code,\n dtype=args.dtype,\n )\n\n sampling_params = SamplingParams(\n n=args.n,\n temperature=0.0 if args.use_beam_search else 1.0,\n top_p=1.0,\n use_beam_search=args.use_beam_search,\n ignore_eos=True,\n max_tokens=args.output_len,\n )\n print(sampling_params)\n dummy_prompt_token_ids = [[0] * args.input_len] * args.batch_size\n\n def run_to_completion(profile: bool = False):\n if profile:\n with torch.profiler.profile(activities=[\n torch.profiler.ProfilerActivity.CPU,\n torch.profiler.ProfilerActivity.CUDA,\n ]) as p:\n llm.generate(prompt_token_ids=dummy_prompt_token_ids,\n sampling_params=sampling_params,\n use_tqdm=False)\n print(p.key_averages())\n else:\n start_time = time.perf_counter()\n llm.generate(prompt_token_ids=dummy_prompt_token_ids,\n sampling_params=sampling_params,\n use_tqdm=False)\n end_time = time.perf_counter()\n latency = end_time - start_time\n return latency\n\n print(\"Warming up...\")\n run_to_completion(profile=False)\n\n if args.profile:\n print(\"Profiling...\")\n run_to_completion(profile=True)\n return\n\n # Benchmark.\n latencies = []\n for _ in tqdm(range(args.num_iters), desc=\"Profiling iterations\"):\n latencies.append(run_to_completion(profile=False))\n print(f'Avg latency: {np.mean(latencies)} seconds')\n\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser(\n description='Benchmark the latency of processing a single batch of '\n 'requests till completion.')\n parser.add_argument('--model', type=str, default='facebook/opt-125m')\n parser.add_argument('--tokenizer', type=str, default=None)\n parser.add_argument('--quantization',\n '-q',\n choices=['awq', 'squeezellm', None],\n default=None)\n parser.add_argument('--tensor-parallel-size', '-tp', type=int, default=1)\n parser.add_argument('--input-len', type=int, default=32)\n parser.add_argument('--output-len', type=int, default=128)\n parser.add_argument('--batch-size', type=int, default=8)\n parser.add_argument('--n',\n type=int,\n default=1,\n help='Number of generated sequences per prompt.')\n parser.add_argument('--use-beam-search', action='store_true')\n parser.add_argument('--num-iters',\n type=int,\n default=3,\n help='Number of iterations to run.')\n parser.add_argument('--trust-remote-code',\n action='store_true',\n help='trust remote 
code from huggingface')\n parser.add_argument(\n '--dtype',\n type=str,\n default='auto',\n choices=['auto', 'half', 'float16', 'bfloat16', 'float', 'float32'],\n help='data type for model weights and activations. '\n 'The \"auto\" option will use FP16 precision '\n 'for FP32 and FP16 models, and BF16 precision '\n 'for BF16 models.')\n parser.add_argument(\n '--profile',\n action='store_true',\n help='profile the generation process of a single batch')\n args = parser.parse_args()\n main(args)\n", "path": "benchmarks/benchmark_latency.py"}]}
| 1,439 | 100 |
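For context on the patch above: with `max_num_seqs` pinned to `args.batch_size`, a run such as `--batch-size 1 --n 2` asks each prompt to fan out into two sequences while the scheduler is capped at one, which appears to be why generation never completes; dropping the override lets the engine fall back to its default capacity. A minimal sketch of the patched construction — model name, input length and sampling values are illustrative and simply mirror the benchmark script:

```python
from vllm import LLM, SamplingParams

# No max_num_seqs override after the patch, so the engine keeps its default
# scheduling capacity even when each prompt expands into n sequences.
llm = LLM(model="facebook/opt-125m", tensor_parallel_size=1, dtype="auto")

sampling = SamplingParams(n=2, temperature=1.0, top_p=1.0,
                          ignore_eos=True, max_tokens=128)
llm.generate(prompt_token_ids=[[0] * 32], sampling_params=sampling)
```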
gh_patches_debug_2566
|
rasdani/github-patches
|
git_diff
|
getpelican__pelican-2632
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Markdown as an (optional) dependency
Since its inception, this project has taken the collective position that since not everyone uses Markdown, the `markdown` package should not be a dependency of the project and should instead be manually installed by users who want to use Markdown.
On the other hand, the `docutils` package that parses reStructuredText markup has always been a dependency. Particularly given Markdown's widespread popularity, I cannot come up with a compelling argument to support the continuation of this discrepancy.
This project is littered with issues caused by environments with missing `markdown` packages. Many folks have encountered frustration when trying to build their Markdown-content sites, only to be met with an (often inscrutable) error message. And that's just the folks who have taken the time to file issues! Imagine how many threw up their hands and just moved on to another solution…
Adding Markdown as a dependency would have significant benefits, including simplifying installation instructions, reducing frustration, and minimizing support request load. So to take another approach at this topic… What are the reasons for *not* adding Markdown as a dependency, and do you think they are sufficiently important to outweigh the concerns raised above?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 import sys
3 from io import open
4 from os import walk
5 from os.path import join, relpath
6
7 from setuptools import setup
8
9
10 version = "4.1.2"
11
12 requires = ['feedgenerator >= 1.9', 'jinja2 >= 2.7', 'pygments', 'docutils',
13 'pytz >= 0a', 'blinker', 'unidecode', 'six >= 1.4',
14 'python-dateutil']
15
16 entry_points = {
17 'console_scripts': [
18 'pelican = pelican.__main__:main',
19 'pelican-import = pelican.tools.pelican_import:main',
20 'pelican-quickstart = pelican.tools.pelican_quickstart:main',
21 'pelican-themes = pelican.tools.pelican_themes:main'
22 ]
23 }
24
25 README = open('README.rst', encoding='utf-8').read()
26 CHANGELOG = open('docs/changelog.rst', encoding='utf-8').read()
27
28 description = u'\n'.join([README, CHANGELOG])
29 if sys.version_info.major < 3:
30 description = description.encode('utf-8')
31
32 setup(
33 name='pelican',
34 version=version,
35 url='https://getpelican.com/',
36 author='Justin Mayer',
37 author_email='[email protected]',
38 description="Static site generator supporting reStructuredText and "
39 "Markdown source content.",
40 project_urls={
41 'Documentation': 'https://docs.getpelican.com/',
42 'Funding': 'https://donate.getpelican.com/',
43 'Source': 'https://github.com/getpelican/pelican',
44 'Tracker': 'https://github.com/getpelican/pelican/issues',
45 },
46 keywords='static web site generator SSG reStructuredText Markdown',
47 license='AGPLv3',
48 long_description=description,
49 packages=['pelican', 'pelican.tools'],
50 package_data={
51 # we manually collect the package data, as opposed to using,
52 # include_package_data=True because we don't want the tests to be
53 # included automatically as package data (MANIFEST.in is too greedy)
54 'pelican': [relpath(join(root, name), 'pelican')
55 for root, _, names in walk(join('pelican', 'themes'))
56 for name in names],
57 'pelican.tools': [relpath(join(root, name), join('pelican', 'tools'))
58 for root, _, names in walk(join('pelican',
59 'tools',
60 'templates'))
61 for name in names],
62 },
63 install_requires=requires,
64 entry_points=entry_points,
65 classifiers=[
66 'Development Status :: 5 - Production/Stable',
67 'Environment :: Console',
68 'Framework :: Pelican',
69 'License :: OSI Approved :: GNU Affero General Public License v3',
70 'Operating System :: OS Independent',
71 'Programming Language :: Python :: 2',
72 'Programming Language :: Python :: 2.7',
73 'Programming Language :: Python :: 3',
74 'Programming Language :: Python :: 3.5',
75 'Programming Language :: Python :: 3.6',
76 'Programming Language :: Python :: 3.7',
77 'Programming Language :: Python :: Implementation :: CPython',
78 'Topic :: Internet :: WWW/HTTP',
79 'Topic :: Software Development :: Libraries :: Python Modules',
80 ],
81 test_suite='pelican.tests',
82 )
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -61,6 +61,9 @@
for name in names],
},
install_requires=requires,
+ extras_require={
+ 'Markdown': ['markdown~=3.1.1']
+ },
entry_points=entry_points,
classifiers=[
'Development Status :: 5 - Production/Stable',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -61,6 +61,9 @@\n for name in names],\n },\n install_requires=requires,\n+ extras_require={\n+ 'Markdown': ['markdown~=3.1.1']\n+ },\n entry_points=entry_points,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n", "issue": "Add Markdown as an (optional) dependency\nSince its inception, this project has taken the collective position that since not everyone uses Markdown, the `markdown` package should not be a dependency of the project and should instead be manually installed by users who want to use Markdown.\r\n\r\nOn the other hand, the `docutils` package that parses reStructuredText markup has always been a dependency. Particularly given Markdown's widespread popularity, I cannot come up with a compelling argument to support the continuation of this discrepancy.\r\n\r\nThis project is littered with issues caused by environments with missing `markdown` packages. Many folks have encountered frustration when trying to build their Markdown-content sites, only to be met with an (often inscrutable) error message. And that's just the folks who have taken the time to file issues! Imagine how many threw up their hands and just moved on to another solution\u2026\r\n\r\nAdding Markdown as a dependency would have significant benefits, including simplifying installation instructions, reducing frustration, and minimizing support request load. So to take another approach at this topic\u2026\u00a0What are the reasons for *not* adding Markdown as a dependency, and do you think they are sufficiently important to out-weigh the concerns raised above?\n", "before_files": [{"content": "#!/usr/bin/env python\nimport sys\nfrom io import open\nfrom os import walk\nfrom os.path import join, relpath\n\nfrom setuptools import setup\n\n\nversion = \"4.1.2\"\n\nrequires = ['feedgenerator >= 1.9', 'jinja2 >= 2.7', 'pygments', 'docutils',\n 'pytz >= 0a', 'blinker', 'unidecode', 'six >= 1.4',\n 'python-dateutil']\n\nentry_points = {\n 'console_scripts': [\n 'pelican = pelican.__main__:main',\n 'pelican-import = pelican.tools.pelican_import:main',\n 'pelican-quickstart = pelican.tools.pelican_quickstart:main',\n 'pelican-themes = pelican.tools.pelican_themes:main'\n ]\n}\n\nREADME = open('README.rst', encoding='utf-8').read()\nCHANGELOG = open('docs/changelog.rst', encoding='utf-8').read()\n\ndescription = u'\\n'.join([README, CHANGELOG])\nif sys.version_info.major < 3:\n description = description.encode('utf-8')\n\nsetup(\n name='pelican',\n version=version,\n url='https://getpelican.com/',\n author='Justin Mayer',\n author_email='[email protected]',\n description=\"Static site generator supporting reStructuredText and \"\n \"Markdown source content.\",\n project_urls={\n 'Documentation': 'https://docs.getpelican.com/',\n 'Funding': 'https://donate.getpelican.com/',\n 'Source': 'https://github.com/getpelican/pelican',\n 'Tracker': 'https://github.com/getpelican/pelican/issues',\n },\n keywords='static web site generator SSG reStructuredText Markdown',\n license='AGPLv3',\n long_description=description,\n packages=['pelican', 'pelican.tools'],\n package_data={\n # we manually collect the package data, as opposed to using,\n # include_package_data=True because we don't want the tests to be\n # included automatically as package data (MANIFEST.in is too greedy)\n 'pelican': [relpath(join(root, name), 'pelican')\n for root, _, names in walk(join('pelican', 'themes'))\n for name in names],\n 'pelican.tools': 
[relpath(join(root, name), join('pelican', 'tools'))\n for root, _, names in walk(join('pelican',\n 'tools',\n 'templates'))\n for name in names],\n },\n install_requires=requires,\n entry_points=entry_points,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Framework :: Pelican',\n 'License :: OSI Approved :: GNU Affero General Public License v3',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n test_suite='pelican.tests',\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport sys\nfrom io import open\nfrom os import walk\nfrom os.path import join, relpath\n\nfrom setuptools import setup\n\n\nversion = \"4.1.2\"\n\nrequires = ['feedgenerator >= 1.9', 'jinja2 >= 2.7', 'pygments', 'docutils',\n 'pytz >= 0a', 'blinker', 'unidecode', 'six >= 1.4',\n 'python-dateutil']\n\nentry_points = {\n 'console_scripts': [\n 'pelican = pelican.__main__:main',\n 'pelican-import = pelican.tools.pelican_import:main',\n 'pelican-quickstart = pelican.tools.pelican_quickstart:main',\n 'pelican-themes = pelican.tools.pelican_themes:main'\n ]\n}\n\nREADME = open('README.rst', encoding='utf-8').read()\nCHANGELOG = open('docs/changelog.rst', encoding='utf-8').read()\n\ndescription = u'\\n'.join([README, CHANGELOG])\nif sys.version_info.major < 3:\n description = description.encode('utf-8')\n\nsetup(\n name='pelican',\n version=version,\n url='https://getpelican.com/',\n author='Justin Mayer',\n author_email='[email protected]',\n description=\"Static site generator supporting reStructuredText and \"\n \"Markdown source content.\",\n project_urls={\n 'Documentation': 'https://docs.getpelican.com/',\n 'Funding': 'https://donate.getpelican.com/',\n 'Source': 'https://github.com/getpelican/pelican',\n 'Tracker': 'https://github.com/getpelican/pelican/issues',\n },\n keywords='static web site generator SSG reStructuredText Markdown',\n license='AGPLv3',\n long_description=description,\n packages=['pelican', 'pelican.tools'],\n package_data={\n # we manually collect the package data, as opposed to using,\n # include_package_data=True because we don't want the tests to be\n # included automatically as package data (MANIFEST.in is too greedy)\n 'pelican': [relpath(join(root, name), 'pelican')\n for root, _, names in walk(join('pelican', 'themes'))\n for name in names],\n 'pelican.tools': [relpath(join(root, name), join('pelican', 'tools'))\n for root, _, names in walk(join('pelican',\n 'tools',\n 'templates'))\n for name in names],\n },\n install_requires=requires,\n extras_require={\n 'Markdown': ['markdown~=3.1.1']\n },\n entry_points=entry_points,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Framework :: Pelican',\n 'License :: OSI Approved :: GNU Affero General Public License v3',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 
'Programming Language :: Python :: Implementation :: CPython',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n test_suite='pelican.tests',\n)\n", "path": "setup.py"}]}
| 1,393 | 90 |
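Since the fix above registers Markdown under `extras_require` rather than as a hard dependency, users who want Markdown content would opt in with `pip install "pelican[Markdown]"` (extra name as spelled in the diff), while reStructuredText-only installs keep the same footprint as before.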
gh_patches_debug_26678
|
rasdani/github-patches
|
git_diff
|
archlinux__archinstall-2071
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Localizations revert to defaults on entry of "Locales" menu
If localizations are changed from the defaults they will revert back to the defaults upon entry of the _Locales_ menu.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `archinstall/lib/locale/locale_menu.py`
Content:
```
1 from dataclasses import dataclass
2 from typing import Dict, Any, TYPE_CHECKING, Optional
3
4 from .locale import set_kb_layout, list_keyboard_languages, list_locales
5 from ..menu import Selector, AbstractSubMenu, MenuSelectionType, Menu
6
7 if TYPE_CHECKING:
8 _: Any
9
10
11 @dataclass
12 class LocaleConfiguration:
13 kb_layout: str
14 sys_lang: str
15 sys_enc: str
16
17 @staticmethod
18 def default() -> 'LocaleConfiguration':
19 return LocaleConfiguration('us', 'en_US', 'UTF-8')
20
21 def json(self) -> Dict[str, str]:
22 return {
23 'kb_layout': self.kb_layout,
24 'sys_lang': self.sys_lang,
25 'sys_enc': self.sys_enc
26 }
27
28 @classmethod
29 def _load_config(cls, config: 'LocaleConfiguration', args: Dict[str, Any]) -> 'LocaleConfiguration':
30 if 'sys_lang' in args:
31 config.sys_lang = args['sys_lang']
32 if 'sys_enc' in args:
33 config.sys_enc = args['sys_enc']
34 if 'kb_layout' in args:
35 config.kb_layout = args['kb_layout']
36
37 return config
38
39 @classmethod
40 def parse_arg(cls, args: Dict[str, Any]) -> 'LocaleConfiguration':
41 default = cls.default()
42
43 if 'locale_config' in args:
44 default = cls._load_config(default, args['locale_config'])
45 else:
46 default = cls._load_config(default, args)
47
48 return default
49
50
51 class LocaleMenu(AbstractSubMenu):
52 def __init__(
53 self,
54 data_store: Dict[str, Any],
55 locele_conf: LocaleConfiguration
56 ):
57 self._preset = locele_conf
58 super().__init__(data_store=data_store)
59
60 def setup_selection_menu_options(self):
61 self._menu_options['keyboard-layout'] = \
62 Selector(
63 _('Keyboard layout'),
64 lambda preset: self._select_kb_layout(preset),
65 default='us',
66 enabled=True)
67 self._menu_options['sys-language'] = \
68 Selector(
69 _('Locale language'),
70 lambda preset: select_locale_lang(preset),
71 default='en_US',
72 enabled=True)
73 self._menu_options['sys-encoding'] = \
74 Selector(
75 _('Locale encoding'),
76 lambda preset: select_locale_enc(preset),
77 default='UTF-8',
78 enabled=True)
79
80 def run(self, allow_reset: bool = True) -> LocaleConfiguration:
81 super().run(allow_reset=allow_reset)
82
83 return LocaleConfiguration(
84 self._data_store['keyboard-layout'],
85 self._data_store['sys-language'],
86 self._data_store['sys-encoding']
87 )
88
89 def _select_kb_layout(self, preset: Optional[str]) -> Optional[str]:
90 kb_lang = select_kb_layout(preset)
91 if kb_lang:
92 set_kb_layout(kb_lang)
93 return kb_lang
94
95
96 def select_locale_lang(preset: Optional[str] = None) -> Optional[str]:
97 locales = list_locales()
98 locale_lang = set([locale.split()[0] for locale in locales])
99
100 choice = Menu(
101 _('Choose which locale language to use'),
102 list(locale_lang),
103 sort=True,
104 preset_values=preset
105 ).run()
106
107 match choice.type_:
108 case MenuSelectionType.Selection: return choice.single_value
109 case MenuSelectionType.Skip: return preset
110
111 return None
112
113
114 def select_locale_enc(preset: Optional[str] = None) -> Optional[str]:
115 locales = list_locales()
116 locale_enc = set([locale.split()[1] for locale in locales])
117
118 choice = Menu(
119 _('Choose which locale encoding to use'),
120 list(locale_enc),
121 sort=True,
122 preset_values=preset
123 ).run()
124
125 match choice.type_:
126 case MenuSelectionType.Selection: return choice.single_value
127 case MenuSelectionType.Skip: return preset
128
129 return None
130
131
132 def select_kb_layout(preset: Optional[str] = None) -> Optional[str]:
133 """
134 Asks the user to select a language
135 Usually this is combined with :ref:`archinstall.list_keyboard_languages`.
136
137 :return: The language/dictionary key of the selected language
138 :rtype: str
139 """
140 kb_lang = list_keyboard_languages()
141 # sort alphabetically and then by length
142 sorted_kb_lang = sorted(sorted(list(kb_lang)), key=len)
143
144 choice = Menu(
145 _('Select keyboard layout'),
146 sorted_kb_lang,
147 preset_values=preset,
148 sort=False
149 ).run()
150
151 match choice.type_:
152 case MenuSelectionType.Skip: return preset
153 case MenuSelectionType.Selection: return choice.single_value
154
155 return None
156
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/archinstall/lib/locale/locale_menu.py b/archinstall/lib/locale/locale_menu.py
--- a/archinstall/lib/locale/locale_menu.py
+++ b/archinstall/lib/locale/locale_menu.py
@@ -52,9 +52,9 @@
def __init__(
self,
data_store: Dict[str, Any],
- locele_conf: LocaleConfiguration
+ locale_conf: LocaleConfiguration
):
- self._preset = locele_conf
+ self._preset = locale_conf
super().__init__(data_store=data_store)
def setup_selection_menu_options(self):
@@ -62,19 +62,19 @@
Selector(
_('Keyboard layout'),
lambda preset: self._select_kb_layout(preset),
- default='us',
+ default=self._preset.kb_layout,
enabled=True)
self._menu_options['sys-language'] = \
Selector(
_('Locale language'),
lambda preset: select_locale_lang(preset),
- default='en_US',
+ default=self._preset.sys_lang,
enabled=True)
self._menu_options['sys-encoding'] = \
Selector(
_('Locale encoding'),
lambda preset: select_locale_enc(preset),
- default='UTF-8',
+ default=self._preset.sys_enc,
enabled=True)
def run(self, allow_reset: bool = True) -> LocaleConfiguration:
|
{"golden_diff": "diff --git a/archinstall/lib/locale/locale_menu.py b/archinstall/lib/locale/locale_menu.py\n--- a/archinstall/lib/locale/locale_menu.py\n+++ b/archinstall/lib/locale/locale_menu.py\n@@ -52,9 +52,9 @@\n \tdef __init__(\n \t\tself,\n \t\tdata_store: Dict[str, Any],\n-\t\tlocele_conf: LocaleConfiguration\n+\t\tlocale_conf: LocaleConfiguration\n \t):\n-\t\tself._preset = locele_conf\n+\t\tself._preset = locale_conf\n \t\tsuper().__init__(data_store=data_store)\n \n \tdef setup_selection_menu_options(self):\n@@ -62,19 +62,19 @@\n \t\t\tSelector(\n \t\t\t\t_('Keyboard layout'),\n \t\t\t\tlambda preset: self._select_kb_layout(preset),\n-\t\t\t\tdefault='us',\n+\t\t\t\tdefault=self._preset.kb_layout,\n \t\t\t\tenabled=True)\n \t\tself._menu_options['sys-language'] = \\\n \t\t\tSelector(\n \t\t\t\t_('Locale language'),\n \t\t\t\tlambda preset: select_locale_lang(preset),\n-\t\t\t\tdefault='en_US',\n+\t\t\t\tdefault=self._preset.sys_lang,\n \t\t\t\tenabled=True)\n \t\tself._menu_options['sys-encoding'] = \\\n \t\t\tSelector(\n \t\t\t\t_('Locale encoding'),\n \t\t\t\tlambda preset: select_locale_enc(preset),\n-\t\t\t\tdefault='UTF-8',\n+\t\t\t\tdefault=self._preset.sys_enc,\n \t\t\t\tenabled=True)\n \n \tdef run(self, allow_reset: bool = True) -> LocaleConfiguration:\n", "issue": "Localizations revert to defaults on entry of \"Locales\" menu\nIf localizations are changed from the defaults they will revert back to the defaults upon entry of the _Locales_ menu.\n", "before_files": [{"content": "from dataclasses import dataclass\nfrom typing import Dict, Any, TYPE_CHECKING, Optional\n\nfrom .locale import set_kb_layout, list_keyboard_languages, list_locales\nfrom ..menu import Selector, AbstractSubMenu, MenuSelectionType, Menu\n\nif TYPE_CHECKING:\n\t_: Any\n\n\n@dataclass\nclass LocaleConfiguration:\n\tkb_layout: str\n\tsys_lang: str\n\tsys_enc: str\n\n\t@staticmethod\n\tdef default() -> 'LocaleConfiguration':\n\t\treturn LocaleConfiguration('us', 'en_US', 'UTF-8')\n\n\tdef json(self) -> Dict[str, str]:\n\t\treturn {\n\t\t\t'kb_layout': self.kb_layout,\n\t\t\t'sys_lang': self.sys_lang,\n\t\t\t'sys_enc': self.sys_enc\n\t\t}\n\n\t@classmethod\n\tdef _load_config(cls, config: 'LocaleConfiguration', args: Dict[str, Any]) -> 'LocaleConfiguration':\n\t\tif 'sys_lang' in args:\n\t\t\tconfig.sys_lang = args['sys_lang']\n\t\tif 'sys_enc' in args:\n\t\t\tconfig.sys_enc = args['sys_enc']\n\t\tif 'kb_layout' in args:\n\t\t\tconfig.kb_layout = args['kb_layout']\n\n\t\treturn config\n\n\t@classmethod\n\tdef parse_arg(cls, args: Dict[str, Any]) -> 'LocaleConfiguration':\n\t\tdefault = cls.default()\n\n\t\tif 'locale_config' in args:\n\t\t\tdefault = cls._load_config(default, args['locale_config'])\n\t\telse:\n\t\t\tdefault = cls._load_config(default, args)\n\n\t\treturn default\n\n\nclass LocaleMenu(AbstractSubMenu):\n\tdef __init__(\n\t\tself,\n\t\tdata_store: Dict[str, Any],\n\t\tlocele_conf: LocaleConfiguration\n\t):\n\t\tself._preset = locele_conf\n\t\tsuper().__init__(data_store=data_store)\n\n\tdef setup_selection_menu_options(self):\n\t\tself._menu_options['keyboard-layout'] = \\\n\t\t\tSelector(\n\t\t\t\t_('Keyboard layout'),\n\t\t\t\tlambda preset: self._select_kb_layout(preset),\n\t\t\t\tdefault='us',\n\t\t\t\tenabled=True)\n\t\tself._menu_options['sys-language'] = \\\n\t\t\tSelector(\n\t\t\t\t_('Locale language'),\n\t\t\t\tlambda preset: select_locale_lang(preset),\n\t\t\t\tdefault='en_US',\n\t\t\t\tenabled=True)\n\t\tself._menu_options['sys-encoding'] = 
\\\n\t\t\tSelector(\n\t\t\t\t_('Locale encoding'),\n\t\t\t\tlambda preset: select_locale_enc(preset),\n\t\t\t\tdefault='UTF-8',\n\t\t\t\tenabled=True)\n\n\tdef run(self, allow_reset: bool = True) -> LocaleConfiguration:\n\t\tsuper().run(allow_reset=allow_reset)\n\n\t\treturn LocaleConfiguration(\n\t\t\tself._data_store['keyboard-layout'],\n\t\t\tself._data_store['sys-language'],\n\t\t\tself._data_store['sys-encoding']\n\t\t)\n\n\tdef _select_kb_layout(self, preset: Optional[str]) -> Optional[str]:\n\t\tkb_lang = select_kb_layout(preset)\n\t\tif kb_lang:\n\t\t\tset_kb_layout(kb_lang)\n\t\treturn kb_lang\n\n\ndef select_locale_lang(preset: Optional[str] = None) -> Optional[str]:\n\tlocales = list_locales()\n\tlocale_lang = set([locale.split()[0] for locale in locales])\n\n\tchoice = Menu(\n\t\t_('Choose which locale language to use'),\n\t\tlist(locale_lang),\n\t\tsort=True,\n\t\tpreset_values=preset\n\t).run()\n\n\tmatch choice.type_:\n\t\tcase MenuSelectionType.Selection: return choice.single_value\n\t\tcase MenuSelectionType.Skip: return preset\n\n\treturn None\n\n\ndef select_locale_enc(preset: Optional[str] = None) -> Optional[str]:\n\tlocales = list_locales()\n\tlocale_enc = set([locale.split()[1] for locale in locales])\n\n\tchoice = Menu(\n\t\t_('Choose which locale encoding to use'),\n\t\tlist(locale_enc),\n\t\tsort=True,\n\t\tpreset_values=preset\n\t).run()\n\n\tmatch choice.type_:\n\t\tcase MenuSelectionType.Selection: return choice.single_value\n\t\tcase MenuSelectionType.Skip: return preset\n\n\treturn None\n\n\ndef select_kb_layout(preset: Optional[str] = None) -> Optional[str]:\n\t\"\"\"\n\tAsks the user to select a language\n\tUsually this is combined with :ref:`archinstall.list_keyboard_languages`.\n\n\t:return: The language/dictionary key of the selected language\n\t:rtype: str\n\t\"\"\"\n\tkb_lang = list_keyboard_languages()\n\t# sort alphabetically and then by length\n\tsorted_kb_lang = sorted(sorted(list(kb_lang)), key=len)\n\n\tchoice = Menu(\n\t\t_('Select keyboard layout'),\n\t\tsorted_kb_lang,\n\t\tpreset_values=preset,\n\t\tsort=False\n\t).run()\n\n\tmatch choice.type_:\n\t\tcase MenuSelectionType.Skip: return preset\n\t\tcase MenuSelectionType.Selection: return choice.single_value\n\n\treturn None\n", "path": "archinstall/lib/locale/locale_menu.py"}], "after_files": [{"content": "from dataclasses import dataclass\nfrom typing import Dict, Any, TYPE_CHECKING, Optional\n\nfrom .locale import set_kb_layout, list_keyboard_languages, list_locales\nfrom ..menu import Selector, AbstractSubMenu, MenuSelectionType, Menu\n\nif TYPE_CHECKING:\n\t_: Any\n\n\n@dataclass\nclass LocaleConfiguration:\n\tkb_layout: str\n\tsys_lang: str\n\tsys_enc: str\n\n\t@staticmethod\n\tdef default() -> 'LocaleConfiguration':\n\t\treturn LocaleConfiguration('us', 'en_US', 'UTF-8')\n\n\tdef json(self) -> Dict[str, str]:\n\t\treturn {\n\t\t\t'kb_layout': self.kb_layout,\n\t\t\t'sys_lang': self.sys_lang,\n\t\t\t'sys_enc': self.sys_enc\n\t\t}\n\n\t@classmethod\n\tdef _load_config(cls, config: 'LocaleConfiguration', args: Dict[str, Any]) -> 'LocaleConfiguration':\n\t\tif 'sys_lang' in args:\n\t\t\tconfig.sys_lang = args['sys_lang']\n\t\tif 'sys_enc' in args:\n\t\t\tconfig.sys_enc = args['sys_enc']\n\t\tif 'kb_layout' in args:\n\t\t\tconfig.kb_layout = args['kb_layout']\n\n\t\treturn config\n\n\t@classmethod\n\tdef parse_arg(cls, args: Dict[str, Any]) -> 'LocaleConfiguration':\n\t\tdefault = cls.default()\n\n\t\tif 'locale_config' in args:\n\t\t\tdefault = cls._load_config(default, 
args['locale_config'])\n\t\telse:\n\t\t\tdefault = cls._load_config(default, args)\n\n\t\treturn default\n\n\nclass LocaleMenu(AbstractSubMenu):\n\tdef __init__(\n\t\tself,\n\t\tdata_store: Dict[str, Any],\n\t\tlocale_conf: LocaleConfiguration\n\t):\n\t\tself._preset = locale_conf\n\t\tsuper().__init__(data_store=data_store)\n\n\tdef setup_selection_menu_options(self):\n\t\tself._menu_options['keyboard-layout'] = \\\n\t\t\tSelector(\n\t\t\t\t_('Keyboard layout'),\n\t\t\t\tlambda preset: self._select_kb_layout(preset),\n\t\t\t\tdefault=self._preset.kb_layout,\n\t\t\t\tenabled=True)\n\t\tself._menu_options['sys-language'] = \\\n\t\t\tSelector(\n\t\t\t\t_('Locale language'),\n\t\t\t\tlambda preset: select_locale_lang(preset),\n\t\t\t\tdefault=self._preset.sys_lang,\n\t\t\t\tenabled=True)\n\t\tself._menu_options['sys-encoding'] = \\\n\t\t\tSelector(\n\t\t\t\t_('Locale encoding'),\n\t\t\t\tlambda preset: select_locale_enc(preset),\n\t\t\t\tdefault=self._preset.sys_enc,\n\t\t\t\tenabled=True)\n\n\tdef run(self, allow_reset: bool = True) -> LocaleConfiguration:\n\t\tsuper().run(allow_reset=allow_reset)\n\n\t\treturn LocaleConfiguration(\n\t\t\tself._data_store['keyboard-layout'],\n\t\t\tself._data_store['sys-language'],\n\t\t\tself._data_store['sys-encoding']\n\t\t)\n\n\tdef _select_kb_layout(self, preset: Optional[str]) -> Optional[str]:\n\t\tkb_lang = select_kb_layout(preset)\n\t\tif kb_lang:\n\t\t\tset_kb_layout(kb_lang)\n\t\treturn kb_lang\n\n\ndef select_locale_lang(preset: Optional[str] = None) -> Optional[str]:\n\tlocales = list_locales()\n\tlocale_lang = set([locale.split()[0] for locale in locales])\n\n\tchoice = Menu(\n\t\t_('Choose which locale language to use'),\n\t\tlist(locale_lang),\n\t\tsort=True,\n\t\tpreset_values=preset\n\t).run()\n\n\tmatch choice.type_:\n\t\tcase MenuSelectionType.Selection: return choice.single_value\n\t\tcase MenuSelectionType.Skip: return preset\n\n\treturn None\n\n\ndef select_locale_enc(preset: Optional[str] = None) -> Optional[str]:\n\tlocales = list_locales()\n\tlocale_enc = set([locale.split()[1] for locale in locales])\n\n\tchoice = Menu(\n\t\t_('Choose which locale encoding to use'),\n\t\tlist(locale_enc),\n\t\tsort=True,\n\t\tpreset_values=preset\n\t).run()\n\n\tmatch choice.type_:\n\t\tcase MenuSelectionType.Selection: return choice.single_value\n\t\tcase MenuSelectionType.Skip: return preset\n\n\treturn None\n\n\ndef select_kb_layout(preset: Optional[str] = None) -> Optional[str]:\n\t\"\"\"\n\tAsks the user to select a language\n\tUsually this is combined with :ref:`archinstall.list_keyboard_languages`.\n\n\t:return: The language/dictionary key of the selected language\n\t:rtype: str\n\t\"\"\"\n\tkb_lang = list_keyboard_languages()\n\t# sort alphabetically and then by length\n\tsorted_kb_lang = sorted(sorted(list(kb_lang)), key=len)\n\n\tchoice = Menu(\n\t\t_('Select keyboard layout'),\n\t\tsorted_kb_lang,\n\t\tpreset_values=preset,\n\t\tsort=False\n\t).run()\n\n\tmatch choice.type_:\n\t\tcase MenuSelectionType.Skip: return preset\n\t\tcase MenuSelectionType.Selection: return choice.single_value\n\n\treturn None\n", "path": "archinstall/lib/locale/locale_menu.py"}]}
| 1,720 | 312 |
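To see what the patch above changes in behaviour: the three selector defaults now come from the preset configuration handed to the menu, so values chosen earlier are shown again instead of snapping back to `us`/`en_US`/`UTF-8`. A hypothetical driver follows — the real call site inside archinstall is not part of this record:

```python
# Hypothetical usage of the patched LocaleMenu; data_store contents are omitted.
preset = LocaleConfiguration('de', 'de_DE', 'UTF-8')   # values picked on a previous visit
menu = LocaleMenu(data_store={}, locale_conf=preset)   # keyword renamed from locele_conf

# Before the patch the selectors always defaulted to 'us', 'en_US' and 'UTF-8';
# after it they default to preset.kb_layout, preset.sys_lang and preset.sys_enc.
```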
gh_patches_debug_30194
|
rasdani/github-patches
|
git_diff
|
beeware__toga-800
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Table.data = data is inserting data above old data
## Expected Behavior
I'm trying to replace the data in a Table object to refresh it with live data.
table = toga.Table(headers['1', '2'], data=[['a', 'b'], ['c', 'd']])
I'd expect the table to only reflect the new data passed in,
table.data = [['e', 'f']]
should be reflected as:
1 2
e f
## Current Behavior
Currently when I run table.data = [['e', 'f']], it inserts the new data at the top, but does not delete the old data. It makes it look like:
1 2
e f
a b
c d
## Steps to reproduce
See above
## Your Environment
<!--- Provide details on your current environment you found the bug in -->
* Python Version (list the specific version number)
Python 3.7.6
* Operating System and Version (select from the following and list the specific version number; if your OS is not listed, list that as well)
- [ ] macOS - version:
- [ ] Linux - distro: - version:
- [x] Windows - version:10
- [ ] Other - name: - version:
* Toga Version (list the specific version number or git hash)
'0.3.0.dev18'
* Toga Target (the type of app you are trying to generate)
- [ ] android
- [ ] cocoa
- [ ] django
- [ ] gtk
- [ ] iOS
- [ ] tvOS
- [ ] watchOS
- [ ] winforms
- [x] win32
- [ ] Other (please specify)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/table/table/app.py`
Content:
```
1 from random import choice
2
3 import toga
4 from toga.style import Pack
5 from toga.constants import COLUMN, ROW
6
7 headings = ['Title', 'Year', 'Rating', 'Genre']
8 bee_movies = [
9 ('The Secret Life of Bees', '2008', '7.3', 'Drama'),
10 ('Bee Movie', '2007', '6.1', 'Animation, Adventure, Comedy'),
11 ('Bees', '1998', '6.3', 'Horror'),
12 ('The Girl Who Swallowed Bees', '2007', '7.5', 'Short'),
13 ('Birds Do It, Bees Do It', '1974', '7.3', 'Documentary'),
14 ('Bees: A Life for the Queen', '1998', '8.0', 'TV Movie'),
15 ('Bees in Paradise', '1944', '5.4', 'Comedy, Musical'),
16 ('Keeper of the Bees', '1947', '6.3', 'Drama')
17 ]
18
19
20 class ExampleTableApp(toga.App):
21 # Table callback functions
22 def on_select_handler(self, widget, row, **kwargs):
23 self.label.text = 'You selected row: {}'.format(row.title) if row is not None else 'No row selected'
24
25 # Button callback functions
26 def insert_handler(self, widget, **kwargs):
27 self.table1.data.insert(0, *choice(bee_movies))
28
29 def delete_handler(self, widget, **kwargs):
30 if len(self.table1.data) > 0:
31 self.table1.data.remove(self.table1.data[0])
32 else:
33 print('Table is empty!')
34
35 def clear_handler(self, widget, **kwargs):
36 self.table1.data.clear()
37
38 def startup(self):
39 self.main_window = toga.MainWindow(title=self.name)
40
41 # Label to show which row is currently selected.
42 self.label = toga.Label('Ready.')
43
44 # Data to populate the table.
45 data = []
46 for x in range(5):
47 data.append(tuple(str(x) for x in range(5)))
48
49 self.table1 = toga.Table(
50 headings=headings,
51 data=bee_movies[:4],
52 style=Pack(flex=1),
53 on_select=self.on_select_handler
54 )
55
56 self.table2 = toga.Table(
57 headings=headings,
58 data=self.table1.data,
59 style=Pack(flex=1)
60 )
61
62 tablebox = toga.Box(children=[self.table1, self.table2], style=Pack(flex=1))
63
64 # Buttons
65 btn_style = Pack(flex=1)
66 btn_insert = toga.Button('Insert Row', on_press=self.insert_handler, style=btn_style)
67 btn_delete = toga.Button('Delete Row', on_press=self.delete_handler, style=btn_style)
68 btn_clear = toga.Button('Clear Table', on_press=self.clear_handler, style=btn_style)
69 btn_box = toga.Box(children=[btn_insert, btn_delete, btn_clear], style=Pack(direction=ROW))
70
71 # Most outer box
72 outer_box = toga.Box(
73 children=[btn_box, tablebox, self.label],
74 style=Pack(
75 flex=1,
76 direction=COLUMN,
77 padding=10,
78 )
79 )
80
81 # Add the content on the main window
82 self.main_window.content = outer_box
83
84 # Show the main window
85 self.main_window.show()
86
87
88 def main():
89 return ExampleTableApp('Table', 'org.beeware.widgets.table')
90
91
92 if __name__ == '__main__':
93 app = main()
94 app.main_loop()
95
```
Path: `src/winforms/toga_winforms/widgets/table.py`
Content:
```
1 from travertino.size import at_least
2
3 from toga_winforms.libs import WinForms
4
5 from .base import Widget
6
7
8 class Table(Widget):
9 def create(self):
10 self._container = self
11 self.native = WinForms.ListView()
12 self.native.View = WinForms.View.Details
13
14 dataColumn = []
15 for heading in self.interface.headings:
16 col = WinForms.ColumnHeader()
17 col.Text = heading
18 dataColumn.append(col)
19
20 self.native.FullRowSelect = True
21 self.native.Multiselect = self.interface.multiple_select
22 self.native.Columns.AddRange(dataColumn)
23
24 def change_source(self, source):
25 for index, row in enumerate(self.interface.data):
26 row._impl = WinForms.ListViewItem([
27 getattr(row, attr) for attr in self.interface._accessors
28 ])
29 self.native.Items.Insert(index, row._impl)
30
31 def update_data(self):
32 self.native.Items.Clear()
33 for index, row in enumerate(self.interface.data):
34 row._impl = WinForms.ListViewItem([
35 getattr(row, attr) for attr in self.interface._accessors
36 ])
37 self.native.Items.Insert(index, row._impl)
38
39 def insert(self, index, item):
40 item._impl = WinForms.ListViewItem([
41 getattr(item, attr) for attr in self.interface._accessors
42 ])
43 self.native.Items.Insert(index, item._impl)
44
45 def change(self, item):
46 self.interface.factory.not_implemented('Table.change()')
47
48 def remove(self, item):
49 self.update_data()
50
51 def clear(self):
52 self.native.Items.Clear()
53
54 def set_on_select(self, handler):
55 self.interface.factory.not_implemented('Table.set_on_select()')
56
57 def scroll_to_row(self, row):
58 self.native.EnsureVisible(row)
59 self.interface.factory.not_implemented('Table.scroll_to_row()')
60
61 def rehint(self):
62 self.interface.intrinsic.width = at_least(self.interface.MIN_WIDTH)
63 self.interface.intrinsic.height = at_least(self.interface.MIN_HEIGHT)
64
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/table/table/app.py b/examples/table/table/app.py
--- a/examples/table/table/app.py
+++ b/examples/table/table/app.py
@@ -35,6 +35,9 @@
def clear_handler(self, widget, **kwargs):
self.table1.data.clear()
+ def reset_handler(self, widget, **kwargs):
+ self.table1.data = bee_movies[3:]
+
def startup(self):
self.main_window = toga.MainWindow(title=self.name)
@@ -66,7 +69,11 @@
btn_insert = toga.Button('Insert Row', on_press=self.insert_handler, style=btn_style)
btn_delete = toga.Button('Delete Row', on_press=self.delete_handler, style=btn_style)
btn_clear = toga.Button('Clear Table', on_press=self.clear_handler, style=btn_style)
- btn_box = toga.Box(children=[btn_insert, btn_delete, btn_clear], style=Pack(direction=ROW))
+ btn_reset = toga.Button('Reset Table', on_press=self.reset_handler, style=btn_style)
+ btn_box = toga.Box(
+ children=[btn_insert, btn_delete, btn_clear, btn_reset],
+ style=Pack(direction=ROW)
+ )
# Most outer box
outer_box = toga.Box(
diff --git a/src/winforms/toga_winforms/widgets/table.py b/src/winforms/toga_winforms/widgets/table.py
--- a/src/winforms/toga_winforms/widgets/table.py
+++ b/src/winforms/toga_winforms/widgets/table.py
@@ -22,6 +22,7 @@
self.native.Columns.AddRange(dataColumn)
def change_source(self, source):
+ self.native.Items.Clear()
for index, row in enumerate(self.interface.data):
row._impl = WinForms.ListViewItem([
getattr(row, attr) for attr in self.interface._accessors
|
{"golden_diff": "diff --git a/examples/table/table/app.py b/examples/table/table/app.py\n--- a/examples/table/table/app.py\n+++ b/examples/table/table/app.py\n@@ -35,6 +35,9 @@\n def clear_handler(self, widget, **kwargs):\n self.table1.data.clear()\n \n+ def reset_handler(self, widget, **kwargs):\n+ self.table1.data = bee_movies[3:]\n+\n def startup(self):\n self.main_window = toga.MainWindow(title=self.name)\n \n@@ -66,7 +69,11 @@\n btn_insert = toga.Button('Insert Row', on_press=self.insert_handler, style=btn_style)\n btn_delete = toga.Button('Delete Row', on_press=self.delete_handler, style=btn_style)\n btn_clear = toga.Button('Clear Table', on_press=self.clear_handler, style=btn_style)\n- btn_box = toga.Box(children=[btn_insert, btn_delete, btn_clear], style=Pack(direction=ROW))\n+ btn_reset = toga.Button('Reset Table', on_press=self.reset_handler, style=btn_style)\n+ btn_box = toga.Box(\n+ children=[btn_insert, btn_delete, btn_clear, btn_reset],\n+ style=Pack(direction=ROW)\n+ )\n \n # Most outer box\n outer_box = toga.Box(\ndiff --git a/src/winforms/toga_winforms/widgets/table.py b/src/winforms/toga_winforms/widgets/table.py\n--- a/src/winforms/toga_winforms/widgets/table.py\n+++ b/src/winforms/toga_winforms/widgets/table.py\n@@ -22,6 +22,7 @@\n self.native.Columns.AddRange(dataColumn)\n \n def change_source(self, source):\n+ self.native.Items.Clear()\n for index, row in enumerate(self.interface.data):\n row._impl = WinForms.ListViewItem([\n getattr(row, attr) for attr in self.interface._accessors\n", "issue": "Table.data = data is inserting data above old data\n## Expected Behavior\r\nI'm trying to replace the data in a Table object to refresh it with live data. \r\ntable = toga.Table(headers['1', '2'], data=[['a', 'b'], ['c', 'd']])\r\n\r\nI'd expect the table to only reflect the new data passed in,\r\ntable.data = [['e', 'f']]\r\nshould be reflected as:\r\n 1 2\r\n e f \r\n\r\n## Current Behavior\r\nCurrently when I run table.data = [['e', 'f']], it inserts the new data at the top, but does not delete the old data. 
It makes it look like:\r\n 1 2\r\n e f\r\n a b\r\n c d\r\n\r\n## Steps to reproduce\r\nSee above\r\n\r\n## Your Environment\r\n<!--- Provide details on your current environment you found the bug in -->\r\n\r\n* Python Version (list the specific version number)\r\nPython 3.7.6\r\n* Operating System and Version (select from the following and list the specific version number; if your OS is not listed, list that as well)\r\n\r\n - [ ] macOS - version:\r\n - [ ] Linux - distro: - version:\r\n - [x] Windows - version:10\r\n - [ ] Other - name: - version:\r\n\r\n* Toga Version (list the specific version number or git hash)\r\n'0.3.0.dev18'\r\n\r\n* Toga Target (the type of app you are trying to generate)\r\n\r\n - [ ] android\r\n - [ ] cocoa\r\n - [ ] django\r\n - [ ] gtk\r\n - [ ] iOS\r\n - [ ] tvOS\r\n - [ ] watchOS\r\n - [ ] winforms\r\n - [x] win32\r\n - [ ] Other (please specify)\r\n\n", "before_files": [{"content": "from random import choice\n\nimport toga\nfrom toga.style import Pack\nfrom toga.constants import COLUMN, ROW\n\nheadings = ['Title', 'Year', 'Rating', 'Genre']\nbee_movies = [\n ('The Secret Life of Bees', '2008', '7.3', 'Drama'),\n ('Bee Movie', '2007', '6.1', 'Animation, Adventure, Comedy'),\n ('Bees', '1998', '6.3', 'Horror'),\n ('The Girl Who Swallowed Bees', '2007', '7.5', 'Short'),\n ('Birds Do It, Bees Do It', '1974', '7.3', 'Documentary'),\n ('Bees: A Life for the Queen', '1998', '8.0', 'TV Movie'),\n ('Bees in Paradise', '1944', '5.4', 'Comedy, Musical'),\n ('Keeper of the Bees', '1947', '6.3', 'Drama')\n]\n\n\nclass ExampleTableApp(toga.App):\n # Table callback functions\n def on_select_handler(self, widget, row, **kwargs):\n self.label.text = 'You selected row: {}'.format(row.title) if row is not None else 'No row selected'\n\n # Button callback functions\n def insert_handler(self, widget, **kwargs):\n self.table1.data.insert(0, *choice(bee_movies))\n\n def delete_handler(self, widget, **kwargs):\n if len(self.table1.data) > 0:\n self.table1.data.remove(self.table1.data[0])\n else:\n print('Table is empty!')\n\n def clear_handler(self, widget, **kwargs):\n self.table1.data.clear()\n\n def startup(self):\n self.main_window = toga.MainWindow(title=self.name)\n\n # Label to show which row is currently selected.\n self.label = toga.Label('Ready.')\n\n # Data to populate the table.\n data = []\n for x in range(5):\n data.append(tuple(str(x) for x in range(5)))\n\n self.table1 = toga.Table(\n headings=headings,\n data=bee_movies[:4],\n style=Pack(flex=1),\n on_select=self.on_select_handler\n )\n\n self.table2 = toga.Table(\n headings=headings,\n data=self.table1.data,\n style=Pack(flex=1)\n )\n\n tablebox = toga.Box(children=[self.table1, self.table2], style=Pack(flex=1))\n\n # Buttons\n btn_style = Pack(flex=1)\n btn_insert = toga.Button('Insert Row', on_press=self.insert_handler, style=btn_style)\n btn_delete = toga.Button('Delete Row', on_press=self.delete_handler, style=btn_style)\n btn_clear = toga.Button('Clear Table', on_press=self.clear_handler, style=btn_style)\n btn_box = toga.Box(children=[btn_insert, btn_delete, btn_clear], style=Pack(direction=ROW))\n\n # Most outer box\n outer_box = toga.Box(\n children=[btn_box, tablebox, self.label],\n style=Pack(\n flex=1,\n direction=COLUMN,\n padding=10,\n )\n )\n\n # Add the content on the main window\n self.main_window.content = outer_box\n\n # Show the main window\n self.main_window.show()\n\n\ndef main():\n return ExampleTableApp('Table', 'org.beeware.widgets.table')\n\n\nif __name__ == '__main__':\n app = main()\n 
app.main_loop()\n", "path": "examples/table/table/app.py"}, {"content": "from travertino.size import at_least\n\nfrom toga_winforms.libs import WinForms\n\nfrom .base import Widget\n\n\nclass Table(Widget):\n def create(self):\n self._container = self\n self.native = WinForms.ListView()\n self.native.View = WinForms.View.Details\n\n dataColumn = []\n for heading in self.interface.headings:\n col = WinForms.ColumnHeader()\n col.Text = heading\n dataColumn.append(col)\n\n self.native.FullRowSelect = True\n self.native.Multiselect = self.interface.multiple_select\n self.native.Columns.AddRange(dataColumn)\n\n def change_source(self, source):\n for index, row in enumerate(self.interface.data):\n row._impl = WinForms.ListViewItem([\n getattr(row, attr) for attr in self.interface._accessors\n ])\n self.native.Items.Insert(index, row._impl)\n\n def update_data(self):\n self.native.Items.Clear()\n for index, row in enumerate(self.interface.data):\n row._impl = WinForms.ListViewItem([\n getattr(row, attr) for attr in self.interface._accessors\n ])\n self.native.Items.Insert(index, row._impl)\n\n def insert(self, index, item):\n item._impl = WinForms.ListViewItem([\n getattr(item, attr) for attr in self.interface._accessors\n ])\n self.native.Items.Insert(index, item._impl)\n\n def change(self, item):\n self.interface.factory.not_implemented('Table.change()')\n\n def remove(self, item):\n self.update_data()\n\n def clear(self):\n self.native.Items.Clear()\n\n def set_on_select(self, handler):\n self.interface.factory.not_implemented('Table.set_on_select()')\n\n def scroll_to_row(self, row):\n self.native.EnsureVisible(row)\n self.interface.factory.not_implemented('Table.scroll_to_row()')\n\n def rehint(self):\n self.interface.intrinsic.width = at_least(self.interface.MIN_WIDTH)\n self.interface.intrinsic.height = at_least(self.interface.MIN_HEIGHT)\n", "path": "src/winforms/toga_winforms/widgets/table.py"}], "after_files": [{"content": "from random import choice\n\nimport toga\nfrom toga.style import Pack\nfrom toga.constants import COLUMN, ROW\n\nheadings = ['Title', 'Year', 'Rating', 'Genre']\nbee_movies = [\n ('The Secret Life of Bees', '2008', '7.3', 'Drama'),\n ('Bee Movie', '2007', '6.1', 'Animation, Adventure, Comedy'),\n ('Bees', '1998', '6.3', 'Horror'),\n ('The Girl Who Swallowed Bees', '2007', '7.5', 'Short'),\n ('Birds Do It, Bees Do It', '1974', '7.3', 'Documentary'),\n ('Bees: A Life for the Queen', '1998', '8.0', 'TV Movie'),\n ('Bees in Paradise', '1944', '5.4', 'Comedy, Musical'),\n ('Keeper of the Bees', '1947', '6.3', 'Drama')\n]\n\n\nclass ExampleTableApp(toga.App):\n # Table callback functions\n def on_select_handler(self, widget, row, **kwargs):\n self.label.text = 'You selected row: {}'.format(row.title) if row is not None else 'No row selected'\n\n # Button callback functions\n def insert_handler(self, widget, **kwargs):\n self.table1.data.insert(0, *choice(bee_movies))\n\n def delete_handler(self, widget, **kwargs):\n if len(self.table1.data) > 0:\n self.table1.data.remove(self.table1.data[0])\n else:\n print('Table is empty!')\n\n def clear_handler(self, widget, **kwargs):\n self.table1.data.clear()\n\n def reset_handler(self, widget, **kwargs):\n self.table1.data = bee_movies[3:]\n\n def startup(self):\n self.main_window = toga.MainWindow(title=self.name)\n\n # Label to show which row is currently selected.\n self.label = toga.Label('Ready.')\n\n # Data to populate the table.\n data = []\n for x in range(5):\n data.append(tuple(str(x) for x in range(5)))\n\n self.table1 = 
toga.Table(\n headings=headings,\n data=bee_movies[:4],\n style=Pack(flex=1),\n on_select=self.on_select_handler\n )\n\n self.table2 = toga.Table(\n headings=headings,\n data=self.table1.data,\n style=Pack(flex=1)\n )\n\n tablebox = toga.Box(children=[self.table1, self.table2], style=Pack(flex=1))\n\n # Buttons\n btn_style = Pack(flex=1)\n btn_insert = toga.Button('Insert Row', on_press=self.insert_handler, style=btn_style)\n btn_delete = toga.Button('Delete Row', on_press=self.delete_handler, style=btn_style)\n btn_clear = toga.Button('Clear Table', on_press=self.clear_handler, style=btn_style)\n btn_reset = toga.Button('Reset Table', on_press=self.reset_handler, style=btn_style)\n btn_box = toga.Box(\n children=[btn_insert, btn_delete, btn_clear, btn_reset],\n style=Pack(direction=ROW)\n )\n\n # Most outer box\n outer_box = toga.Box(\n children=[btn_box, tablebox, self.label],\n style=Pack(\n flex=1,\n direction=COLUMN,\n padding=10,\n )\n )\n\n # Add the content on the main window\n self.main_window.content = outer_box\n\n # Show the main window\n self.main_window.show()\n\n\ndef main():\n return ExampleTableApp('Table', 'org.beeware.widgets.table')\n\n\nif __name__ == '__main__':\n app = main()\n app.main_loop()\n", "path": "examples/table/table/app.py"}, {"content": "from travertino.size import at_least\n\nfrom toga_winforms.libs import WinForms\n\nfrom .base import Widget\n\n\nclass Table(Widget):\n def create(self):\n self._container = self\n self.native = WinForms.ListView()\n self.native.View = WinForms.View.Details\n\n dataColumn = []\n for heading in self.interface.headings:\n col = WinForms.ColumnHeader()\n col.Text = heading\n dataColumn.append(col)\n\n self.native.FullRowSelect = True\n self.native.Multiselect = self.interface.multiple_select\n self.native.Columns.AddRange(dataColumn)\n\n def change_source(self, source):\n self.native.Items.Clear()\n for index, row in enumerate(self.interface.data):\n row._impl = WinForms.ListViewItem([\n getattr(row, attr) for attr in self.interface._accessors\n ])\n self.native.Items.Insert(index, row._impl)\n\n def update_data(self):\n self.native.Items.Clear()\n for index, row in enumerate(self.interface.data):\n row._impl = WinForms.ListViewItem([\n getattr(row, attr) for attr in self.interface._accessors\n ])\n self.native.Items.Insert(index, row._impl)\n\n def insert(self, index, item):\n item._impl = WinForms.ListViewItem([\n getattr(item, attr) for attr in self.interface._accessors\n ])\n self.native.Items.Insert(index, item._impl)\n\n def change(self, item):\n self.interface.factory.not_implemented('Table.change()')\n\n def remove(self, item):\n self.update_data()\n\n def clear(self):\n self.native.Items.Clear()\n\n def set_on_select(self, handler):\n self.interface.factory.not_implemented('Table.set_on_select()')\n\n def scroll_to_row(self, row):\n self.native.EnsureVisible(row)\n self.interface.factory.not_implemented('Table.scroll_to_row()')\n\n def rehint(self):\n self.interface.intrinsic.width = at_least(self.interface.MIN_WIDTH)\n self.interface.intrinsic.height = at_least(self.interface.MIN_HEIGHT)\n", "path": "src/winforms/toga_winforms/widgets/table.py"}]}
| 2,197 | 411 |
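The before/after files in the record above differ by a single `self.native.Items.Clear()` call at the top of `change_source`: without it, switching the table's data source leaves the old rows in the native list. Below is a minimal, framework-free sketch of that clear-then-rebuild pattern; `FakeNativeList` and `TableBackend` are hypothetical stand-ins for illustration only, not Toga or WinForms classes.

```python
# Minimal stand-in for the pattern shown in the record above: when the data
# source changes, clear the previously rendered native items before
# re-inserting rows, otherwise stale rows from the old source remain visible.

class FakeNativeList:
    """Hypothetical placeholder for a native ListView-like widget."""

    def __init__(self):
        self.items = []

    def clear(self):
        self.items.clear()

    def insert(self, index, item):
        self.items.insert(index, item)


class TableBackend:
    def __init__(self):
        self.native = FakeNativeList()

    def change_source(self, source):
        # Clearing first is the crux of the fix: without it, switching the
        # data source appends the new rows after the old ones.
        self.native.clear()
        for index, row in enumerate(source):
            self.native.insert(index, row)


if __name__ == "__main__":
    table = TableBackend()
    table.change_source(["a", "b"])
    table.change_source(["c"])
    assert table.native.items == ["c"]  # rows from the old source are gone
    print(table.native.items)
```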
gh_patches_debug_93
|
rasdani/github-patches
|
git_diff
|
statsmodels__statsmodels-1374
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
move graphics.tsa to tsa.graphics
Makes more sense to me to keep the tsa stuff under the tsa namespace. Might need to deprecate functions that aren't new.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `statsmodels/tsa/api.py`
Content:
```
1 from .ar_model import AR
2 from .arima_model import ARMA, ARIMA
3 import vector_ar as var
4 from .vector_ar.var_model import VAR
5 from .vector_ar.svar_model import SVAR
6 from .vector_ar.dynamic import DynamicVAR
7 import filters
8 import tsatools
9 from .tsatools import (add_trend, detrend, lagmat, lagmat2ds, add_lag)
10 import interp
11 import stattools
12 from .stattools import *
13 from .base import datetools
14
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/statsmodels/tsa/api.py b/statsmodels/tsa/api.py
--- a/statsmodels/tsa/api.py
+++ b/statsmodels/tsa/api.py
@@ -11,3 +11,4 @@
import stattools
from .stattools import *
from .base import datetools
+from ..graphics import tsaplots as graphics
|
{"golden_diff": "diff --git a/statsmodels/tsa/api.py b/statsmodels/tsa/api.py\n--- a/statsmodels/tsa/api.py\n+++ b/statsmodels/tsa/api.py\n@@ -11,3 +11,4 @@\n import stattools\n from .stattools import *\n from .base import datetools\n+from ..graphics import tsaplots as graphics\n", "issue": "move graphics.tsa to tsa.graphics\nMakes more sense to me to keep the tsa stuff under the tsa namespace. Might need to deprecate functions that aren't new. \n\n", "before_files": [{"content": "from .ar_model import AR\nfrom .arima_model import ARMA, ARIMA\nimport vector_ar as var\nfrom .vector_ar.var_model import VAR\nfrom .vector_ar.svar_model import SVAR\nfrom .vector_ar.dynamic import DynamicVAR\nimport filters\nimport tsatools\nfrom .tsatools import (add_trend, detrend, lagmat, lagmat2ds, add_lag)\nimport interp\nimport stattools\nfrom .stattools import *\nfrom .base import datetools\n", "path": "statsmodels/tsa/api.py"}], "after_files": [{"content": "from .ar_model import AR\nfrom .arima_model import ARMA, ARIMA\nimport vector_ar as var\nfrom .vector_ar.var_model import VAR\nfrom .vector_ar.svar_model import SVAR\nfrom .vector_ar.dynamic import DynamicVAR\nimport filters\nimport tsatools\nfrom .tsatools import (add_trend, detrend, lagmat, lagmat2ds, add_lag)\nimport interp\nimport stattools\nfrom .stattools import *\nfrom .base import datetools\nfrom ..graphics import tsaplots as graphics\n", "path": "statsmodels/tsa/api.py"}]}
| 427 | 77 |
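The golden diff above is a one-line re-export: `statsmodels/tsa/api.py` aliases the sibling `graphics.tsaplots` module so it becomes reachable as `statsmodels.tsa.api.graphics`. The sketch below reproduces that relative-import aliasing with a throwaway package built in a temporary directory; the package name `demo_pkg` and the `plot_acf` stub are assumptions for illustration, not statsmodels code.

```python
# Self-contained illustration of the re-export pattern from the golden diff
# above: a subpackage's api module aliases a sibling subpackage so that
# "pkg.tsa.api.graphics" resolves to "pkg.graphics.tsaplots".
import sys
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
files = {
    "demo_pkg/__init__.py": "",
    "demo_pkg/graphics/__init__.py": "",
    "demo_pkg/graphics/tsaplots.py": "def plot_acf():\n    return 'acf plot'\n",
    "demo_pkg/tsa/__init__.py": "",
    # The one-line alias mirrors: from ..graphics import tsaplots as graphics
    "demo_pkg/tsa/api.py": "from ..graphics import tsaplots as graphics\n",
}
for rel, body in files.items():
    path = root / rel
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(body)

sys.path.insert(0, str(root))
from demo_pkg.tsa import api  # noqa: E402

print(api.graphics.plot_acf())  # -> "acf plot", served from graphics.tsaplots
```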
gh_patches_debug_44246
|
rasdani/github-patches
|
git_diff
|
huggingface__dataset-viewer-2691
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sort the rows along a column
See https://github.com/huggingface/moon-landing/issues/9201 (internal)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `services/search/src/search/routes/filter.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 # Copyright 2023 The HuggingFace Authors.
3
4 import logging
5 import random
6 import re
7 from http import HTTPStatus
8 from typing import Optional
9
10 import anyio
11 import duckdb
12 import pyarrow as pa
13 from datasets import Features
14 from libapi.authentication import auth_check
15 from libapi.duckdb import (
16 get_cache_entry_from_duckdb_index_job,
17 get_index_file_location_and_download_if_missing,
18 )
19 from libapi.exceptions import ApiError, InvalidParameterError, UnexpectedApiError
20 from libapi.request import (
21 get_request_parameter,
22 get_request_parameter_length,
23 get_request_parameter_offset,
24 )
25 from libapi.response import create_response
26 from libapi.utils import (
27 Endpoint,
28 get_json_api_error_response,
29 get_json_error_response,
30 get_json_ok_response,
31 )
32 from libcommon.duckdb_utils import duckdb_index_is_partial
33 from libcommon.prometheus import StepProfiler
34 from libcommon.storage import StrPath, clean_dir
35 from libcommon.storage_client import StorageClient
36 from libcommon.viewer_utils.features import get_supported_unsupported_columns
37 from starlette.requests import Request
38 from starlette.responses import Response
39
40 from search.duckdb_connection import duckdb_connect
41
42 FILTER_QUERY = """\
43 SELECT {columns}
44 FROM data
45 WHERE {where}
46 LIMIT {limit}
47 OFFSET {offset}"""
48
49 FILTER_COUNT_QUERY = """\
50 SELECT COUNT(*)
51 FROM data
52 WHERE {where}"""
53
54 SQL_INVALID_SYMBOLS = "|".join([";", "--", r"/\*", r"\*/"])
55 SQL_INVALID_SYMBOLS_PATTERN = re.compile(rf"(?:{SQL_INVALID_SYMBOLS})", flags=re.IGNORECASE)
56
57 logger = logging.getLogger(__name__)
58
59
60 def create_filter_endpoint(
61 duckdb_index_file_directory: StrPath,
62 target_revision: str,
63 cached_assets_storage_client: StorageClient,
64 blocked_datasets: list[str],
65 hf_endpoint: str,
66 hf_token: Optional[str] = None,
67 hf_jwt_public_keys: Optional[list[str]] = None,
68 hf_jwt_algorithm: Optional[str] = None,
69 external_auth_url: Optional[str] = None,
70 hf_timeout_seconds: Optional[float] = None,
71 max_age_long: int = 0,
72 max_age_short: int = 0,
73 storage_clients: Optional[list[StorageClient]] = None,
74 extensions_directory: Optional[str] = None,
75 clean_cache_proba: float = 0.0,
76 expiredTimeIntervalSeconds: int = 60,
77 ) -> Endpoint:
78 async def filter_endpoint(request: Request) -> Response:
79 revision: Optional[str] = None
80 with StepProfiler(method="filter_endpoint", step="all"):
81 try:
82 with StepProfiler(method="filter_endpoint", step="validate parameters"):
83 dataset = get_request_parameter(request, "dataset", required=True)
84 config = get_request_parameter(request, "config", required=True)
85 split = get_request_parameter(request, "split", required=True)
86 where = get_request_parameter(request, "where", required=True)
87 validate_where_parameter(where)
88 offset = get_request_parameter_offset(request)
89 length = get_request_parameter_length(request)
90 logger.info(f"/filter, {dataset=}, {config=}, {split=}, {where=}, {offset=}, {length=}")
91 with StepProfiler(method="filter_endpoint", step="check authentication"):
92 # If auth_check fails, it will raise an exception that will be caught below
93 await auth_check(
94 dataset=dataset,
95 external_auth_url=external_auth_url,
96 request=request,
97 hf_jwt_public_keys=hf_jwt_public_keys,
98 hf_jwt_algorithm=hf_jwt_algorithm,
99 hf_timeout_seconds=hf_timeout_seconds,
100 )
101 with StepProfiler(method="filter_endpoint", step="validate indexing was done"):
102 # no cache data is needed to download the index file
103 # but will help to validate if indexing was done
104 duckdb_index_cache_entry = get_cache_entry_from_duckdb_index_job(
105 dataset=dataset,
106 config=config,
107 split=split,
108 hf_endpoint=hf_endpoint,
109 hf_token=hf_token,
110 hf_timeout_seconds=hf_timeout_seconds,
111 blocked_datasets=blocked_datasets,
112 storage_clients=storage_clients,
113 )
114 revision = duckdb_index_cache_entry["dataset_git_revision"]
115 if duckdb_index_cache_entry["http_status"] != HTTPStatus.OK:
116 return get_json_error_response(
117 content=duckdb_index_cache_entry["content"],
118 status_code=duckdb_index_cache_entry["http_status"],
119 max_age=max_age_short,
120 error_code=duckdb_index_cache_entry["error_code"],
121 revision=revision,
122 )
123
124 # check if the index is on the full dataset or if it's partial
125 url = duckdb_index_cache_entry["content"]["url"]
126 filename = duckdb_index_cache_entry["content"]["filename"]
127 index_size = duckdb_index_cache_entry["content"]["size"]
128 partial = duckdb_index_is_partial(url)
129
130 with StepProfiler(method="filter_endpoint", step="download index file if missing"):
131 index_file_location = await get_index_file_location_and_download_if_missing(
132 duckdb_index_file_directory=duckdb_index_file_directory,
133 dataset=dataset,
134 config=config,
135 split=split,
136 revision=revision,
137 filename=filename,
138 size_bytes=index_size,
139 url=url,
140 target_revision=target_revision,
141 hf_token=hf_token,
142 )
143 with StepProfiler(method="filter_endpoint", step="get features"):
144 # in split-duckdb-index we always add the ROW_IDX_COLUMN column
145 # see https://github.com/huggingface/dataset-viewer/blob/main/services/worker/src/worker/job_runners/split/duckdb_index.py#L305
146 features = Features.from_dict(duckdb_index_cache_entry["content"]["features"])
147 with StepProfiler(method="filter_endpoint", step="get supported and unsupported columns"):
148 supported_columns, unsupported_columns = get_supported_unsupported_columns(
149 features,
150 )
151 with StepProfiler(method="filter_endpoint", step="execute filter query"):
152 num_rows_total, pa_table = await anyio.to_thread.run_sync(
153 execute_filter_query,
154 index_file_location,
155 supported_columns,
156 where,
157 length,
158 offset,
159 extensions_directory,
160 )
161 # no need to do it every time
162 # TODO: Will be moved to another process in parallel
163 if random.random() < clean_cache_proba: # nosec
164 with StepProfiler(method="filter_endpoint", step="clean old indexes"):
165 clean_dir(
166 duckdb_index_file_directory,
167 expiredTimeIntervalSeconds,
168 )
169 with StepProfiler(method="filter_endpoint", step="create response"):
170 response = await create_response(
171 dataset=dataset,
172 revision=revision,
173 config=config,
174 split=split,
175 storage_client=cached_assets_storage_client,
176 pa_table=pa_table,
177 offset=offset,
178 features=features or Features.from_arrow_schema(pa_table.schema),
179 unsupported_columns=unsupported_columns,
180 num_rows_total=num_rows_total,
181 partial=partial,
182 use_row_idx_column=True,
183 )
184 with StepProfiler(method="filter_endpoint", step="generate the OK response"):
185 return get_json_ok_response(content=response, max_age=max_age_long, revision=revision)
186 except Exception as e:
187 error = e if isinstance(e, ApiError) else UnexpectedApiError("Unexpected error.", e)
188 with StepProfiler(method="filter_endpoint", step="generate API error response"):
189 return get_json_api_error_response(error=error, max_age=max_age_short, revision=revision)
190
191 return filter_endpoint
192
193
194 def execute_filter_query(
195 index_file_location: str,
196 columns: list[str],
197 where: str,
198 limit: int,
199 offset: int,
200 extensions_directory: Optional[str] = None,
201 ) -> tuple[int, pa.Table]:
202 with duckdb_connect(extensions_directory=extensions_directory, database=index_file_location) as con:
203 filter_query = FILTER_QUERY.format(
204 columns=",".join([f'"{column}"' for column in columns]),
205 where=where,
206 limit=limit,
207 offset=offset,
208 )
209 filter_count_query = FILTER_COUNT_QUERY.format(where=where)
210 try:
211 pa_table = con.sql(filter_query).arrow()
212 num_rows_total = con.sql(filter_count_query).fetchall()[0][0]
213 except duckdb.Error:
214 raise InvalidParameterError(message="Parameter 'where' is invalid")
215 return num_rows_total, pa_table
216
217
218 def validate_where_parameter(where: str) -> None:
219 if SQL_INVALID_SYMBOLS_PATTERN.search(where):
220 raise InvalidParameterError(message="Parameter 'where' contains invalid symbols")
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/services/search/src/search/routes/filter.py b/services/search/src/search/routes/filter.py
--- a/services/search/src/search/routes/filter.py
+++ b/services/search/src/search/routes/filter.py
@@ -42,14 +42,15 @@
FILTER_QUERY = """\
SELECT {columns}
FROM data
- WHERE {where}
+ {where}
+ {orderby}
LIMIT {limit}
OFFSET {offset}"""
FILTER_COUNT_QUERY = """\
SELECT COUNT(*)
FROM data
- WHERE {where}"""
+ {where}"""
SQL_INVALID_SYMBOLS = "|".join([";", "--", r"/\*", r"\*/"])
SQL_INVALID_SYMBOLS_PATTERN = re.compile(rf"(?:{SQL_INVALID_SYMBOLS})", flags=re.IGNORECASE)
@@ -83,11 +84,15 @@
dataset = get_request_parameter(request, "dataset", required=True)
config = get_request_parameter(request, "config", required=True)
split = get_request_parameter(request, "split", required=True)
- where = get_request_parameter(request, "where", required=True)
- validate_where_parameter(where)
+ where = get_request_parameter(request, "where")
+ validate_query_parameter(where, "where")
+ orderby = get_request_parameter(request, "orderby")
+ validate_query_parameter(orderby, "orderby")
offset = get_request_parameter_offset(request)
length = get_request_parameter_length(request)
- logger.info(f"/filter, {dataset=}, {config=}, {split=}, {where=}, {offset=}, {length=}")
+ logger.info(
+ f"/filter, {dataset=}, {config=}, {split=}, {where=}, {orderby=}, {offset=}, {length=}"
+ )
with StepProfiler(method="filter_endpoint", step="check authentication"):
# If auth_check fails, it will raise an exception that will be caught below
await auth_check(
@@ -154,6 +159,7 @@
index_file_location,
supported_columns,
where,
+ orderby,
length,
offset,
extensions_directory,
@@ -195,6 +201,7 @@
index_file_location: str,
columns: list[str],
where: str,
+ orderby: str,
limit: int,
offset: int,
extensions_directory: Optional[str] = None,
@@ -202,19 +209,20 @@
with duckdb_connect(extensions_directory=extensions_directory, database=index_file_location) as con:
filter_query = FILTER_QUERY.format(
columns=",".join([f'"{column}"' for column in columns]),
- where=where,
+ where=f"WHERE {where}" if where else "",
+ orderby=f"ORDER BY {orderby}" if orderby else "",
limit=limit,
offset=offset,
)
- filter_count_query = FILTER_COUNT_QUERY.format(where=where)
+ filter_count_query = FILTER_COUNT_QUERY.format(where=f"WHERE {where}" if where else "")
try:
pa_table = con.sql(filter_query).arrow()
num_rows_total = con.sql(filter_count_query).fetchall()[0][0]
- except duckdb.Error:
- raise InvalidParameterError(message="Parameter 'where' is invalid")
+ except duckdb.Error as err:
+ raise InvalidParameterError(message="A query parameter is invalid") from err
return num_rows_total, pa_table
-def validate_where_parameter(where: str) -> None:
- if SQL_INVALID_SYMBOLS_PATTERN.search(where):
- raise InvalidParameterError(message="Parameter 'where' contains invalid symbols")
+def validate_query_parameter(parameter_value: str, parameter_name: str) -> None:
+ if SQL_INVALID_SYMBOLS_PATTERN.search(parameter_value):
+ raise InvalidParameterError(message=f"Parameter '{parameter_name}' contains invalid symbols")
|
{"golden_diff": "diff --git a/services/search/src/search/routes/filter.py b/services/search/src/search/routes/filter.py\n--- a/services/search/src/search/routes/filter.py\n+++ b/services/search/src/search/routes/filter.py\n@@ -42,14 +42,15 @@\n FILTER_QUERY = \"\"\"\\\n SELECT {columns}\n FROM data\n- WHERE {where}\n+ {where}\n+ {orderby}\n LIMIT {limit}\n OFFSET {offset}\"\"\"\n \n FILTER_COUNT_QUERY = \"\"\"\\\n SELECT COUNT(*)\n FROM data\n- WHERE {where}\"\"\"\n+ {where}\"\"\"\n \n SQL_INVALID_SYMBOLS = \"|\".join([\";\", \"--\", r\"/\\*\", r\"\\*/\"])\n SQL_INVALID_SYMBOLS_PATTERN = re.compile(rf\"(?:{SQL_INVALID_SYMBOLS})\", flags=re.IGNORECASE)\n@@ -83,11 +84,15 @@\n dataset = get_request_parameter(request, \"dataset\", required=True)\n config = get_request_parameter(request, \"config\", required=True)\n split = get_request_parameter(request, \"split\", required=True)\n- where = get_request_parameter(request, \"where\", required=True)\n- validate_where_parameter(where)\n+ where = get_request_parameter(request, \"where\")\n+ validate_query_parameter(where, \"where\")\n+ orderby = get_request_parameter(request, \"orderby\")\n+ validate_query_parameter(orderby, \"orderby\")\n offset = get_request_parameter_offset(request)\n length = get_request_parameter_length(request)\n- logger.info(f\"/filter, {dataset=}, {config=}, {split=}, {where=}, {offset=}, {length=}\")\n+ logger.info(\n+ f\"/filter, {dataset=}, {config=}, {split=}, {where=}, {orderby=}, {offset=}, {length=}\"\n+ )\n with StepProfiler(method=\"filter_endpoint\", step=\"check authentication\"):\n # If auth_check fails, it will raise an exception that will be caught below\n await auth_check(\n@@ -154,6 +159,7 @@\n index_file_location,\n supported_columns,\n where,\n+ orderby,\n length,\n offset,\n extensions_directory,\n@@ -195,6 +201,7 @@\n index_file_location: str,\n columns: list[str],\n where: str,\n+ orderby: str,\n limit: int,\n offset: int,\n extensions_directory: Optional[str] = None,\n@@ -202,19 +209,20 @@\n with duckdb_connect(extensions_directory=extensions_directory, database=index_file_location) as con:\n filter_query = FILTER_QUERY.format(\n columns=\",\".join([f'\"{column}\"' for column in columns]),\n- where=where,\n+ where=f\"WHERE {where}\" if where else \"\",\n+ orderby=f\"ORDER BY {orderby}\" if orderby else \"\",\n limit=limit,\n offset=offset,\n )\n- filter_count_query = FILTER_COUNT_QUERY.format(where=where)\n+ filter_count_query = FILTER_COUNT_QUERY.format(where=f\"WHERE {where}\" if where else \"\")\n try:\n pa_table = con.sql(filter_query).arrow()\n num_rows_total = con.sql(filter_count_query).fetchall()[0][0]\n- except duckdb.Error:\n- raise InvalidParameterError(message=\"Parameter 'where' is invalid\")\n+ except duckdb.Error as err:\n+ raise InvalidParameterError(message=\"A query parameter is invalid\") from err\n return num_rows_total, pa_table\n \n \n-def validate_where_parameter(where: str) -> None:\n- if SQL_INVALID_SYMBOLS_PATTERN.search(where):\n- raise InvalidParameterError(message=\"Parameter 'where' contains invalid symbols\")\n+def validate_query_parameter(parameter_value: str, parameter_name: str) -> None:\n+ if SQL_INVALID_SYMBOLS_PATTERN.search(parameter_value):\n+ raise InvalidParameterError(message=f\"Parameter '{parameter_name}' contains invalid symbols\")\n", "issue": "Sort the rows along a column\nSee https://github.com/huggingface/moon-landing/issues/9201 (internal)\r\n\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2023 The HuggingFace 
Authors.\n\nimport logging\nimport random\nimport re\nfrom http import HTTPStatus\nfrom typing import Optional\n\nimport anyio\nimport duckdb\nimport pyarrow as pa\nfrom datasets import Features\nfrom libapi.authentication import auth_check\nfrom libapi.duckdb import (\n get_cache_entry_from_duckdb_index_job,\n get_index_file_location_and_download_if_missing,\n)\nfrom libapi.exceptions import ApiError, InvalidParameterError, UnexpectedApiError\nfrom libapi.request import (\n get_request_parameter,\n get_request_parameter_length,\n get_request_parameter_offset,\n)\nfrom libapi.response import create_response\nfrom libapi.utils import (\n Endpoint,\n get_json_api_error_response,\n get_json_error_response,\n get_json_ok_response,\n)\nfrom libcommon.duckdb_utils import duckdb_index_is_partial\nfrom libcommon.prometheus import StepProfiler\nfrom libcommon.storage import StrPath, clean_dir\nfrom libcommon.storage_client import StorageClient\nfrom libcommon.viewer_utils.features import get_supported_unsupported_columns\nfrom starlette.requests import Request\nfrom starlette.responses import Response\n\nfrom search.duckdb_connection import duckdb_connect\n\nFILTER_QUERY = \"\"\"\\\n SELECT {columns}\n FROM data\n WHERE {where}\n LIMIT {limit}\n OFFSET {offset}\"\"\"\n\nFILTER_COUNT_QUERY = \"\"\"\\\n SELECT COUNT(*)\n FROM data\n WHERE {where}\"\"\"\n\nSQL_INVALID_SYMBOLS = \"|\".join([\";\", \"--\", r\"/\\*\", r\"\\*/\"])\nSQL_INVALID_SYMBOLS_PATTERN = re.compile(rf\"(?:{SQL_INVALID_SYMBOLS})\", flags=re.IGNORECASE)\n\nlogger = logging.getLogger(__name__)\n\n\ndef create_filter_endpoint(\n duckdb_index_file_directory: StrPath,\n target_revision: str,\n cached_assets_storage_client: StorageClient,\n blocked_datasets: list[str],\n hf_endpoint: str,\n hf_token: Optional[str] = None,\n hf_jwt_public_keys: Optional[list[str]] = None,\n hf_jwt_algorithm: Optional[str] = None,\n external_auth_url: Optional[str] = None,\n hf_timeout_seconds: Optional[float] = None,\n max_age_long: int = 0,\n max_age_short: int = 0,\n storage_clients: Optional[list[StorageClient]] = None,\n extensions_directory: Optional[str] = None,\n clean_cache_proba: float = 0.0,\n expiredTimeIntervalSeconds: int = 60,\n) -> Endpoint:\n async def filter_endpoint(request: Request) -> Response:\n revision: Optional[str] = None\n with StepProfiler(method=\"filter_endpoint\", step=\"all\"):\n try:\n with StepProfiler(method=\"filter_endpoint\", step=\"validate parameters\"):\n dataset = get_request_parameter(request, \"dataset\", required=True)\n config = get_request_parameter(request, \"config\", required=True)\n split = get_request_parameter(request, \"split\", required=True)\n where = get_request_parameter(request, \"where\", required=True)\n validate_where_parameter(where)\n offset = get_request_parameter_offset(request)\n length = get_request_parameter_length(request)\n logger.info(f\"/filter, {dataset=}, {config=}, {split=}, {where=}, {offset=}, {length=}\")\n with StepProfiler(method=\"filter_endpoint\", step=\"check authentication\"):\n # If auth_check fails, it will raise an exception that will be caught below\n await auth_check(\n dataset=dataset,\n external_auth_url=external_auth_url,\n request=request,\n hf_jwt_public_keys=hf_jwt_public_keys,\n hf_jwt_algorithm=hf_jwt_algorithm,\n hf_timeout_seconds=hf_timeout_seconds,\n )\n with StepProfiler(method=\"filter_endpoint\", step=\"validate indexing was done\"):\n # no cache data is needed to download the index file\n # but will help to validate if indexing was done\n 
duckdb_index_cache_entry = get_cache_entry_from_duckdb_index_job(\n dataset=dataset,\n config=config,\n split=split,\n hf_endpoint=hf_endpoint,\n hf_token=hf_token,\n hf_timeout_seconds=hf_timeout_seconds,\n blocked_datasets=blocked_datasets,\n storage_clients=storage_clients,\n )\n revision = duckdb_index_cache_entry[\"dataset_git_revision\"]\n if duckdb_index_cache_entry[\"http_status\"] != HTTPStatus.OK:\n return get_json_error_response(\n content=duckdb_index_cache_entry[\"content\"],\n status_code=duckdb_index_cache_entry[\"http_status\"],\n max_age=max_age_short,\n error_code=duckdb_index_cache_entry[\"error_code\"],\n revision=revision,\n )\n\n # check if the index is on the full dataset or if it's partial\n url = duckdb_index_cache_entry[\"content\"][\"url\"]\n filename = duckdb_index_cache_entry[\"content\"][\"filename\"]\n index_size = duckdb_index_cache_entry[\"content\"][\"size\"]\n partial = duckdb_index_is_partial(url)\n\n with StepProfiler(method=\"filter_endpoint\", step=\"download index file if missing\"):\n index_file_location = await get_index_file_location_and_download_if_missing(\n duckdb_index_file_directory=duckdb_index_file_directory,\n dataset=dataset,\n config=config,\n split=split,\n revision=revision,\n filename=filename,\n size_bytes=index_size,\n url=url,\n target_revision=target_revision,\n hf_token=hf_token,\n )\n with StepProfiler(method=\"filter_endpoint\", step=\"get features\"):\n # in split-duckdb-index we always add the ROW_IDX_COLUMN column\n # see https://github.com/huggingface/dataset-viewer/blob/main/services/worker/src/worker/job_runners/split/duckdb_index.py#L305\n features = Features.from_dict(duckdb_index_cache_entry[\"content\"][\"features\"])\n with StepProfiler(method=\"filter_endpoint\", step=\"get supported and unsupported columns\"):\n supported_columns, unsupported_columns = get_supported_unsupported_columns(\n features,\n )\n with StepProfiler(method=\"filter_endpoint\", step=\"execute filter query\"):\n num_rows_total, pa_table = await anyio.to_thread.run_sync(\n execute_filter_query,\n index_file_location,\n supported_columns,\n where,\n length,\n offset,\n extensions_directory,\n )\n # no need to do it every time\n # TODO: Will be moved to another process in parallel\n if random.random() < clean_cache_proba: # nosec\n with StepProfiler(method=\"filter_endpoint\", step=\"clean old indexes\"):\n clean_dir(\n duckdb_index_file_directory,\n expiredTimeIntervalSeconds,\n )\n with StepProfiler(method=\"filter_endpoint\", step=\"create response\"):\n response = await create_response(\n dataset=dataset,\n revision=revision,\n config=config,\n split=split,\n storage_client=cached_assets_storage_client,\n pa_table=pa_table,\n offset=offset,\n features=features or Features.from_arrow_schema(pa_table.schema),\n unsupported_columns=unsupported_columns,\n num_rows_total=num_rows_total,\n partial=partial,\n use_row_idx_column=True,\n )\n with StepProfiler(method=\"filter_endpoint\", step=\"generate the OK response\"):\n return get_json_ok_response(content=response, max_age=max_age_long, revision=revision)\n except Exception as e:\n error = e if isinstance(e, ApiError) else UnexpectedApiError(\"Unexpected error.\", e)\n with StepProfiler(method=\"filter_endpoint\", step=\"generate API error response\"):\n return get_json_api_error_response(error=error, max_age=max_age_short, revision=revision)\n\n return filter_endpoint\n\n\ndef execute_filter_query(\n index_file_location: str,\n columns: list[str],\n where: str,\n limit: int,\n offset: int,\n 
extensions_directory: Optional[str] = None,\n) -> tuple[int, pa.Table]:\n with duckdb_connect(extensions_directory=extensions_directory, database=index_file_location) as con:\n filter_query = FILTER_QUERY.format(\n columns=\",\".join([f'\"{column}\"' for column in columns]),\n where=where,\n limit=limit,\n offset=offset,\n )\n filter_count_query = FILTER_COUNT_QUERY.format(where=where)\n try:\n pa_table = con.sql(filter_query).arrow()\n num_rows_total = con.sql(filter_count_query).fetchall()[0][0]\n except duckdb.Error:\n raise InvalidParameterError(message=\"Parameter 'where' is invalid\")\n return num_rows_total, pa_table\n\n\ndef validate_where_parameter(where: str) -> None:\n if SQL_INVALID_SYMBOLS_PATTERN.search(where):\n raise InvalidParameterError(message=\"Parameter 'where' contains invalid symbols\")\n", "path": "services/search/src/search/routes/filter.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2023 The HuggingFace Authors.\n\nimport logging\nimport random\nimport re\nfrom http import HTTPStatus\nfrom typing import Optional\n\nimport anyio\nimport duckdb\nimport pyarrow as pa\nfrom datasets import Features\nfrom libapi.authentication import auth_check\nfrom libapi.duckdb import (\n get_cache_entry_from_duckdb_index_job,\n get_index_file_location_and_download_if_missing,\n)\nfrom libapi.exceptions import ApiError, InvalidParameterError, UnexpectedApiError\nfrom libapi.request import (\n get_request_parameter,\n get_request_parameter_length,\n get_request_parameter_offset,\n)\nfrom libapi.response import create_response\nfrom libapi.utils import (\n Endpoint,\n get_json_api_error_response,\n get_json_error_response,\n get_json_ok_response,\n)\nfrom libcommon.duckdb_utils import duckdb_index_is_partial\nfrom libcommon.prometheus import StepProfiler\nfrom libcommon.storage import StrPath, clean_dir\nfrom libcommon.storage_client import StorageClient\nfrom libcommon.viewer_utils.features import get_supported_unsupported_columns\nfrom starlette.requests import Request\nfrom starlette.responses import Response\n\nfrom search.duckdb_connection import duckdb_connect\n\nFILTER_QUERY = \"\"\"\\\n SELECT {columns}\n FROM data\n {where}\n {orderby}\n LIMIT {limit}\n OFFSET {offset}\"\"\"\n\nFILTER_COUNT_QUERY = \"\"\"\\\n SELECT COUNT(*)\n FROM data\n {where}\"\"\"\n\nSQL_INVALID_SYMBOLS = \"|\".join([\";\", \"--\", r\"/\\*\", r\"\\*/\"])\nSQL_INVALID_SYMBOLS_PATTERN = re.compile(rf\"(?:{SQL_INVALID_SYMBOLS})\", flags=re.IGNORECASE)\n\nlogger = logging.getLogger(__name__)\n\n\ndef create_filter_endpoint(\n duckdb_index_file_directory: StrPath,\n target_revision: str,\n cached_assets_storage_client: StorageClient,\n blocked_datasets: list[str],\n hf_endpoint: str,\n hf_token: Optional[str] = None,\n hf_jwt_public_keys: Optional[list[str]] = None,\n hf_jwt_algorithm: Optional[str] = None,\n external_auth_url: Optional[str] = None,\n hf_timeout_seconds: Optional[float] = None,\n max_age_long: int = 0,\n max_age_short: int = 0,\n storage_clients: Optional[list[StorageClient]] = None,\n extensions_directory: Optional[str] = None,\n clean_cache_proba: float = 0.0,\n expiredTimeIntervalSeconds: int = 60,\n) -> Endpoint:\n async def filter_endpoint(request: Request) -> Response:\n revision: Optional[str] = None\n with StepProfiler(method=\"filter_endpoint\", step=\"all\"):\n try:\n with StepProfiler(method=\"filter_endpoint\", step=\"validate parameters\"):\n dataset = get_request_parameter(request, \"dataset\", required=True)\n config = 
get_request_parameter(request, \"config\", required=True)\n split = get_request_parameter(request, \"split\", required=True)\n where = get_request_parameter(request, \"where\")\n validate_query_parameter(where, \"where\")\n orderby = get_request_parameter(request, \"orderby\")\n validate_query_parameter(orderby, \"orderby\")\n offset = get_request_parameter_offset(request)\n length = get_request_parameter_length(request)\n logger.info(\n f\"/filter, {dataset=}, {config=}, {split=}, {where=}, {orderby=}, {offset=}, {length=}\"\n )\n with StepProfiler(method=\"filter_endpoint\", step=\"check authentication\"):\n # If auth_check fails, it will raise an exception that will be caught below\n await auth_check(\n dataset=dataset,\n external_auth_url=external_auth_url,\n request=request,\n hf_jwt_public_keys=hf_jwt_public_keys,\n hf_jwt_algorithm=hf_jwt_algorithm,\n hf_timeout_seconds=hf_timeout_seconds,\n )\n with StepProfiler(method=\"filter_endpoint\", step=\"validate indexing was done\"):\n # no cache data is needed to download the index file\n # but will help to validate if indexing was done\n duckdb_index_cache_entry = get_cache_entry_from_duckdb_index_job(\n dataset=dataset,\n config=config,\n split=split,\n hf_endpoint=hf_endpoint,\n hf_token=hf_token,\n hf_timeout_seconds=hf_timeout_seconds,\n blocked_datasets=blocked_datasets,\n storage_clients=storage_clients,\n )\n revision = duckdb_index_cache_entry[\"dataset_git_revision\"]\n if duckdb_index_cache_entry[\"http_status\"] != HTTPStatus.OK:\n return get_json_error_response(\n content=duckdb_index_cache_entry[\"content\"],\n status_code=duckdb_index_cache_entry[\"http_status\"],\n max_age=max_age_short,\n error_code=duckdb_index_cache_entry[\"error_code\"],\n revision=revision,\n )\n\n # check if the index is on the full dataset or if it's partial\n url = duckdb_index_cache_entry[\"content\"][\"url\"]\n filename = duckdb_index_cache_entry[\"content\"][\"filename\"]\n index_size = duckdb_index_cache_entry[\"content\"][\"size\"]\n partial = duckdb_index_is_partial(url)\n\n with StepProfiler(method=\"filter_endpoint\", step=\"download index file if missing\"):\n index_file_location = await get_index_file_location_and_download_if_missing(\n duckdb_index_file_directory=duckdb_index_file_directory,\n dataset=dataset,\n config=config,\n split=split,\n revision=revision,\n filename=filename,\n size_bytes=index_size,\n url=url,\n target_revision=target_revision,\n hf_token=hf_token,\n )\n with StepProfiler(method=\"filter_endpoint\", step=\"get features\"):\n # in split-duckdb-index we always add the ROW_IDX_COLUMN column\n # see https://github.com/huggingface/dataset-viewer/blob/main/services/worker/src/worker/job_runners/split/duckdb_index.py#L305\n features = Features.from_dict(duckdb_index_cache_entry[\"content\"][\"features\"])\n with StepProfiler(method=\"filter_endpoint\", step=\"get supported and unsupported columns\"):\n supported_columns, unsupported_columns = get_supported_unsupported_columns(\n features,\n )\n with StepProfiler(method=\"filter_endpoint\", step=\"execute filter query\"):\n num_rows_total, pa_table = await anyio.to_thread.run_sync(\n execute_filter_query,\n index_file_location,\n supported_columns,\n where,\n orderby,\n length,\n offset,\n extensions_directory,\n )\n # no need to do it every time\n # TODO: Will be moved to another process in parallel\n if random.random() < clean_cache_proba: # nosec\n with StepProfiler(method=\"filter_endpoint\", step=\"clean old indexes\"):\n clean_dir(\n duckdb_index_file_directory,\n 
expiredTimeIntervalSeconds,\n )\n with StepProfiler(method=\"filter_endpoint\", step=\"create response\"):\n response = await create_response(\n dataset=dataset,\n revision=revision,\n config=config,\n split=split,\n storage_client=cached_assets_storage_client,\n pa_table=pa_table,\n offset=offset,\n features=features or Features.from_arrow_schema(pa_table.schema),\n unsupported_columns=unsupported_columns,\n num_rows_total=num_rows_total,\n partial=partial,\n use_row_idx_column=True,\n )\n with StepProfiler(method=\"filter_endpoint\", step=\"generate the OK response\"):\n return get_json_ok_response(content=response, max_age=max_age_long, revision=revision)\n except Exception as e:\n error = e if isinstance(e, ApiError) else UnexpectedApiError(\"Unexpected error.\", e)\n with StepProfiler(method=\"filter_endpoint\", step=\"generate API error response\"):\n return get_json_api_error_response(error=error, max_age=max_age_short, revision=revision)\n\n return filter_endpoint\n\n\ndef execute_filter_query(\n index_file_location: str,\n columns: list[str],\n where: str,\n orderby: str,\n limit: int,\n offset: int,\n extensions_directory: Optional[str] = None,\n) -> tuple[int, pa.Table]:\n with duckdb_connect(extensions_directory=extensions_directory, database=index_file_location) as con:\n filter_query = FILTER_QUERY.format(\n columns=\",\".join([f'\"{column}\"' for column in columns]),\n where=f\"WHERE {where}\" if where else \"\",\n orderby=f\"ORDER BY {orderby}\" if orderby else \"\",\n limit=limit,\n offset=offset,\n )\n filter_count_query = FILTER_COUNT_QUERY.format(where=f\"WHERE {where}\" if where else \"\")\n try:\n pa_table = con.sql(filter_query).arrow()\n num_rows_total = con.sql(filter_count_query).fetchall()[0][0]\n except duckdb.Error as err:\n raise InvalidParameterError(message=\"A query parameter is invalid\") from err\n return num_rows_total, pa_table\n\n\ndef validate_query_parameter(parameter_value: str, parameter_name: str) -> None:\n if SQL_INVALID_SYMBOLS_PATTERN.search(parameter_value):\n raise InvalidParameterError(message=f\"Parameter '{parameter_name}' contains invalid symbols\")\n", "path": "services/search/src/search/routes/filter.py"}]}
| 2,681 | 836 |
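The golden diff above makes `where` optional and adds an `orderby` parameter, injecting `WHERE ...` and `ORDER BY ...` into the query template only when the corresponding value is non-empty. The sketch below isolates just that string-assembly step plus the same naive symbol check; it does not touch DuckDB or the endpoint plumbing, and the function name `build_filter_query` is an assumption, not part of the service code.

```python
# Simplified sketch of the query assembly introduced by the golden diff above:
# optional WHERE and ORDER BY clauses are added only when their parameter is
# non-empty, and both parameters go through the same invalid-symbol check.
import re

FILTER_QUERY = """\
SELECT {columns}
FROM data
{where}
{orderby}
LIMIT {limit}
OFFSET {offset}"""

SQL_INVALID_SYMBOLS_PATTERN = re.compile(r"(?:;|--|/\*|\*/)", flags=re.IGNORECASE)


def build_filter_query(columns, where="", orderby="", limit=10, offset=0):
    for name, value in (("where", where), ("orderby", orderby)):
        if value and SQL_INVALID_SYMBOLS_PATTERN.search(value):
            raise ValueError(f"Parameter {name!r} contains invalid symbols")
    return FILTER_QUERY.format(
        columns=",".join(f'"{c}"' for c in columns),
        where=f"WHERE {where}" if where else "",
        orderby=f"ORDER BY {orderby}" if orderby else "",
        limit=limit,
        offset=offset,
    )


if __name__ == "__main__":
    print(build_filter_query(["title", "year"], orderby="year DESC", limit=5))
    print(build_filter_query(["title"], where="year > 2000", offset=20))
```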
gh_patches_debug_57241
|
rasdani/github-patches
|
git_diff
|
pymodbus-dev__pymodbus-1288
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
REPL client Remove `unit` argument
<!--
Please use the Pymodbus gitter channel at https://gitter.im/pymodbus_dev/Lobby or Stack Overflow (tag [pymodbus](https://stackoverflow.com/questions/tagged/pymodbus)) for
support questions.
Before opening a new issue, make sure you do the following:
* check that your issue isn't already filed: https://github.com/riptideio/pymodbus/issues
* check the discussions forum https://github.com/riptideio/pymodbus/discussions
 * prepare a short, runnable example that reproduces the issue with the latest development version of Pymodbus
-->
### Versions
* Python:
* OS:
* Pymodbus:
* Modbus Hardware (if used):
### Pymodbus Specific
* Server: tcp/rtu/ascii - NA
* Client: tcp/rtu/ascii - REPL tcp
### Description
Run REPL client to interact with modbus server
### Code and Logs
<img width="561" alt="image" src="https://user-images.githubusercontent.com/9276974/212352120-db97d9ec-feba-4a55-a72b-bf82bf383745.png">
<img width="576" alt="image" src="https://user-images.githubusercontent.com/9276974/212352174-51b45a43-ac93-49ac-9682-1e041b8d0c8e.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pymodbus/repl/client/helper.py`
Content:
```
1 """Helper Module for REPL actions."""
2 import inspect
3
4 # pylint: disable=missing-type-doc
5 import json
6 from collections import OrderedDict
7
8 import pygments
9 from prompt_toolkit import print_formatted_text
10 from prompt_toolkit.formatted_text import HTML, PygmentsTokens
11 from pygments.lexers.data import JsonLexer
12
13 from pymodbus.payload import BinaryPayloadDecoder, Endian
14
15
16 predicate = inspect.isfunction
17 argspec = inspect.signature
18
19
20 FORMATTERS = {
21 "int8": "decode_8bit_int",
22 "int16": "decode_16bit_int",
23 "int32": "decode_32bit_int",
24 "int64": "decode_64bit_int",
25 "uint8": "decode_8bit_uint",
26 "uint16": "decode_16bit_uint",
27 "uint32": "decode_32bit_uint",
28 "uint64": "decode_64bit_int",
29 "float16": "decode_16bit_float",
30 "float32": "decode_32bit_float",
31 "float64": "decode_64bit_float",
32 }
33
34
35 DEFAULT_KWARGS = {"unit": "Slave address"}
36
37 OTHER_COMMANDS = {
38 "result.raw": "Show RAW Result",
39 "result.decode": "Decode register response to known formats",
40 }
41 EXCLUDE = ["execute", "recv", "send", "trace", "set_debug"]
42 CLIENT_METHODS = [
43 "connect",
44 "close",
45 "idle_time",
46 "is_socket_open",
47 "get_port",
48 "set_port",
49 "get_stopbits",
50 "set_stopbits",
51 "get_bytesize",
52 "set_bytesize",
53 "get_parity",
54 "set_parity",
55 "get_baudrate",
56 "set_baudrate",
57 "get_timeout",
58 "set_timeout",
59 "get_serial_settings",
60 ]
61 CLIENT_ATTRIBUTES = []
62
63
64 class Command:
65 """Class representing Commands to be consumed by Completer."""
66
67 def __init__(self, name, signature, doc, unit=False):
68 """Initialize.
69
70 :param name: Name of the command
71 :param signature: inspect object
72 :param doc: Doc string for the command
73 :param unit: Use unit as additional argument in the command .
74 """
75 self.name = name
76 self.doc = doc.split("\n") if doc else " ".join(name.split("_"))
77 self.help_text = self._create_help()
78 self.param_help = self._create_arg_help()
79 if signature:
80 self._params = signature.parameters
81 self.args = self.create_completion()
82 else:
83 self._params = ""
84
85 if self.name.startswith("client.") and unit:
86 self.args.update(**DEFAULT_KWARGS)
87
88 def _create_help(self):
89 """Create help."""
90 doc = filter(lambda d: d, self.doc)
91 cmd_help = list(
92 filter(
93 lambda x: not x.startswith(":param") and not x.startswith(":return"),
94 doc,
95 )
96 )
97 return " ".join(cmd_help).strip()
98
99 def _create_arg_help(self):
100 """Create arg help."""
101 param_dict = {}
102 params = list(filter(lambda d: d.strip().startswith(":param"), self.doc))
103 for param in params:
104 param, param_help = param.split(":param")[1].strip().split(":")
105 param_dict[param] = param_help
106 return param_dict
107
108 def create_completion(self):
109 """Create command completion meta data.
110
111 :return:
112 """
113 words = {}
114
115 def _create(entry, default):
116 if entry not in ["self", "kwargs"]:
117 if isinstance(default, (int, str)):
118 entry += f"={default}"
119 return entry
120 return None
121
122 for arg in self._params.values():
123 if entry := _create(arg.name, arg.default):
124 entry, meta = self.get_meta(entry)
125 words[entry] = meta
126
127 return words
128
129 def get_completion(self):
130 """Get a list of completions.
131
132 :return:
133 """
134 return self.args.keys()
135
136 def get_meta(self, cmd):
137 """Get Meta info of a given command.
138
139 :param cmd: Name of command.
140 :return: Dict containing meta info.
141 """
142 cmd = cmd.strip()
143 cmd = cmd.split("=")[0].strip()
144 return cmd, self.param_help.get(cmd, "")
145
146 def __str__(self):
147 """Return string representation."""
148 if self.doc:
149 return "Command {:>50}{:<20}".format( # pylint: disable=consider-using-f-string
150 self.name, self.doc
151 )
152 return f"Command {self.name}"
153
154
155 def _get_requests(members):
156 """Get requests."""
157 commands = list(
158 filter(
159 lambda x: (
160 x[0] not in EXCLUDE and x[0] not in CLIENT_METHODS and callable(x[1])
161 ),
162 members,
163 )
164 )
165 commands = {
166 f"client.{c[0]}": Command(
167 f"client.{c[0]}", argspec(c[1]), inspect.getdoc(c[1]), unit=True
168 )
169 for c in commands
170 if not c[0].startswith("_")
171 }
172 return commands
173
174
175 def _get_client_methods(members):
176 """Get client methods."""
177 commands = list(
178 filter(lambda x: (x[0] not in EXCLUDE and x[0] in CLIENT_METHODS), members)
179 )
180 commands = {
181 "client.{c[0]}": Command(
182 "client.{c[0]}", argspec(c[1]), inspect.getdoc(c[1]), unit=False
183 )
184 for c in commands
185 if not c[0].startswith("_")
186 }
187 return commands
188
189
190 def _get_client_properties(members):
191 """Get client properties."""
192 global CLIENT_ATTRIBUTES # pylint: disable=global-variable-not-assigned
193 commands = list(filter(lambda x: not callable(x[1]), members))
194 commands = {
195 f"client.{c[0]}": Command(f"client.{c[0]}", None, "Read Only!", unit=False)
196 for c in commands
197 if (not c[0].startswith("_") and isinstance(c[1], (str, int, float)))
198 }
199 CLIENT_ATTRIBUTES.extend(list(commands.keys()))
200 return commands
201
202
203 def get_commands(client):
204 """Retrieve all required methods and attributes.
205
206 Of a client object and convert it to commands.
207
208 :param client: Modbus Client object.
209 :return:
210 """
211 commands = {}
212 members = inspect.getmembers(client)
213 requests = _get_requests(members)
214 client_methods = _get_client_methods(members)
215 client_attr = _get_client_properties(members)
216
217 result_commands = inspect.getmembers(Result, predicate=predicate)
218 result_commands = {
219 f"result.{c[0]}": Command(f"result.{c[0]}", argspec(c[1]), inspect.getdoc(c[1]))
220 for c in result_commands
221 if (not c[0].startswith("_") and c[0] != "print_result")
222 }
223 commands.update(requests)
224 commands.update(client_methods)
225 commands.update(client_attr)
226 commands.update(result_commands)
227 return commands
228
229
230 class Result:
231 """Represent result command."""
232
233 function_code = None
234 data = None
235
236 def __init__(self, result):
237 """Initialize.
238
239 :param result: Response of a modbus command.
240 """
241 if isinstance(result, dict): # Modbus response
242 self.function_code = result.pop("function_code", None)
243 self.data = dict(result)
244 else:
245 self.data = result
246
247 def decode(self, formatters, byte_order="big", word_order="big"):
248 """Decode the register response to known formatters.
249
250 :param formatters: int8/16/32/64, uint8/16/32/64, float32/64
251 :param byte_order: little/big
252 :param word_order: little/big
253 """
254 # Read Holding Registers (3)
255 # Read Input Registers (4)
256 # Read Write Registers (23)
257 if not isinstance(formatters, (list, tuple)):
258 formatters = [formatters]
259
260 if self.function_code not in [3, 4, 23]:
261 print_formatted_text(HTML("<red>Decoder works only for registers!!</red>"))
262 return
263 byte_order = (
264 Endian.Little if byte_order.strip().lower() == "little" else Endian.Big
265 )
266 word_order = (
267 Endian.Little if word_order.strip().lower() == "little" else Endian.Big
268 )
269 decoder = BinaryPayloadDecoder.fromRegisters(
270 self.data.get("registers"), byteorder=byte_order, wordorder=word_order
271 )
272 for formatter in formatters:
273 if not (formatter := FORMATTERS.get(formatter)):
274 print_formatted_text(
275 HTML(f"<red>Invalid Formatter - {formatter}!!</red>")
276 )
277 return
278 decoded = getattr(decoder, formatter)()
279 self.print_result(decoded)
280
281 def raw(self):
282 """Return raw result dict."""
283 self.print_result()
284
285 def _process_dict(self, use_dict):
286 """Process dict."""
287 new_dict = OrderedDict()
288 for k, v_item in use_dict.items():
289 if isinstance(v_item, bytes):
290 v_item = v_item.decode("utf-8")
291 elif isinstance(v_item, dict):
292 v_item = self._process_dict(v_item)
293 elif isinstance(v_item, (list, tuple)):
294 v_item = [
295 v1.decode("utf-8") if isinstance(v1, bytes) else v1 for v1 in v_item
296 ]
297 new_dict[k] = v_item
298 return new_dict
299
300 def print_result(self, data=None):
301 """Print result object pretty.
302
303 :param data: Data to be printed.
304 """
305 data = data or self.data
306 if isinstance(data, dict):
307 data = self._process_dict(data)
308 elif isinstance(data, (list, tuple)):
309 data = [v.decode("utf-8") if isinstance(v, bytes) else v for v in data]
310 elif isinstance(data, bytes):
311 data = data.decode("utf-8")
312 tokens = list(pygments.lex(json.dumps(data, indent=4), lexer=JsonLexer()))
313 print_formatted_text(PygmentsTokens(tokens))
314
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pymodbus/repl/client/helper.py b/pymodbus/repl/client/helper.py
--- a/pymodbus/repl/client/helper.py
+++ b/pymodbus/repl/client/helper.py
@@ -164,7 +164,7 @@
)
commands = {
f"client.{c[0]}": Command(
- f"client.{c[0]}", argspec(c[1]), inspect.getdoc(c[1]), unit=True
+ f"client.{c[0]}", argspec(c[1]), inspect.getdoc(c[1]), unit=False
)
for c in commands
if not c[0].startswith("_")
|
{"golden_diff": "diff --git a/pymodbus/repl/client/helper.py b/pymodbus/repl/client/helper.py\n--- a/pymodbus/repl/client/helper.py\n+++ b/pymodbus/repl/client/helper.py\n@@ -164,7 +164,7 @@\n )\n commands = {\n f\"client.{c[0]}\": Command(\n- f\"client.{c[0]}\", argspec(c[1]), inspect.getdoc(c[1]), unit=True\n+ f\"client.{c[0]}\", argspec(c[1]), inspect.getdoc(c[1]), unit=False\n )\n for c in commands\n if not c[0].startswith(\"_\")\n", "issue": "REPL client Remove `unit` argument\n<!--\r\nPlease use the Pymodbus gitter channel at https://gitter.im/pymodbus_dev/Lobby or Stack Overflow(tag [pymodbus](https://stackoverflow.com/questions/tagged/pymodbus) for\r\nsupport questions.\r\n\r\nBefore opening a new issue, make sure you do the following:\r\n * check that your issue isn't already filed: https://github.com/riptideio/pymodbus/issues\r\n * check the discussions forum https://github.com/riptideio/pymodbus/discussions\r\n * prepare a short, runnable example that reproduce the issue with the latest development version of Pymodbus\r\n-->\r\n\r\n### Versions\r\n\r\n* Python:\r\n* OS:\r\n* Pymodbus:\r\n* Modbus Hardware (if used):\r\n\r\n### Pymodbus Specific\r\n* Server: tcp/rtu/ascii - NA\r\n* Client: tcp/rtu/ascii - REPL tcp\r\n\r\n### Description\r\n\r\nRun REPL client to interact with modbus server\r\n\r\n### Code and Logs\r\n\r\n<img width=\"561\" alt=\"image\" src=\"https://user-images.githubusercontent.com/9276974/212352120-db97d9ec-feba-4a55-a72b-bf82bf383745.png\">\r\n\r\n<img width=\"576\" alt=\"image\" src=\"https://user-images.githubusercontent.com/9276974/212352174-51b45a43-ac93-49ac-9682-1e041b8d0c8e.png\">\r\n\r\n\n", "before_files": [{"content": "\"\"\"Helper Module for REPL actions.\"\"\"\nimport inspect\n\n# pylint: disable=missing-type-doc\nimport json\nfrom collections import OrderedDict\n\nimport pygments\nfrom prompt_toolkit import print_formatted_text\nfrom prompt_toolkit.formatted_text import HTML, PygmentsTokens\nfrom pygments.lexers.data import JsonLexer\n\nfrom pymodbus.payload import BinaryPayloadDecoder, Endian\n\n\npredicate = inspect.isfunction\nargspec = inspect.signature\n\n\nFORMATTERS = {\n \"int8\": \"decode_8bit_int\",\n \"int16\": \"decode_16bit_int\",\n \"int32\": \"decode_32bit_int\",\n \"int64\": \"decode_64bit_int\",\n \"uint8\": \"decode_8bit_uint\",\n \"uint16\": \"decode_16bit_uint\",\n \"uint32\": \"decode_32bit_uint\",\n \"uint64\": \"decode_64bit_int\",\n \"float16\": \"decode_16bit_float\",\n \"float32\": \"decode_32bit_float\",\n \"float64\": \"decode_64bit_float\",\n}\n\n\nDEFAULT_KWARGS = {\"unit\": \"Slave address\"}\n\nOTHER_COMMANDS = {\n \"result.raw\": \"Show RAW Result\",\n \"result.decode\": \"Decode register response to known formats\",\n}\nEXCLUDE = [\"execute\", \"recv\", \"send\", \"trace\", \"set_debug\"]\nCLIENT_METHODS = [\n \"connect\",\n \"close\",\n \"idle_time\",\n \"is_socket_open\",\n \"get_port\",\n \"set_port\",\n \"get_stopbits\",\n \"set_stopbits\",\n \"get_bytesize\",\n \"set_bytesize\",\n \"get_parity\",\n \"set_parity\",\n \"get_baudrate\",\n \"set_baudrate\",\n \"get_timeout\",\n \"set_timeout\",\n \"get_serial_settings\",\n]\nCLIENT_ATTRIBUTES = []\n\n\nclass Command:\n \"\"\"Class representing Commands to be consumed by Completer.\"\"\"\n\n def __init__(self, name, signature, doc, unit=False):\n \"\"\"Initialize.\n\n :param name: Name of the command\n :param signature: inspect object\n :param doc: Doc string for the command\n :param unit: Use unit as additional argument in the command .\n \"\"\"\n self.name = 
name\n self.doc = doc.split(\"\\n\") if doc else \" \".join(name.split(\"_\"))\n self.help_text = self._create_help()\n self.param_help = self._create_arg_help()\n if signature:\n self._params = signature.parameters\n self.args = self.create_completion()\n else:\n self._params = \"\"\n\n if self.name.startswith(\"client.\") and unit:\n self.args.update(**DEFAULT_KWARGS)\n\n def _create_help(self):\n \"\"\"Create help.\"\"\"\n doc = filter(lambda d: d, self.doc)\n cmd_help = list(\n filter(\n lambda x: not x.startswith(\":param\") and not x.startswith(\":return\"),\n doc,\n )\n )\n return \" \".join(cmd_help).strip()\n\n def _create_arg_help(self):\n \"\"\"Create arg help.\"\"\"\n param_dict = {}\n params = list(filter(lambda d: d.strip().startswith(\":param\"), self.doc))\n for param in params:\n param, param_help = param.split(\":param\")[1].strip().split(\":\")\n param_dict[param] = param_help\n return param_dict\n\n def create_completion(self):\n \"\"\"Create command completion meta data.\n\n :return:\n \"\"\"\n words = {}\n\n def _create(entry, default):\n if entry not in [\"self\", \"kwargs\"]:\n if isinstance(default, (int, str)):\n entry += f\"={default}\"\n return entry\n return None\n\n for arg in self._params.values():\n if entry := _create(arg.name, arg.default):\n entry, meta = self.get_meta(entry)\n words[entry] = meta\n\n return words\n\n def get_completion(self):\n \"\"\"Get a list of completions.\n\n :return:\n \"\"\"\n return self.args.keys()\n\n def get_meta(self, cmd):\n \"\"\"Get Meta info of a given command.\n\n :param cmd: Name of command.\n :return: Dict containing meta info.\n \"\"\"\n cmd = cmd.strip()\n cmd = cmd.split(\"=\")[0].strip()\n return cmd, self.param_help.get(cmd, \"\")\n\n def __str__(self):\n \"\"\"Return string representation.\"\"\"\n if self.doc:\n return \"Command {:>50}{:<20}\".format( # pylint: disable=consider-using-f-string\n self.name, self.doc\n )\n return f\"Command {self.name}\"\n\n\ndef _get_requests(members):\n \"\"\"Get requests.\"\"\"\n commands = list(\n filter(\n lambda x: (\n x[0] not in EXCLUDE and x[0] not in CLIENT_METHODS and callable(x[1])\n ),\n members,\n )\n )\n commands = {\n f\"client.{c[0]}\": Command(\n f\"client.{c[0]}\", argspec(c[1]), inspect.getdoc(c[1]), unit=True\n )\n for c in commands\n if not c[0].startswith(\"_\")\n }\n return commands\n\n\ndef _get_client_methods(members):\n \"\"\"Get client methods.\"\"\"\n commands = list(\n filter(lambda x: (x[0] not in EXCLUDE and x[0] in CLIENT_METHODS), members)\n )\n commands = {\n \"client.{c[0]}\": Command(\n \"client.{c[0]}\", argspec(c[1]), inspect.getdoc(c[1]), unit=False\n )\n for c in commands\n if not c[0].startswith(\"_\")\n }\n return commands\n\n\ndef _get_client_properties(members):\n \"\"\"Get client properties.\"\"\"\n global CLIENT_ATTRIBUTES # pylint: disable=global-variable-not-assigned\n commands = list(filter(lambda x: not callable(x[1]), members))\n commands = {\n f\"client.{c[0]}\": Command(f\"client.{c[0]}\", None, \"Read Only!\", unit=False)\n for c in commands\n if (not c[0].startswith(\"_\") and isinstance(c[1], (str, int, float)))\n }\n CLIENT_ATTRIBUTES.extend(list(commands.keys()))\n return commands\n\n\ndef get_commands(client):\n \"\"\"Retrieve all required methods and attributes.\n\n Of a client object and convert it to commands.\n\n :param client: Modbus Client object.\n :return:\n \"\"\"\n commands = {}\n members = inspect.getmembers(client)\n requests = _get_requests(members)\n client_methods = _get_client_methods(members)\n client_attr 
= _get_client_properties(members)\n\n result_commands = inspect.getmembers(Result, predicate=predicate)\n result_commands = {\n f\"result.{c[0]}\": Command(f\"result.{c[0]}\", argspec(c[1]), inspect.getdoc(c[1]))\n for c in result_commands\n if (not c[0].startswith(\"_\") and c[0] != \"print_result\")\n }\n commands.update(requests)\n commands.update(client_methods)\n commands.update(client_attr)\n commands.update(result_commands)\n return commands\n\n\nclass Result:\n \"\"\"Represent result command.\"\"\"\n\n function_code = None\n data = None\n\n def __init__(self, result):\n \"\"\"Initialize.\n\n :param result: Response of a modbus command.\n \"\"\"\n if isinstance(result, dict): # Modbus response\n self.function_code = result.pop(\"function_code\", None)\n self.data = dict(result)\n else:\n self.data = result\n\n def decode(self, formatters, byte_order=\"big\", word_order=\"big\"):\n \"\"\"Decode the register response to known formatters.\n\n :param formatters: int8/16/32/64, uint8/16/32/64, float32/64\n :param byte_order: little/big\n :param word_order: little/big\n \"\"\"\n # Read Holding Registers (3)\n # Read Input Registers (4)\n # Read Write Registers (23)\n if not isinstance(formatters, (list, tuple)):\n formatters = [formatters]\n\n if self.function_code not in [3, 4, 23]:\n print_formatted_text(HTML(\"<red>Decoder works only for registers!!</red>\"))\n return\n byte_order = (\n Endian.Little if byte_order.strip().lower() == \"little\" else Endian.Big\n )\n word_order = (\n Endian.Little if word_order.strip().lower() == \"little\" else Endian.Big\n )\n decoder = BinaryPayloadDecoder.fromRegisters(\n self.data.get(\"registers\"), byteorder=byte_order, wordorder=word_order\n )\n for formatter in formatters:\n if not (formatter := FORMATTERS.get(formatter)):\n print_formatted_text(\n HTML(f\"<red>Invalid Formatter - {formatter}!!</red>\")\n )\n return\n decoded = getattr(decoder, formatter)()\n self.print_result(decoded)\n\n def raw(self):\n \"\"\"Return raw result dict.\"\"\"\n self.print_result()\n\n def _process_dict(self, use_dict):\n \"\"\"Process dict.\"\"\"\n new_dict = OrderedDict()\n for k, v_item in use_dict.items():\n if isinstance(v_item, bytes):\n v_item = v_item.decode(\"utf-8\")\n elif isinstance(v_item, dict):\n v_item = self._process_dict(v_item)\n elif isinstance(v_item, (list, tuple)):\n v_item = [\n v1.decode(\"utf-8\") if isinstance(v1, bytes) else v1 for v1 in v_item\n ]\n new_dict[k] = v_item\n return new_dict\n\n def print_result(self, data=None):\n \"\"\"Print result object pretty.\n\n :param data: Data to be printed.\n \"\"\"\n data = data or self.data\n if isinstance(data, dict):\n data = self._process_dict(data)\n elif isinstance(data, (list, tuple)):\n data = [v.decode(\"utf-8\") if isinstance(v, bytes) else v for v in data]\n elif isinstance(data, bytes):\n data = data.decode(\"utf-8\")\n tokens = list(pygments.lex(json.dumps(data, indent=4), lexer=JsonLexer()))\n print_formatted_text(PygmentsTokens(tokens))\n", "path": "pymodbus/repl/client/helper.py"}], "after_files": [{"content": "\"\"\"Helper Module for REPL actions.\"\"\"\nimport inspect\n\n# pylint: disable=missing-type-doc\nimport json\nfrom collections import OrderedDict\n\nimport pygments\nfrom prompt_toolkit import print_formatted_text\nfrom prompt_toolkit.formatted_text import HTML, PygmentsTokens\nfrom pygments.lexers.data import JsonLexer\n\nfrom pymodbus.payload import BinaryPayloadDecoder, Endian\n\n\npredicate = inspect.isfunction\nargspec = inspect.signature\n\n\nFORMATTERS = {\n 
\"int8\": \"decode_8bit_int\",\n \"int16\": \"decode_16bit_int\",\n \"int32\": \"decode_32bit_int\",\n \"int64\": \"decode_64bit_int\",\n \"uint8\": \"decode_8bit_uint\",\n \"uint16\": \"decode_16bit_uint\",\n \"uint32\": \"decode_32bit_uint\",\n \"uint64\": \"decode_64bit_int\",\n \"float16\": \"decode_16bit_float\",\n \"float32\": \"decode_32bit_float\",\n \"float64\": \"decode_64bit_float\",\n}\n\n\nDEFAULT_KWARGS = {\"unit\": \"Slave address\"}\n\nOTHER_COMMANDS = {\n \"result.raw\": \"Show RAW Result\",\n \"result.decode\": \"Decode register response to known formats\",\n}\nEXCLUDE = [\"execute\", \"recv\", \"send\", \"trace\", \"set_debug\"]\nCLIENT_METHODS = [\n \"connect\",\n \"close\",\n \"idle_time\",\n \"is_socket_open\",\n \"get_port\",\n \"set_port\",\n \"get_stopbits\",\n \"set_stopbits\",\n \"get_bytesize\",\n \"set_bytesize\",\n \"get_parity\",\n \"set_parity\",\n \"get_baudrate\",\n \"set_baudrate\",\n \"get_timeout\",\n \"set_timeout\",\n \"get_serial_settings\",\n]\nCLIENT_ATTRIBUTES = []\n\n\nclass Command:\n \"\"\"Class representing Commands to be consumed by Completer.\"\"\"\n\n def __init__(self, name, signature, doc, unit=False):\n \"\"\"Initialize.\n\n :param name: Name of the command\n :param signature: inspect object\n :param doc: Doc string for the command\n :param unit: Use unit as additional argument in the command .\n \"\"\"\n self.name = name\n self.doc = doc.split(\"\\n\") if doc else \" \".join(name.split(\"_\"))\n self.help_text = self._create_help()\n self.param_help = self._create_arg_help()\n if signature:\n self._params = signature.parameters\n self.args = self.create_completion()\n else:\n self._params = \"\"\n\n if self.name.startswith(\"client.\") and unit:\n self.args.update(**DEFAULT_KWARGS)\n\n def _create_help(self):\n \"\"\"Create help.\"\"\"\n doc = filter(lambda d: d, self.doc)\n cmd_help = list(\n filter(\n lambda x: not x.startswith(\":param\") and not x.startswith(\":return\"),\n doc,\n )\n )\n return \" \".join(cmd_help).strip()\n\n def _create_arg_help(self):\n \"\"\"Create arg help.\"\"\"\n param_dict = {}\n params = list(filter(lambda d: d.strip().startswith(\":param\"), self.doc))\n for param in params:\n param, param_help = param.split(\":param\")[1].strip().split(\":\")\n param_dict[param] = param_help\n return param_dict\n\n def create_completion(self):\n \"\"\"Create command completion meta data.\n\n :return:\n \"\"\"\n words = {}\n\n def _create(entry, default):\n if entry not in [\"self\", \"kwargs\"]:\n if isinstance(default, (int, str)):\n entry += f\"={default}\"\n return entry\n return None\n\n for arg in self._params.values():\n if entry := _create(arg.name, arg.default):\n entry, meta = self.get_meta(entry)\n words[entry] = meta\n\n return words\n\n def get_completion(self):\n \"\"\"Get a list of completions.\n\n :return:\n \"\"\"\n return self.args.keys()\n\n def get_meta(self, cmd):\n \"\"\"Get Meta info of a given command.\n\n :param cmd: Name of command.\n :return: Dict containing meta info.\n \"\"\"\n cmd = cmd.strip()\n cmd = cmd.split(\"=\")[0].strip()\n return cmd, self.param_help.get(cmd, \"\")\n\n def __str__(self):\n \"\"\"Return string representation.\"\"\"\n if self.doc:\n return \"Command {:>50}{:<20}\".format( # pylint: disable=consider-using-f-string\n self.name, self.doc\n )\n return f\"Command {self.name}\"\n\n\ndef _get_requests(members):\n \"\"\"Get requests.\"\"\"\n commands = list(\n filter(\n lambda x: (\n x[0] not in EXCLUDE and x[0] not in CLIENT_METHODS and callable(x[1])\n ),\n members,\n )\n 
)\n commands = {\n f\"client.{c[0]}\": Command(\n f\"client.{c[0]}\", argspec(c[1]), inspect.getdoc(c[1]), unit=False\n )\n for c in commands\n if not c[0].startswith(\"_\")\n }\n return commands\n\n\ndef _get_client_methods(members):\n \"\"\"Get client methods.\"\"\"\n commands = list(\n filter(lambda x: (x[0] not in EXCLUDE and x[0] in CLIENT_METHODS), members)\n )\n commands = {\n \"client.{c[0]}\": Command(\n \"client.{c[0]}\", argspec(c[1]), inspect.getdoc(c[1]), unit=False\n )\n for c in commands\n if not c[0].startswith(\"_\")\n }\n return commands\n\n\ndef _get_client_properties(members):\n \"\"\"Get client properties.\"\"\"\n global CLIENT_ATTRIBUTES # pylint: disable=global-variable-not-assigned\n commands = list(filter(lambda x: not callable(x[1]), members))\n commands = {\n f\"client.{c[0]}\": Command(f\"client.{c[0]}\", None, \"Read Only!\", unit=False)\n for c in commands\n if (not c[0].startswith(\"_\") and isinstance(c[1], (str, int, float)))\n }\n CLIENT_ATTRIBUTES.extend(list(commands.keys()))\n return commands\n\n\ndef get_commands(client):\n \"\"\"Retrieve all required methods and attributes.\n\n Of a client object and convert it to commands.\n\n :param client: Modbus Client object.\n :return:\n \"\"\"\n commands = {}\n members = inspect.getmembers(client)\n requests = _get_requests(members)\n client_methods = _get_client_methods(members)\n client_attr = _get_client_properties(members)\n\n result_commands = inspect.getmembers(Result, predicate=predicate)\n result_commands = {\n f\"result.{c[0]}\": Command(f\"result.{c[0]}\", argspec(c[1]), inspect.getdoc(c[1]))\n for c in result_commands\n if (not c[0].startswith(\"_\") and c[0] != \"print_result\")\n }\n commands.update(requests)\n commands.update(client_methods)\n commands.update(client_attr)\n commands.update(result_commands)\n return commands\n\n\nclass Result:\n \"\"\"Represent result command.\"\"\"\n\n function_code = None\n data = None\n\n def __init__(self, result):\n \"\"\"Initialize.\n\n :param result: Response of a modbus command.\n \"\"\"\n if isinstance(result, dict): # Modbus response\n self.function_code = result.pop(\"function_code\", None)\n self.data = dict(result)\n else:\n self.data = result\n\n def decode(self, formatters, byte_order=\"big\", word_order=\"big\"):\n \"\"\"Decode the register response to known formatters.\n\n :param formatters: int8/16/32/64, uint8/16/32/64, float32/64\n :param byte_order: little/big\n :param word_order: little/big\n \"\"\"\n # Read Holding Registers (3)\n # Read Input Registers (4)\n # Read Write Registers (23)\n if not isinstance(formatters, (list, tuple)):\n formatters = [formatters]\n\n if self.function_code not in [3, 4, 23]:\n print_formatted_text(HTML(\"<red>Decoder works only for registers!!</red>\"))\n return\n byte_order = (\n Endian.Little if byte_order.strip().lower() == \"little\" else Endian.Big\n )\n word_order = (\n Endian.Little if word_order.strip().lower() == \"little\" else Endian.Big\n )\n decoder = BinaryPayloadDecoder.fromRegisters(\n self.data.get(\"registers\"), byteorder=byte_order, wordorder=word_order\n )\n for formatter in formatters:\n if not (formatter := FORMATTERS.get(formatter)):\n print_formatted_text(\n HTML(f\"<red>Invalid Formatter - {formatter}!!</red>\")\n )\n return\n decoded = getattr(decoder, formatter)()\n self.print_result(decoded)\n\n def raw(self):\n \"\"\"Return raw result dict.\"\"\"\n self.print_result()\n\n def _process_dict(self, use_dict):\n \"\"\"Process dict.\"\"\"\n new_dict = OrderedDict()\n for k, v_item in 
use_dict.items():\n if isinstance(v_item, bytes):\n v_item = v_item.decode(\"utf-8\")\n elif isinstance(v_item, dict):\n v_item = self._process_dict(v_item)\n elif isinstance(v_item, (list, tuple)):\n v_item = [\n v1.decode(\"utf-8\") if isinstance(v1, bytes) else v1 for v1 in v_item\n ]\n new_dict[k] = v_item\n return new_dict\n\n def print_result(self, data=None):\n \"\"\"Print result object pretty.\n\n :param data: Data to be printed.\n \"\"\"\n data = data or self.data\n if isinstance(data, dict):\n data = self._process_dict(data)\n elif isinstance(data, (list, tuple)):\n data = [v.decode(\"utf-8\") if isinstance(v, bytes) else v for v in data]\n elif isinstance(data, bytes):\n data = data.decode(\"utf-8\")\n tokens = list(pygments.lex(json.dumps(data, indent=4), lexer=JsonLexer()))\n print_formatted_text(PygmentsTokens(tokens))\n", "path": "pymodbus/repl/client/helper.py"}]}
| 3,738 | 152 |
gh_patches_debug_11665
|
rasdani/github-patches
|
git_diff
|
CTFd__CTFd-1971
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Consider enabling THEME_FALLBACK by default
In order to improve theme development and usage, I think we should enable THEME_FALLBACK or perhaps create a way for themes to specify their parent theme.
Wordpress has a sort of similar idea: https://developer.wordpress.org/themes/advanced-topics/child-themes/
I am not sure what downsides there could be with this but I think the potential gain of much easier themes is pretty compelling.
--- END ISSUE ---
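For context on what a theme fallback means in practice, the snippet below is a minimal, generic illustration using Jinja2's `ChoiceLoader`: templates are resolved from the child theme first and fall through to a parent/core theme when missing. It is not CTFd's actual loader implementation, and the directory paths are invented for the example.
```python
# Minimal sketch of template fallback with Jinja2 (not CTFd internals).
# Paths are illustrative; a child theme only ships the templates it overrides.
from jinja2 import ChoiceLoader, Environment, FileSystemLoader

env = Environment(
    loader=ChoiceLoader(
        [
            FileSystemLoader("CTFd/themes/my-child-theme/templates"),  # checked first
            FileSystemLoader("CTFd/themes/core/templates"),            # fallback "parent"
        ]
    )
)

# If my-child-theme does not provide page.html, it is served from the core theme.
template = env.get_template("page.html")
```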
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `CTFd/config.py`
Content:
```
1 import configparser
2 import os
3 from distutils.util import strtobool
4
5
6 class EnvInterpolation(configparser.BasicInterpolation):
7 """Interpolation which expands environment variables in values."""
8
9 def before_get(self, parser, section, option, value, defaults):
10 value = super().before_get(parser, section, option, value, defaults)
11 envvar = os.getenv(option)
12 if value == "" and envvar:
13 return process_string_var(envvar)
14 else:
15 return value
16
17
18 def process_string_var(value):
19 if value == "":
20 return None
21
22 if value.isdigit():
23 return int(value)
24 elif value.replace(".", "", 1).isdigit():
25 return float(value)
26
27 try:
28 return bool(strtobool(value))
29 except ValueError:
30 return value
31
32
33 def process_boolean_str(value):
34 if type(value) is bool:
35 return value
36
37 if value is None:
38 return False
39
40 if value == "":
41 return None
42
43 return bool(strtobool(value))
44
45
46 def empty_str_cast(value, default=None):
47 if value == "":
48 return default
49 return value
50
51
52 def gen_secret_key():
53 # Attempt to read the secret from the secret file
54 # This will fail if the secret has not been written
55 try:
56 with open(".ctfd_secret_key", "rb") as secret:
57 key = secret.read()
58 except OSError:
59 key = None
60
61 if not key:
62 key = os.urandom(64)
63 # Attempt to write the secret file
64 # This will fail if the filesystem is read-only
65 try:
66 with open(".ctfd_secret_key", "wb") as secret:
67 secret.write(key)
68 secret.flush()
69 except OSError:
70 pass
71 return key
72
73
74 config_ini = configparser.ConfigParser(interpolation=EnvInterpolation())
75 config_ini.optionxform = str # Makes the key value case-insensitive
76 path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "config.ini")
77 config_ini.read(path)
78
79
80 # fmt: off
81 class ServerConfig(object):
82 SECRET_KEY: str = empty_str_cast(config_ini["server"]["SECRET_KEY"]) \
83 or gen_secret_key()
84
85 DATABASE_URL: str = empty_str_cast(config_ini["server"]["DATABASE_URL"]) \
86 or f"sqlite:///{os.path.dirname(os.path.abspath(__file__))}/ctfd.db"
87
88 REDIS_URL: str = empty_str_cast(config_ini["server"]["REDIS_URL"])
89
90 SQLALCHEMY_DATABASE_URI = DATABASE_URL
91 CACHE_REDIS_URL = REDIS_URL
92 if CACHE_REDIS_URL:
93 CACHE_TYPE: str = "redis"
94 else:
95 CACHE_TYPE: str = "filesystem"
96 CACHE_DIR: str = os.path.join(
97 os.path.dirname(__file__), os.pardir, ".data", "filesystem_cache"
98 )
99 # Override the threshold of cached values on the filesystem. The default is 500. Don't change unless you know what you're doing.
100 CACHE_THRESHOLD: int = 0
101
102 # === SECURITY ===
103 SESSION_COOKIE_HTTPONLY: bool = config_ini["security"].getboolean("SESSION_COOKIE_HTTPONLY", fallback=True)
104
105 SESSION_COOKIE_SAMESITE: str = empty_str_cast(config_ini["security"]["SESSION_COOKIE_SAMESITE"]) \
106 or "Lax"
107
108 PERMANENT_SESSION_LIFETIME: int = config_ini["security"].getint("PERMANENT_SESSION_LIFETIME") \
109 or 604800
110
111 """
112 TRUSTED_PROXIES:
113 Defines a set of regular expressions used for finding a user's IP address if the CTFd instance
114 is behind a proxy. If you are running a CTF and users are on the same network as you, you may choose to remove
115 some proxies from the list.
116
117 CTFd only uses IP addresses for cursory tracking purposes. It is ill-advised to do anything complicated based
118 solely on IP addresses unless you know what you are doing.
119 """
120 TRUSTED_PROXIES = [
121 r"^127\.0\.0\.1$",
122 # Remove the following proxies if you do not trust the local network
123 # For example if you are running a CTF on your laptop and the teams are
124 # all on the same network
125 r"^::1$",
126 r"^fc00:",
127 r"^10\.",
128 r"^172\.(1[6-9]|2[0-9]|3[0-1])\.",
129 r"^192\.168\.",
130 ]
131
132 # === EMAIL ===
133 MAILFROM_ADDR: str = config_ini["email"]["MAILFROM_ADDR"] \
134 or "[email protected]"
135
136 MAIL_SERVER: str = empty_str_cast(config_ini["email"]["MAIL_SERVER"])
137
138 MAIL_PORT: int = empty_str_cast(config_ini["email"]["MAIL_PORT"])
139
140 MAIL_USEAUTH: bool = process_boolean_str(config_ini["email"]["MAIL_USEAUTH"])
141
142 MAIL_USERNAME: str = empty_str_cast(config_ini["email"]["MAIL_USERNAME"])
143
144 MAIL_PASSWORD: str = empty_str_cast(config_ini["email"]["MAIL_PASSWORD"])
145
146 MAIL_TLS: bool = process_boolean_str(config_ini["email"]["MAIL_TLS"])
147
148 MAIL_SSL: bool = process_boolean_str(config_ini["email"]["MAIL_SSL"])
149
150 MAILSENDER_ADDR: str = empty_str_cast(config_ini["email"]["MAILSENDER_ADDR"])
151
152 MAILGUN_API_KEY: str = empty_str_cast(config_ini["email"]["MAILGUN_API_KEY"])
153
154 MAILGUN_BASE_URL: str = empty_str_cast(config_ini["email"]["MAILGUN_API_KEY"])
155
156 # === LOGS ===
157 LOG_FOLDER: str = empty_str_cast(config_ini["logs"]["LOG_FOLDER"]) \
158 or os.path.join(os.path.dirname(os.path.abspath(__file__)), "logs")
159
160 # === UPLOADS ===
161 UPLOAD_PROVIDER: str = empty_str_cast(config_ini["uploads"]["UPLOAD_PROVIDER"]) \
162 or "filesystem"
163
164 UPLOAD_FOLDER: str = empty_str_cast(config_ini["uploads"]["UPLOAD_FOLDER"]) \
165 or os.path.join(os.path.dirname(os.path.abspath(__file__)), "uploads")
166
167 if UPLOAD_PROVIDER == "s3":
168 AWS_ACCESS_KEY_ID: str = empty_str_cast(config_ini["uploads"]["AWS_ACCESS_KEY_ID"])
169
170 AWS_SECRET_ACCESS_KEY: str = empty_str_cast(config_ini["uploads"]["AWS_SECRET_ACCESS_KEY"])
171
172 AWS_S3_BUCKET: str = empty_str_cast(config_ini["uploads"]["AWS_S3_BUCKET"])
173
174 AWS_S3_ENDPOINT_URL: str = empty_str_cast(config_ini["uploads"]["AWS_S3_ENDPOINT_URL"])
175
176 # === OPTIONAL ===
177 REVERSE_PROXY: bool = process_boolean_str(empty_str_cast(config_ini["optional"]["REVERSE_PROXY"], default=False))
178
179 TEMPLATES_AUTO_RELOAD: bool = process_boolean_str(empty_str_cast(config_ini["optional"]["TEMPLATES_AUTO_RELOAD"], default=True))
180
181 THEME_FALLBACK: bool = process_boolean_str(empty_str_cast(config_ini["optional"]["THEME_FALLBACK"], default=False))
182
183 SQLALCHEMY_TRACK_MODIFICATIONS: bool = process_boolean_str(empty_str_cast(config_ini["optional"]["SQLALCHEMY_TRACK_MODIFICATIONS"], default=False))
184
185 SWAGGER_UI: bool = process_boolean_str(empty_str_cast(config_ini["optional"]["SWAGGER_UI"], default=False))
186
187 SWAGGER_UI_ENDPOINT: str = "/" if SWAGGER_UI else None
188
189 UPDATE_CHECK: bool = process_boolean_str(empty_str_cast(config_ini["optional"]["UPDATE_CHECK"], default=True))
190
191 APPLICATION_ROOT: str = empty_str_cast(config_ini["optional"]["APPLICATION_ROOT"], default="/")
192
193 SERVER_SENT_EVENTS: bool = process_boolean_str(empty_str_cast(config_ini["optional"]["SERVER_SENT_EVENTS"], default=True))
194
195 HTML_SANITIZATION: bool = process_boolean_str(empty_str_cast(config_ini["optional"]["HTML_SANITIZATION"], default=False))
196
197 if DATABASE_URL.startswith("sqlite") is False:
198 SQLALCHEMY_ENGINE_OPTIONS = {
199 "max_overflow": int(empty_str_cast(config_ini["optional"]["SQLALCHEMY_MAX_OVERFLOW"], default=20)), # noqa: E131
200 "pool_pre_ping": empty_str_cast(config_ini["optional"]["SQLALCHEMY_POOL_PRE_PING"], default=True), # noqa: E131
201 }
202
203 # === OAUTH ===
204 OAUTH_CLIENT_ID: str = empty_str_cast(config_ini["oauth"]["OAUTH_CLIENT_ID"])
205 OAUTH_CLIENT_SECRET: str = empty_str_cast(config_ini["oauth"]["OAUTH_CLIENT_SECRET"])
206 # fmt: on
207
208
209 class TestingConfig(ServerConfig):
210 SECRET_KEY = "AAAAAAAAAAAAAAAAAAAA"
211 PRESERVE_CONTEXT_ON_EXCEPTION = False
212 TESTING = True
213 DEBUG = True
214 SQLALCHEMY_DATABASE_URI = os.getenv("TESTING_DATABASE_URL") or "sqlite://"
215 MAIL_SERVER = os.getenv("TESTING_MAIL_SERVER")
216 SERVER_NAME = "localhost"
217 UPDATE_CHECK = False
218 REDIS_URL = None
219 CACHE_TYPE = "simple"
220 CACHE_THRESHOLD = 500
221 SAFE_MODE = True
222
223
224 # Actually initialize ServerConfig to allow us to add more attributes on
225 Config = ServerConfig()
226 for k, v in config_ini.items("extra"):
227 setattr(Config, k, process_string_var(v))
228
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/CTFd/config.py b/CTFd/config.py
--- a/CTFd/config.py
+++ b/CTFd/config.py
@@ -178,7 +178,7 @@
TEMPLATES_AUTO_RELOAD: bool = process_boolean_str(empty_str_cast(config_ini["optional"]["TEMPLATES_AUTO_RELOAD"], default=True))
- THEME_FALLBACK: bool = process_boolean_str(empty_str_cast(config_ini["optional"]["THEME_FALLBACK"], default=False))
+ THEME_FALLBACK: bool = process_boolean_str(empty_str_cast(config_ini["optional"]["THEME_FALLBACK"], default=True))
SQLALCHEMY_TRACK_MODIFICATIONS: bool = process_boolean_str(empty_str_cast(config_ini["optional"]["SQLALCHEMY_TRACK_MODIFICATIONS"], default=False))
|
{"golden_diff": "diff --git a/CTFd/config.py b/CTFd/config.py\n--- a/CTFd/config.py\n+++ b/CTFd/config.py\n@@ -178,7 +178,7 @@\n \n TEMPLATES_AUTO_RELOAD: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"TEMPLATES_AUTO_RELOAD\"], default=True))\n \n- THEME_FALLBACK: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"THEME_FALLBACK\"], default=False))\n+ THEME_FALLBACK: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"THEME_FALLBACK\"], default=True))\n \n SQLALCHEMY_TRACK_MODIFICATIONS: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"SQLALCHEMY_TRACK_MODIFICATIONS\"], default=False))\n", "issue": "Consider enabling THEME_FALLBACK by default\nIn order to improve theme development and usage, I think we should enable THEME_FALLBACK or perhaps create a way for themes to specify their parent theme. \r\n\r\nWordpress has a sort of similar idea: https://developer.wordpress.org/themes/advanced-topics/child-themes/\r\n\r\nI am not sure what downsides there could be with this but I think the potential gain of much easier themes is pretty compelling. \n", "before_files": [{"content": "import configparser\nimport os\nfrom distutils.util import strtobool\n\n\nclass EnvInterpolation(configparser.BasicInterpolation):\n \"\"\"Interpolation which expands environment variables in values.\"\"\"\n\n def before_get(self, parser, section, option, value, defaults):\n value = super().before_get(parser, section, option, value, defaults)\n envvar = os.getenv(option)\n if value == \"\" and envvar:\n return process_string_var(envvar)\n else:\n return value\n\n\ndef process_string_var(value):\n if value == \"\":\n return None\n\n if value.isdigit():\n return int(value)\n elif value.replace(\".\", \"\", 1).isdigit():\n return float(value)\n\n try:\n return bool(strtobool(value))\n except ValueError:\n return value\n\n\ndef process_boolean_str(value):\n if type(value) is bool:\n return value\n\n if value is None:\n return False\n\n if value == \"\":\n return None\n\n return bool(strtobool(value))\n\n\ndef empty_str_cast(value, default=None):\n if value == \"\":\n return default\n return value\n\n\ndef gen_secret_key():\n # Attempt to read the secret from the secret file\n # This will fail if the secret has not been written\n try:\n with open(\".ctfd_secret_key\", \"rb\") as secret:\n key = secret.read()\n except OSError:\n key = None\n\n if not key:\n key = os.urandom(64)\n # Attempt to write the secret file\n # This will fail if the filesystem is read-only\n try:\n with open(\".ctfd_secret_key\", \"wb\") as secret:\n secret.write(key)\n secret.flush()\n except OSError:\n pass\n return key\n\n\nconfig_ini = configparser.ConfigParser(interpolation=EnvInterpolation())\nconfig_ini.optionxform = str # Makes the key value case-insensitive\npath = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"config.ini\")\nconfig_ini.read(path)\n\n\n# fmt: off\nclass ServerConfig(object):\n SECRET_KEY: str = empty_str_cast(config_ini[\"server\"][\"SECRET_KEY\"]) \\\n or gen_secret_key()\n\n DATABASE_URL: str = empty_str_cast(config_ini[\"server\"][\"DATABASE_URL\"]) \\\n or f\"sqlite:///{os.path.dirname(os.path.abspath(__file__))}/ctfd.db\"\n\n REDIS_URL: str = empty_str_cast(config_ini[\"server\"][\"REDIS_URL\"])\n\n SQLALCHEMY_DATABASE_URI = DATABASE_URL\n CACHE_REDIS_URL = REDIS_URL\n if CACHE_REDIS_URL:\n CACHE_TYPE: str = \"redis\"\n else:\n CACHE_TYPE: str = \"filesystem\"\n CACHE_DIR: str = os.path.join(\n os.path.dirname(__file__), os.pardir, 
\".data\", \"filesystem_cache\"\n )\n # Override the threshold of cached values on the filesystem. The default is 500. Don't change unless you know what you're doing.\n CACHE_THRESHOLD: int = 0\n\n # === SECURITY ===\n SESSION_COOKIE_HTTPONLY: bool = config_ini[\"security\"].getboolean(\"SESSION_COOKIE_HTTPONLY\", fallback=True)\n\n SESSION_COOKIE_SAMESITE: str = empty_str_cast(config_ini[\"security\"][\"SESSION_COOKIE_SAMESITE\"]) \\\n or \"Lax\"\n\n PERMANENT_SESSION_LIFETIME: int = config_ini[\"security\"].getint(\"PERMANENT_SESSION_LIFETIME\") \\\n or 604800\n\n \"\"\"\n TRUSTED_PROXIES:\n Defines a set of regular expressions used for finding a user's IP address if the CTFd instance\n is behind a proxy. If you are running a CTF and users are on the same network as you, you may choose to remove\n some proxies from the list.\n\n CTFd only uses IP addresses for cursory tracking purposes. It is ill-advised to do anything complicated based\n solely on IP addresses unless you know what you are doing.\n \"\"\"\n TRUSTED_PROXIES = [\n r\"^127\\.0\\.0\\.1$\",\n # Remove the following proxies if you do not trust the local network\n # For example if you are running a CTF on your laptop and the teams are\n # all on the same network\n r\"^::1$\",\n r\"^fc00:\",\n r\"^10\\.\",\n r\"^172\\.(1[6-9]|2[0-9]|3[0-1])\\.\",\n r\"^192\\.168\\.\",\n ]\n\n # === EMAIL ===\n MAILFROM_ADDR: str = config_ini[\"email\"][\"MAILFROM_ADDR\"] \\\n or \"[email protected]\"\n\n MAIL_SERVER: str = empty_str_cast(config_ini[\"email\"][\"MAIL_SERVER\"])\n\n MAIL_PORT: int = empty_str_cast(config_ini[\"email\"][\"MAIL_PORT\"])\n\n MAIL_USEAUTH: bool = process_boolean_str(config_ini[\"email\"][\"MAIL_USEAUTH\"])\n\n MAIL_USERNAME: str = empty_str_cast(config_ini[\"email\"][\"MAIL_USERNAME\"])\n\n MAIL_PASSWORD: str = empty_str_cast(config_ini[\"email\"][\"MAIL_PASSWORD\"])\n\n MAIL_TLS: bool = process_boolean_str(config_ini[\"email\"][\"MAIL_TLS\"])\n\n MAIL_SSL: bool = process_boolean_str(config_ini[\"email\"][\"MAIL_SSL\"])\n\n MAILSENDER_ADDR: str = empty_str_cast(config_ini[\"email\"][\"MAILSENDER_ADDR\"])\n\n MAILGUN_API_KEY: str = empty_str_cast(config_ini[\"email\"][\"MAILGUN_API_KEY\"])\n\n MAILGUN_BASE_URL: str = empty_str_cast(config_ini[\"email\"][\"MAILGUN_API_KEY\"])\n\n # === LOGS ===\n LOG_FOLDER: str = empty_str_cast(config_ini[\"logs\"][\"LOG_FOLDER\"]) \\\n or os.path.join(os.path.dirname(os.path.abspath(__file__)), \"logs\")\n\n # === UPLOADS ===\n UPLOAD_PROVIDER: str = empty_str_cast(config_ini[\"uploads\"][\"UPLOAD_PROVIDER\"]) \\\n or \"filesystem\"\n\n UPLOAD_FOLDER: str = empty_str_cast(config_ini[\"uploads\"][\"UPLOAD_FOLDER\"]) \\\n or os.path.join(os.path.dirname(os.path.abspath(__file__)), \"uploads\")\n\n if UPLOAD_PROVIDER == \"s3\":\n AWS_ACCESS_KEY_ID: str = empty_str_cast(config_ini[\"uploads\"][\"AWS_ACCESS_KEY_ID\"])\n\n AWS_SECRET_ACCESS_KEY: str = empty_str_cast(config_ini[\"uploads\"][\"AWS_SECRET_ACCESS_KEY\"])\n\n AWS_S3_BUCKET: str = empty_str_cast(config_ini[\"uploads\"][\"AWS_S3_BUCKET\"])\n\n AWS_S3_ENDPOINT_URL: str = empty_str_cast(config_ini[\"uploads\"][\"AWS_S3_ENDPOINT_URL\"])\n\n # === OPTIONAL ===\n REVERSE_PROXY: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"REVERSE_PROXY\"], default=False))\n\n TEMPLATES_AUTO_RELOAD: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"TEMPLATES_AUTO_RELOAD\"], default=True))\n\n THEME_FALLBACK: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"THEME_FALLBACK\"], 
default=False))\n\n SQLALCHEMY_TRACK_MODIFICATIONS: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"SQLALCHEMY_TRACK_MODIFICATIONS\"], default=False))\n\n SWAGGER_UI: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"SWAGGER_UI\"], default=False))\n\n SWAGGER_UI_ENDPOINT: str = \"/\" if SWAGGER_UI else None\n\n UPDATE_CHECK: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"UPDATE_CHECK\"], default=True))\n\n APPLICATION_ROOT: str = empty_str_cast(config_ini[\"optional\"][\"APPLICATION_ROOT\"], default=\"/\")\n\n SERVER_SENT_EVENTS: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"SERVER_SENT_EVENTS\"], default=True))\n\n HTML_SANITIZATION: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"HTML_SANITIZATION\"], default=False))\n\n if DATABASE_URL.startswith(\"sqlite\") is False:\n SQLALCHEMY_ENGINE_OPTIONS = {\n \"max_overflow\": int(empty_str_cast(config_ini[\"optional\"][\"SQLALCHEMY_MAX_OVERFLOW\"], default=20)), # noqa: E131\n \"pool_pre_ping\": empty_str_cast(config_ini[\"optional\"][\"SQLALCHEMY_POOL_PRE_PING\"], default=True), # noqa: E131\n }\n\n # === OAUTH ===\n OAUTH_CLIENT_ID: str = empty_str_cast(config_ini[\"oauth\"][\"OAUTH_CLIENT_ID\"])\n OAUTH_CLIENT_SECRET: str = empty_str_cast(config_ini[\"oauth\"][\"OAUTH_CLIENT_SECRET\"])\n# fmt: on\n\n\nclass TestingConfig(ServerConfig):\n SECRET_KEY = \"AAAAAAAAAAAAAAAAAAAA\"\n PRESERVE_CONTEXT_ON_EXCEPTION = False\n TESTING = True\n DEBUG = True\n SQLALCHEMY_DATABASE_URI = os.getenv(\"TESTING_DATABASE_URL\") or \"sqlite://\"\n MAIL_SERVER = os.getenv(\"TESTING_MAIL_SERVER\")\n SERVER_NAME = \"localhost\"\n UPDATE_CHECK = False\n REDIS_URL = None\n CACHE_TYPE = \"simple\"\n CACHE_THRESHOLD = 500\n SAFE_MODE = True\n\n\n# Actually initialize ServerConfig to allow us to add more attributes on\nConfig = ServerConfig()\nfor k, v in config_ini.items(\"extra\"):\n setattr(Config, k, process_string_var(v))\n", "path": "CTFd/config.py"}], "after_files": [{"content": "import configparser\nimport os\nfrom distutils.util import strtobool\n\n\nclass EnvInterpolation(configparser.BasicInterpolation):\n \"\"\"Interpolation which expands environment variables in values.\"\"\"\n\n def before_get(self, parser, section, option, value, defaults):\n value = super().before_get(parser, section, option, value, defaults)\n envvar = os.getenv(option)\n if value == \"\" and envvar:\n return process_string_var(envvar)\n else:\n return value\n\n\ndef process_string_var(value):\n if value == \"\":\n return None\n\n if value.isdigit():\n return int(value)\n elif value.replace(\".\", \"\", 1).isdigit():\n return float(value)\n\n try:\n return bool(strtobool(value))\n except ValueError:\n return value\n\n\ndef process_boolean_str(value):\n if type(value) is bool:\n return value\n\n if value is None:\n return False\n\n if value == \"\":\n return None\n\n return bool(strtobool(value))\n\n\ndef empty_str_cast(value, default=None):\n if value == \"\":\n return default\n return value\n\n\ndef gen_secret_key():\n # Attempt to read the secret from the secret file\n # This will fail if the secret has not been written\n try:\n with open(\".ctfd_secret_key\", \"rb\") as secret:\n key = secret.read()\n except OSError:\n key = None\n\n if not key:\n key = os.urandom(64)\n # Attempt to write the secret file\n # This will fail if the filesystem is read-only\n try:\n with open(\".ctfd_secret_key\", \"wb\") as secret:\n secret.write(key)\n secret.flush()\n except OSError:\n pass\n 
return key\n\n\nconfig_ini = configparser.ConfigParser(interpolation=EnvInterpolation())\nconfig_ini.optionxform = str # Makes the key value case-insensitive\npath = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"config.ini\")\nconfig_ini.read(path)\n\n\n# fmt: off\nclass ServerConfig(object):\n SECRET_KEY: str = empty_str_cast(config_ini[\"server\"][\"SECRET_KEY\"]) \\\n or gen_secret_key()\n\n DATABASE_URL: str = empty_str_cast(config_ini[\"server\"][\"DATABASE_URL\"]) \\\n or f\"sqlite:///{os.path.dirname(os.path.abspath(__file__))}/ctfd.db\"\n\n REDIS_URL: str = empty_str_cast(config_ini[\"server\"][\"REDIS_URL\"])\n\n SQLALCHEMY_DATABASE_URI = DATABASE_URL\n CACHE_REDIS_URL = REDIS_URL\n if CACHE_REDIS_URL:\n CACHE_TYPE: str = \"redis\"\n else:\n CACHE_TYPE: str = \"filesystem\"\n CACHE_DIR: str = os.path.join(\n os.path.dirname(__file__), os.pardir, \".data\", \"filesystem_cache\"\n )\n # Override the threshold of cached values on the filesystem. The default is 500. Don't change unless you know what you're doing.\n CACHE_THRESHOLD: int = 0\n\n # === SECURITY ===\n SESSION_COOKIE_HTTPONLY: bool = config_ini[\"security\"].getboolean(\"SESSION_COOKIE_HTTPONLY\", fallback=True)\n\n SESSION_COOKIE_SAMESITE: str = empty_str_cast(config_ini[\"security\"][\"SESSION_COOKIE_SAMESITE\"]) \\\n or \"Lax\"\n\n PERMANENT_SESSION_LIFETIME: int = config_ini[\"security\"].getint(\"PERMANENT_SESSION_LIFETIME\") \\\n or 604800\n\n \"\"\"\n TRUSTED_PROXIES:\n Defines a set of regular expressions used for finding a user's IP address if the CTFd instance\n is behind a proxy. If you are running a CTF and users are on the same network as you, you may choose to remove\n some proxies from the list.\n\n CTFd only uses IP addresses for cursory tracking purposes. 
It is ill-advised to do anything complicated based\n solely on IP addresses unless you know what you are doing.\n \"\"\"\n TRUSTED_PROXIES = [\n r\"^127\\.0\\.0\\.1$\",\n # Remove the following proxies if you do not trust the local network\n # For example if you are running a CTF on your laptop and the teams are\n # all on the same network\n r\"^::1$\",\n r\"^fc00:\",\n r\"^10\\.\",\n r\"^172\\.(1[6-9]|2[0-9]|3[0-1])\\.\",\n r\"^192\\.168\\.\",\n ]\n\n # === EMAIL ===\n MAILFROM_ADDR: str = config_ini[\"email\"][\"MAILFROM_ADDR\"] \\\n or \"[email protected]\"\n\n MAIL_SERVER: str = empty_str_cast(config_ini[\"email\"][\"MAIL_SERVER\"])\n\n MAIL_PORT: int = empty_str_cast(config_ini[\"email\"][\"MAIL_PORT\"])\n\n MAIL_USEAUTH: bool = process_boolean_str(config_ini[\"email\"][\"MAIL_USEAUTH\"])\n\n MAIL_USERNAME: str = empty_str_cast(config_ini[\"email\"][\"MAIL_USERNAME\"])\n\n MAIL_PASSWORD: str = empty_str_cast(config_ini[\"email\"][\"MAIL_PASSWORD\"])\n\n MAIL_TLS: bool = process_boolean_str(config_ini[\"email\"][\"MAIL_TLS\"])\n\n MAIL_SSL: bool = process_boolean_str(config_ini[\"email\"][\"MAIL_SSL\"])\n\n MAILSENDER_ADDR: str = empty_str_cast(config_ini[\"email\"][\"MAILSENDER_ADDR\"])\n\n MAILGUN_API_KEY: str = empty_str_cast(config_ini[\"email\"][\"MAILGUN_API_KEY\"])\n\n MAILGUN_BASE_URL: str = empty_str_cast(config_ini[\"email\"][\"MAILGUN_API_KEY\"])\n\n # === LOGS ===\n LOG_FOLDER: str = empty_str_cast(config_ini[\"logs\"][\"LOG_FOLDER\"]) \\\n or os.path.join(os.path.dirname(os.path.abspath(__file__)), \"logs\")\n\n # === UPLOADS ===\n UPLOAD_PROVIDER: str = empty_str_cast(config_ini[\"uploads\"][\"UPLOAD_PROVIDER\"]) \\\n or \"filesystem\"\n\n UPLOAD_FOLDER: str = empty_str_cast(config_ini[\"uploads\"][\"UPLOAD_FOLDER\"]) \\\n or os.path.join(os.path.dirname(os.path.abspath(__file__)), \"uploads\")\n\n if UPLOAD_PROVIDER == \"s3\":\n AWS_ACCESS_KEY_ID: str = empty_str_cast(config_ini[\"uploads\"][\"AWS_ACCESS_KEY_ID\"])\n\n AWS_SECRET_ACCESS_KEY: str = empty_str_cast(config_ini[\"uploads\"][\"AWS_SECRET_ACCESS_KEY\"])\n\n AWS_S3_BUCKET: str = empty_str_cast(config_ini[\"uploads\"][\"AWS_S3_BUCKET\"])\n\n AWS_S3_ENDPOINT_URL: str = empty_str_cast(config_ini[\"uploads\"][\"AWS_S3_ENDPOINT_URL\"])\n\n # === OPTIONAL ===\n REVERSE_PROXY: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"REVERSE_PROXY\"], default=False))\n\n TEMPLATES_AUTO_RELOAD: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"TEMPLATES_AUTO_RELOAD\"], default=True))\n\n THEME_FALLBACK: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"THEME_FALLBACK\"], default=True))\n\n SQLALCHEMY_TRACK_MODIFICATIONS: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"SQLALCHEMY_TRACK_MODIFICATIONS\"], default=False))\n\n SWAGGER_UI: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"SWAGGER_UI\"], default=False))\n\n SWAGGER_UI_ENDPOINT: str = \"/\" if SWAGGER_UI else None\n\n UPDATE_CHECK: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"UPDATE_CHECK\"], default=True))\n\n APPLICATION_ROOT: str = empty_str_cast(config_ini[\"optional\"][\"APPLICATION_ROOT\"], default=\"/\")\n\n SERVER_SENT_EVENTS: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"SERVER_SENT_EVENTS\"], default=True))\n\n HTML_SANITIZATION: bool = process_boolean_str(empty_str_cast(config_ini[\"optional\"][\"HTML_SANITIZATION\"], default=False))\n\n if DATABASE_URL.startswith(\"sqlite\") is False:\n SQLALCHEMY_ENGINE_OPTIONS = 
{\n \"max_overflow\": int(empty_str_cast(config_ini[\"optional\"][\"SQLALCHEMY_MAX_OVERFLOW\"], default=20)), # noqa: E131\n \"pool_pre_ping\": empty_str_cast(config_ini[\"optional\"][\"SQLALCHEMY_POOL_PRE_PING\"], default=True), # noqa: E131\n }\n\n # === OAUTH ===\n OAUTH_CLIENT_ID: str = empty_str_cast(config_ini[\"oauth\"][\"OAUTH_CLIENT_ID\"])\n OAUTH_CLIENT_SECRET: str = empty_str_cast(config_ini[\"oauth\"][\"OAUTH_CLIENT_SECRET\"])\n# fmt: on\n\n\nclass TestingConfig(ServerConfig):\n SECRET_KEY = \"AAAAAAAAAAAAAAAAAAAA\"\n PRESERVE_CONTEXT_ON_EXCEPTION = False\n TESTING = True\n DEBUG = True\n SQLALCHEMY_DATABASE_URI = os.getenv(\"TESTING_DATABASE_URL\") or \"sqlite://\"\n MAIL_SERVER = os.getenv(\"TESTING_MAIL_SERVER\")\n SERVER_NAME = \"localhost\"\n UPDATE_CHECK = False\n REDIS_URL = None\n CACHE_TYPE = \"simple\"\n CACHE_THRESHOLD = 500\n SAFE_MODE = True\n\n\n# Actually initialize ServerConfig to allow us to add more attributes on\nConfig = ServerConfig()\nfor k, v in config_ini.items(\"extra\"):\n setattr(Config, k, process_string_var(v))\n", "path": "CTFd/config.py"}]}
| 2,920 | 162 |
gh_patches_debug_12989
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-1818
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CKV_AZURE_23 - Reports incorrect fail when using azurerm_mssql_server_extended_auditing_policy
Check: CKV_AZURE_23: "Ensure that 'Auditing' is set to 'On' for SQL servers"
FAILED for resource: azurerm_mssql_server.server
File: /sql_server.tf:19-37
Guide: https://docs.bridgecrew.io/docs/bc_azr_logging_2
19 | resource "azurerm_mssql_server" "server" {
20 | name = "${var.org_short}-${var.env_short}-${var.loc_short}-${var.service}-mssql"
21 | location = var.location
22 | resource_group_name = azurerm_resource_group.server.name
23 | version = var.sql_server_server_version
24 | administrator_login = "${var.org_short}-${var.env_short}-${var.loc_short}-${var.service}-mssql-admin"
25 | administrator_login_password = random_password.server.result
26 | minimum_tls_version = var.min_tls_version
27 | public_network_access_enabled = false
28 | azuread_administrator {
29 | login_username = "AzureAD Admin"
30 | object_id = data.azuread_group.sqldba.object_id
31 | tenant_id = data.azurerm_client_config.current.tenant_id
32 | }
33 | identity {
34 | type = "SystemAssigned"
35 | }
36 | tags = local.default_tags
37 | }
When using Checkov v1.0.861
However
The auditing policy is set by the separate resource azurerm_mssql_server_extended_auditing_policy
```
resource "azurerm_mssql_server_extended_auditing_policy" "server" {
server_id = azurerm_mssql_server.server.id
log_monitoring_enabled = true
storage_endpoint = azurerm_storage_account.server.primary_blob_endpoint
storage_account_access_key = azurerm_storage_account.server.primary_access_key
storage_account_access_key_is_secondary = true
retention_in_days = var.log_rention_policy_days
}
```
--- END ISSUE ---
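To make the expected behaviour concrete, the framework-free sketch below shows the pairing logic the issue describes: a server should pass if it either carries an inline `extended_auditing_policy` block or is referenced by a separate `azurerm_mssql_server_extended_auditing_policy` resource through `server_id`. This only illustrates the detection logic; it is not Checkov's check implementation, and the simplified resource dictionaries are an assumption made for the example.
```python
# Standalone illustration of the pairing logic from the issue (not Checkov code).
# Each resource is modelled as a simplified dict; real Terraform graphs are richer.
def unaudited_mssql_servers(resources):
    audited = set()
    for res in resources:
        if res["type"] == "azurerm_mssql_server_extended_auditing_policy":
            # e.g. "azurerm_mssql_server.server.id" -> server name "server"
            parts = res["config"].get("server_id", "").split(".")
            if len(parts) >= 2 and parts[0] == "azurerm_mssql_server":
                audited.add(parts[1])
    failed = []
    for res in resources:
        if res["type"] in ("azurerm_sql_server", "azurerm_mssql_server"):
            has_inline = "extended_auditing_policy" in res["config"]
            if not has_inline and res["name"] not in audited:
                failed.append(res["name"])
    return failed


resources = [
    {"type": "azurerm_mssql_server", "name": "server", "config": {}},
    {
        "type": "azurerm_mssql_server_extended_auditing_policy",
        "name": "server",
        "config": {"server_id": "azurerm_mssql_server.server.id"},
    },
]
print(unaudited_mssql_servers(resources))  # [] -> the server counts as audited
```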
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/terraform/checks/resource/azure/SQLServerAuditingEnabled.py`
Content:
```
1 from checkov.common.models.enums import CheckCategories
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3 from checkov.common.models.consts import ANY_VALUE
4
5
6 class SQLServerAuditingEnabled(BaseResourceValueCheck):
7 def __init__(self):
8 name = "Ensure that 'Auditing' is set to 'On' for SQL servers"
9 id = "CKV_AZURE_23"
10 supported_resources = ['azurerm_sql_server', 'azurerm_mssql_server']
11 categories = [CheckCategories.LOGGING]
12 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
13
14 def get_inspected_key(self):
15 return 'extended_auditing_policy'
16
17 def get_expected_value(self):
18 return ANY_VALUE
19
20
21 check = SQLServerAuditingEnabled()
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/terraform/checks/resource/azure/SQLServerAuditingEnabled.py b/checkov/terraform/checks/resource/azure/SQLServerAuditingEnabled.py
deleted file mode 100644
--- a/checkov/terraform/checks/resource/azure/SQLServerAuditingEnabled.py
+++ /dev/null
@@ -1,21 +0,0 @@
-from checkov.common.models.enums import CheckCategories
-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
-from checkov.common.models.consts import ANY_VALUE
-
-
-class SQLServerAuditingEnabled(BaseResourceValueCheck):
- def __init__(self):
- name = "Ensure that 'Auditing' is set to 'On' for SQL servers"
- id = "CKV_AZURE_23"
- supported_resources = ['azurerm_sql_server', 'azurerm_mssql_server']
- categories = [CheckCategories.LOGGING]
- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
-
- def get_inspected_key(self):
- return 'extended_auditing_policy'
-
- def get_expected_value(self):
- return ANY_VALUE
-
-
-check = SQLServerAuditingEnabled()
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/SQLServerAuditingEnabled.py b/checkov/terraform/checks/resource/azure/SQLServerAuditingEnabled.py\ndeleted file mode 100644\n--- a/checkov/terraform/checks/resource/azure/SQLServerAuditingEnabled.py\n+++ /dev/null\n@@ -1,21 +0,0 @@\n-from checkov.common.models.enums import CheckCategories\n-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n-from checkov.common.models.consts import ANY_VALUE\n-\n-\n-class SQLServerAuditingEnabled(BaseResourceValueCheck):\n- def __init__(self):\n- name = \"Ensure that 'Auditing' is set to 'On' for SQL servers\"\n- id = \"CKV_AZURE_23\"\n- supported_resources = ['azurerm_sql_server', 'azurerm_mssql_server']\n- categories = [CheckCategories.LOGGING]\n- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n-\n- def get_inspected_key(self):\n- return 'extended_auditing_policy'\n-\n- def get_expected_value(self):\n- return ANY_VALUE\n-\n-\n-check = SQLServerAuditingEnabled()\n", "issue": "CKV_AZURE_23 - Reports incorrect fail when using azurerm_mssql_server_extended_auditing_policy\nCheck: CKV_AZURE_23: \"Ensure that 'Auditing' is set to 'On' for SQL servers\"\r\n FAILED for resource: azurerm_mssql_server.server\r\n File: /sql_server.tf:19-37\r\n Guide: https://docs.bridgecrew.io/docs/bc_azr_logging_2\r\n\r\n 19 | resource \"azurerm_mssql_server\" \"server\" {\r\n 20 | name = \"${var.org_short}-${var.env_short}-${var.loc_short}-${var.service}-mssql\"\r\n 21 | location = var.location\r\n 22 | resource_group_name = azurerm_resource_group.server.name\r\n 23 | version = var.sql_server_server_version\r\n 24 | administrator_login = \"${var.org_short}-${var.env_short}-${var.loc_short}-${var.service}-mssql-admin\"\r\n 25 | administrator_login_password = random_password.server.result\r\n 26 | minimum_tls_version = var.min_tls_version\r\n 27 | public_network_access_enabled = false\r\n 28 | azuread_administrator {\r\n 29 | login_username = \"AzureAD Admin\"\r\n 30 | object_id = data.azuread_group.sqldba.object_id\r\n 31 | tenant_id = data.azurerm_client_config.current.tenant_id\r\n 32 | }\r\n 33 | identity {\r\n 34 | type = \"SystemAssigned\"\r\n 35 | }\r\n 36 | tags = local.default_tags\r\n 37 | }\r\n\r\nWhen using Checkov v1.0.861\r\n\r\nHowever\r\nThe auditing policy is set by the separate resource azurerm_mssql_server_extended_auditing_policy\r\n\r\n```\r\nresource \"azurerm_mssql_server_extended_auditing_policy\" \"server\" {\r\n server_id = azurerm_mssql_server.server.id\r\n log_monitoring_enabled = true\r\n storage_endpoint = azurerm_storage_account.server.primary_blob_endpoint\r\n storage_account_access_key = azurerm_storage_account.server.primary_access_key\r\n storage_account_access_key_is_secondary = true\r\n retention_in_days = var.log_rention_policy_days\r\n}\r\n```\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom checkov.common.models.consts import ANY_VALUE\n\n\nclass SQLServerAuditingEnabled(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure that 'Auditing' is set to 'On' for SQL servers\"\n id = \"CKV_AZURE_23\"\n supported_resources = ['azurerm_sql_server', 'azurerm_mssql_server']\n categories = [CheckCategories.LOGGING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def 
get_inspected_key(self):\n return 'extended_auditing_policy'\n\n def get_expected_value(self):\n return ANY_VALUE\n\n\ncheck = SQLServerAuditingEnabled()\n", "path": "checkov/terraform/checks/resource/azure/SQLServerAuditingEnabled.py"}], "after_files": [{"content": null, "path": "checkov/terraform/checks/resource/azure/SQLServerAuditingEnabled.py"}]}
| 1,012 | 276 |
gh_patches_debug_40043
|
rasdani/github-patches
|
git_diff
|
Project-MONAI__MONAI-5965
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
overriding iteration_log and epoch_log in StatsHandler
Currently, they are booleans but we could extend them to `Union[bool, Callable]` where if arg is callable then it could be used for event filtering:
```python
def attach(self, engine):
if self.iteration_log and not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):
event_filter = self.iteration_log if callable(self.iteration_log) else None
engine.add_event_handler(
Events.ITERATION_COMPLETED(event_filter=event_filter), self.iteration_completed
)
if self.epoch_log and not engine.has_event_handler(self.epoch_completed, Events.EPOCH_COMPLETED):
event_filter = self.epoch_log if callable(self.epoch_log) else None
engine.add_event_handler(
Events.EPOCH_COMPLETED(event_filter=event_filter), self.epoch_completed
)
```
_Originally posted by @vfdev-5 in https://github.com/Project-MONAI/MONAI/discussions/5958#discussioncomment-4912997_
--- END ISSUE ---
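For reference, this is roughly how a caller could use the extended arguments once they accept callables; the filter follows Ignite's `event_filter` convention of `(engine, event) -> bool`. The training loop below is a toy stand-in, and `iteration_log` accepting a callable is the proposed behaviour, not what the current code in this file supports.
```python
# Hypothetical usage once iteration_log/epoch_log may be callables (proposed change).
import logging
import sys

import torch
from ignite.engine import Engine

from monai.handlers import StatsHandler

logging.basicConfig(stream=sys.stdout, level=logging.INFO)


def train_step(engine, batch):
    # decollated-style output; the default output_transform reads the loss from output[0]
    return [torch.tensor(0.5)]


trainer = Engine(train_step)


def every_other_iteration(engine, event):
    # Ignite event_filter signature: (engine, event) -> bool
    return event % 2 == 0


# With the proposed change this would attach the handler via
# Events.ITERATION_COMPLETED(event_filter=every_other_iteration).
StatsHandler(name="train_stats", iteration_log=every_other_iteration).attach(trainer)

trainer.run(range(8), max_epochs=2)
```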
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `monai/handlers/stats_handler.py`
Content:
```
1 # Copyright (c) MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 from __future__ import annotations
13
14 import logging
15 import warnings
16 from collections.abc import Callable, Sequence
17 from typing import TYPE_CHECKING, Any
18
19 import torch
20
21 from monai.config import IgniteInfo
22 from monai.utils import is_scalar, min_version, optional_import
23
24 Events, _ = optional_import("ignite.engine", IgniteInfo.OPT_IMPORT_VERSION, min_version, "Events")
25 if TYPE_CHECKING:
26 from ignite.engine import Engine
27 else:
28 Engine, _ = optional_import(
29 "ignite.engine", IgniteInfo.OPT_IMPORT_VERSION, min_version, "Engine", as_type="decorator"
30 )
31
32 DEFAULT_KEY_VAL_FORMAT = "{}: {:.4f} "
33 DEFAULT_TAG = "Loss"
34
35
36 class StatsHandler:
37 """
38 StatsHandler defines a set of Ignite Event-handlers for all the log printing logics.
39 It can be used for any Ignite Engine(trainer, validator and evaluator).
40 And it can support logging for epoch level and iteration level with pre-defined loggers.
41
42 Note that if `name` arg is None, will leverage `engine.logger` as default logger directly, otherwise,
43 get logger from `logging.getLogger(name)`, we can setup a logger outside first with the same `name`.
44 As the default log level of `RootLogger` is `WARNING`, may need to call
45 `logging.basicConfig(stream=sys.stdout, level=logging.INFO)` before running this handler to enable
46 the stats logging.
47
48 Default behaviors:
49 - When EPOCH_COMPLETED, logs ``engine.state.metrics`` using ``self.logger``.
50 - When ITERATION_COMPLETED, logs
51 ``self.output_transform(engine.state.output)`` using ``self.logger``.
52
53 Usage example::
54
55 logging.basicConfig(stream=sys.stdout, level=logging.INFO)
56
57 trainer = SupervisedTrainer(...)
58 StatsHandler(name="train_stats").attach(trainer)
59
60 trainer.run()
61
62 More details of example is available in the tutorial:
63 https://github.com/Project-MONAI/tutorials/blob/master/modules/engines/unet_training_dict.py.
64
65 """
66
67 def __init__(
68 self,
69 iteration_log: bool = True,
70 epoch_log: bool = True,
71 epoch_print_logger: Callable[[Engine], Any] | None = None,
72 iteration_print_logger: Callable[[Engine], Any] | None = None,
73 output_transform: Callable = lambda x: x[0],
74 global_epoch_transform: Callable = lambda x: x,
75 state_attributes: Sequence[str] | None = None,
76 name: str | None = None,
77 tag_name: str = DEFAULT_TAG,
78 key_var_format: str = DEFAULT_KEY_VAL_FORMAT,
79 ) -> None:
80 """
81
82 Args:
83 iteration_log: whether to log data when iteration completed, default to `True`.
84 epoch_log: whether to log data when epoch completed, default to `True`.
85 epoch_print_logger: customized callable printer for epoch level logging.
86 Must accept parameter "engine", use default printer if None.
87 iteration_print_logger: customized callable printer for iteration level logging.
88 Must accept parameter "engine", use default printer if None.
89 output_transform: a callable that is used to transform the
90 ``ignite.engine.state.output`` into a scalar to print, or a dictionary of {key: scalar}.
91 In the latter case, the output string will be formatted as key: value.
92 By default this value logging happens when every iteration completed.
93 The default behavior is to print loss from output[0] as output is a decollated list
94 and we replicated loss value for every item of the decollated list.
95 `engine.state` and `output_transform` inherit from the ignite concept:
96 https://pytorch.org/ignite/concepts.html#state, explanation and usage example are in the tutorial:
97 https://github.com/Project-MONAI/tutorials/blob/master/modules/batch_output_transform.ipynb.
98 global_epoch_transform: a callable that is used to customize global epoch number.
99 For example, in evaluation, the evaluator engine might want to print synced epoch number
100 with the trainer engine.
101 state_attributes: expected attributes from `engine.state`, if provided, will extract them
102 when epoch completed.
103 name: identifier of `logging.logger` to use, if None, defaulting to ``engine.logger``.
104 tag_name: when iteration output is a scalar, tag_name is used to print
105 tag_name: scalar_value to logger. Defaults to ``'Loss'``.
106 key_var_format: a formatting string to control the output string format of key: value.
107
108 """
109
110 self.iteration_log = iteration_log
111 self.epoch_log = epoch_log
112 self.epoch_print_logger = epoch_print_logger
113 self.iteration_print_logger = iteration_print_logger
114 self.output_transform = output_transform
115 self.global_epoch_transform = global_epoch_transform
116 self.state_attributes = state_attributes
117 self.tag_name = tag_name
118 self.key_var_format = key_var_format
119 self.logger = logging.getLogger(name) # if `name` is None, will default to `engine.logger` when attached
120 self.name = name
121
122 def attach(self, engine: Engine) -> None:
123 """
124 Register a set of Ignite Event-Handlers to a specified Ignite engine.
125
126 Args:
127 engine: Ignite Engine, it can be a trainer, validator or evaluator.
128
129 """
130 if self.name is None:
131 self.logger = engine.logger
132 if self.logger.getEffectiveLevel() > logging.INFO or logging.root.getEffectiveLevel() > logging.INFO:
133 warnings.warn(
134 "the effective log level of engine logger or RootLogger is higher than INFO, may not record log,"
135 " please call `logging.basicConfig(stream=sys.stdout, level=logging.INFO)` to enable it."
136 )
137 if self.iteration_log and not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):
138 engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)
139 if self.epoch_log and not engine.has_event_handler(self.epoch_completed, Events.EPOCH_COMPLETED):
140 engine.add_event_handler(Events.EPOCH_COMPLETED, self.epoch_completed)
141 if not engine.has_event_handler(self.exception_raised, Events.EXCEPTION_RAISED):
142 engine.add_event_handler(Events.EXCEPTION_RAISED, self.exception_raised)
143
144 def epoch_completed(self, engine: Engine) -> None:
145 """
146 Handler for train or validation/evaluation epoch completed Event.
147 Print epoch level log, default values are from Ignite `engine.state.metrics` dict.
148
149 Args:
150 engine: Ignite Engine, it can be a trainer, validator or evaluator.
151
152 """
153 if self.epoch_print_logger is not None:
154 self.epoch_print_logger(engine)
155 else:
156 self._default_epoch_print(engine)
157
158 def iteration_completed(self, engine: Engine) -> None:
159 """
160 Handler for train or validation/evaluation iteration completed Event.
161 Print iteration level log, default values are from Ignite `engine.state.output`.
162
163 Args:
164 engine: Ignite Engine, it can be a trainer, validator or evaluator.
165
166 """
167 if self.iteration_print_logger is not None:
168 self.iteration_print_logger(engine)
169 else:
170 self._default_iteration_print(engine)
171
172 def exception_raised(self, _engine: Engine, e: Exception) -> None:
173 """
174 Handler for train or validation/evaluation exception raised Event.
175 Print the exception information and traceback. This callback may be skipped because the logic
176 with Ignite can only trigger the first attached handler for `EXCEPTION_RAISED` event.
177
178 Args:
179 _engine: Ignite Engine, unused argument.
180 e: the exception caught in Ignite during engine.run().
181
182 """
183 self.logger.exception(f"Exception: {e}")
184 raise e
185
186 def _default_epoch_print(self, engine: Engine) -> None:
187 """
188 Execute epoch level log operation.
189 Default to print the values from Ignite `engine.state.metrics` dict and
190 print the values of specified attributes of `engine.state`.
191
192 Args:
193 engine: Ignite Engine, it can be a trainer, validator or evaluator.
194
195 """
196 current_epoch = self.global_epoch_transform(engine.state.epoch)
197
198 prints_dict = engine.state.metrics
199 if prints_dict is not None and len(prints_dict) > 0:
200 out_str = f"Epoch[{current_epoch}] Metrics -- "
201 for name in sorted(prints_dict):
202 value = prints_dict[name]
203 out_str += self.key_var_format.format(name, value) if is_scalar(value) else f"{name}: {str(value)}"
204 self.logger.info(out_str)
205
206 if (
207 hasattr(engine.state, "key_metric_name")
208 and hasattr(engine.state, "best_metric")
209 and hasattr(engine.state, "best_metric_epoch")
210 and engine.state.key_metric_name is not None
211 ):
212 out_str = f"Key metric: {engine.state.key_metric_name} "
213 out_str += f"best value: {engine.state.best_metric} "
214 out_str += f"at epoch: {engine.state.best_metric_epoch}"
215 self.logger.info(out_str)
216
217 if self.state_attributes is not None and len(self.state_attributes) > 0:
218 out_str = "State values: "
219 for attr in self.state_attributes:
220 out_str += f"{attr}: {getattr(engine.state, attr, None)} "
221 self.logger.info(out_str)
222
223 def _default_iteration_print(self, engine: Engine) -> None:
224 """
225 Execute iteration log operation based on Ignite `engine.state.output` data.
226 Print the values from `self.output_transform(engine.state.output)`.
227 Since `engine.state.output` is a decollated list and we replicated the loss value for every item
228 of the decollated list, the default behavior is to print the loss from `output[0]`.
229
230 Args:
231 engine: Ignite Engine, it can be a trainer, validator or evaluator.
232
233 """
234 loss = self.output_transform(engine.state.output)
235 if loss is None:
236 return # no printing if the output is empty
237
238 out_str = ""
239 if isinstance(loss, dict): # print dictionary items
240 for name in sorted(loss):
241 value = loss[name]
242 if not is_scalar(value):
243 warnings.warn(
244 "ignoring non-scalar output in StatsHandler,"
245 " make sure `output_transform(engine.state.output)` returns"
246 " a scalar or dictionary of key and scalar pairs to avoid this warning."
247 " {}:{}".format(name, type(value))
248 )
249 continue # not printing multi dimensional output
250 out_str += self.key_var_format.format(name, value.item() if isinstance(value, torch.Tensor) else value)
251 elif is_scalar(loss): # not printing multi dimensional output
252 out_str += self.key_var_format.format(
253 self.tag_name, loss.item() if isinstance(loss, torch.Tensor) else loss
254 )
255 else:
256 warnings.warn(
257 "ignoring non-scalar output in StatsHandler,"
258 " make sure `output_transform(engine.state.output)` returns"
259 " a scalar or a dictionary of key and scalar pairs to avoid this warning."
260 " {}".format(type(loss))
261 )
262
263 if not out_str:
264 return # no value to print
265
266 num_iterations = engine.state.epoch_length
267 current_iteration = engine.state.iteration
268 if num_iterations is not None:
269 current_iteration = (current_iteration - 1) % num_iterations + 1
270 current_epoch = engine.state.epoch
271 num_epochs = engine.state.max_epochs
272
273 base_str = f"Epoch: {current_epoch}/{num_epochs}, Iter: {current_iteration}/{num_iterations} --"
274
275 self.logger.info(" ".join([base_str, out_str]))
276
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/monai/handlers/stats_handler.py b/monai/handlers/stats_handler.py
--- a/monai/handlers/stats_handler.py
+++ b/monai/handlers/stats_handler.py
@@ -66,8 +66,8 @@
def __init__(
self,
- iteration_log: bool = True,
- epoch_log: bool = True,
+ iteration_log: bool | Callable[[Engine, int], bool] = True,
+ epoch_log: bool | Callable[[Engine, int], bool] = True,
epoch_print_logger: Callable[[Engine], Any] | None = None,
iteration_print_logger: Callable[[Engine], Any] | None = None,
output_transform: Callable = lambda x: x[0],
@@ -80,8 +80,14 @@
"""
Args:
- iteration_log: whether to log data when iteration completed, default to `True`.
- epoch_log: whether to log data when epoch completed, default to `True`.
+ iteration_log: whether to log data when iteration completed, default to `True`. ``iteration_log`` can
+ be also a function and it will be interpreted as an event filter
+ (see https://pytorch.org/ignite/generated/ignite.engine.events.Events.html for details).
+ Event filter function accepts as input engine and event value (iteration) and should return True/False.
+ Event filtering can be helpful to customize iteration logging frequency.
+ epoch_log: whether to log data when epoch completed, default to `True`. ``epoch_log`` can be
+ also a function and it will be interpreted as an event filter. See ``iteration_log`` argument for more
+ details.
epoch_print_logger: customized callable printer for epoch level logging.
Must accept parameter "engine", use default printer if None.
iteration_print_logger: customized callable printer for iteration level logging.
@@ -135,9 +141,15 @@
" please call `logging.basicConfig(stream=sys.stdout, level=logging.INFO)` to enable it."
)
if self.iteration_log and not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):
- engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)
+ event = Events.ITERATION_COMPLETED
+ if callable(self.iteration_log): # substitute event with new one using filter callable
+ event = event(event_filter=self.iteration_log)
+ engine.add_event_handler(event, self.iteration_completed)
if self.epoch_log and not engine.has_event_handler(self.epoch_completed, Events.EPOCH_COMPLETED):
- engine.add_event_handler(Events.EPOCH_COMPLETED, self.epoch_completed)
+ event = Events.EPOCH_COMPLETED
+ if callable(self.epoch_log): # substitute event with new one using filter callable
+ event = event(event_filter=self.epoch_log)
+ engine.add_event_handler(event, self.epoch_completed)
if not engine.has_event_handler(self.exception_raised, Events.EXCEPTION_RAISED):
engine.add_event_handler(Events.EXCEPTION_RAISED, self.exception_raised)
|
{"golden_diff": "diff --git a/monai/handlers/stats_handler.py b/monai/handlers/stats_handler.py\n--- a/monai/handlers/stats_handler.py\n+++ b/monai/handlers/stats_handler.py\n@@ -66,8 +66,8 @@\n \n def __init__(\n self,\n- iteration_log: bool = True,\n- epoch_log: bool = True,\n+ iteration_log: bool | Callable[[Engine, int], bool] = True,\n+ epoch_log: bool | Callable[[Engine, int], bool] = True,\n epoch_print_logger: Callable[[Engine], Any] | None = None,\n iteration_print_logger: Callable[[Engine], Any] | None = None,\n output_transform: Callable = lambda x: x[0],\n@@ -80,8 +80,14 @@\n \"\"\"\n \n Args:\n- iteration_log: whether to log data when iteration completed, default to `True`.\n- epoch_log: whether to log data when epoch completed, default to `True`.\n+ iteration_log: whether to log data when iteration completed, default to `True`. ``iteration_log`` can\n+ be also a function and it will be interpreted as an event filter\n+ (see https://pytorch.org/ignite/generated/ignite.engine.events.Events.html for details).\n+ Event filter function accepts as input engine and event value (iteration) and should return True/False.\n+ Event filtering can be helpful to customize iteration logging frequency.\n+ epoch_log: whether to log data when epoch completed, default to `True`. ``epoch_log`` can be\n+ also a function and it will be interpreted as an event filter. See ``iteration_log`` argument for more\n+ details.\n epoch_print_logger: customized callable printer for epoch level logging.\n Must accept parameter \"engine\", use default printer if None.\n iteration_print_logger: customized callable printer for iteration level logging.\n@@ -135,9 +141,15 @@\n \" please call `logging.basicConfig(stream=sys.stdout, level=logging.INFO)` to enable it.\"\n )\n if self.iteration_log and not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n- engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n+ event = Events.ITERATION_COMPLETED\n+ if callable(self.iteration_log): # substitute event with new one using filter callable\n+ event = event(event_filter=self.iteration_log)\n+ engine.add_event_handler(event, self.iteration_completed)\n if self.epoch_log and not engine.has_event_handler(self.epoch_completed, Events.EPOCH_COMPLETED):\n- engine.add_event_handler(Events.EPOCH_COMPLETED, self.epoch_completed)\n+ event = Events.EPOCH_COMPLETED\n+ if callable(self.epoch_log): # substitute event with new one using filter callable\n+ event = event(event_filter=self.epoch_log)\n+ engine.add_event_handler(event, self.epoch_completed)\n if not engine.has_event_handler(self.exception_raised, Events.EXCEPTION_RAISED):\n engine.add_event_handler(Events.EXCEPTION_RAISED, self.exception_raised)\n", "issue": "overriding iteration_log and epoch_log in StatsHandler\nCurrently, they are booleans but we could extend them to `Union[bool, Callable]` where if arg is callable then it could be used for event filtering:\r\n\r\n```python\r\n def attach(self, engine):\r\n\r\n\r\n if self.iteration_log and not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\r\n event_filter = self.iteration_log if callable(self.iteration_log) else None\r\n engine.add_event_handler(\r\n Events.ITERATION_COMPLETED(event_filter=event_filter), self.iteration_completed\r\n ) \r\n if self.epoch_log and not engine.has_event_handler(self.epoch_completed, Events.EPOCH_COMPLETED):\r\n event_filter = self.epoch_log if callable(self.epoch_log) else None\r\n 
engine.add_event_handler(\r\n Events.EPOCH_COMPLETED(event_filter=event_filter), self.epoch_completed\r\n ) \r\n```\r\n\r\n_Originally posted by @vfdev-5 in https://github.com/Project-MONAI/MONAI/discussions/5958#discussioncomment-4912997_\r\n \n", "before_files": [{"content": "# Copyright (c) MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import annotations\n\nimport logging\nimport warnings\nfrom collections.abc import Callable, Sequence\nfrom typing import TYPE_CHECKING, Any\n\nimport torch\n\nfrom monai.config import IgniteInfo\nfrom monai.utils import is_scalar, min_version, optional_import\n\nEvents, _ = optional_import(\"ignite.engine\", IgniteInfo.OPT_IMPORT_VERSION, min_version, \"Events\")\nif TYPE_CHECKING:\n from ignite.engine import Engine\nelse:\n Engine, _ = optional_import(\n \"ignite.engine\", IgniteInfo.OPT_IMPORT_VERSION, min_version, \"Engine\", as_type=\"decorator\"\n )\n\nDEFAULT_KEY_VAL_FORMAT = \"{}: {:.4f} \"\nDEFAULT_TAG = \"Loss\"\n\n\nclass StatsHandler:\n \"\"\"\n StatsHandler defines a set of Ignite Event-handlers for all the log printing logics.\n It can be used for any Ignite Engine(trainer, validator and evaluator).\n And it can support logging for epoch level and iteration level with pre-defined loggers.\n\n Note that if `name` arg is None, will leverage `engine.logger` as default logger directly, otherwise,\n get logger from `logging.getLogger(name)`, we can setup a logger outside first with the same `name`.\n As the default log level of `RootLogger` is `WARNING`, may need to call\n `logging.basicConfig(stream=sys.stdout, level=logging.INFO)` before running this handler to enable\n the stats logging.\n\n Default behaviors:\n - When EPOCH_COMPLETED, logs ``engine.state.metrics`` using ``self.logger``.\n - When ITERATION_COMPLETED, logs\n ``self.output_transform(engine.state.output)`` using ``self.logger``.\n\n Usage example::\n\n logging.basicConfig(stream=sys.stdout, level=logging.INFO)\n\n trainer = SupervisedTrainer(...)\n StatsHandler(name=\"train_stats\").attach(trainer)\n\n trainer.run()\n\n More details of example is available in the tutorial:\n https://github.com/Project-MONAI/tutorials/blob/master/modules/engines/unet_training_dict.py.\n\n \"\"\"\n\n def __init__(\n self,\n iteration_log: bool = True,\n epoch_log: bool = True,\n epoch_print_logger: Callable[[Engine], Any] | None = None,\n iteration_print_logger: Callable[[Engine], Any] | None = None,\n output_transform: Callable = lambda x: x[0],\n global_epoch_transform: Callable = lambda x: x,\n state_attributes: Sequence[str] | None = None,\n name: str | None = None,\n tag_name: str = DEFAULT_TAG,\n key_var_format: str = DEFAULT_KEY_VAL_FORMAT,\n ) -> None:\n \"\"\"\n\n Args:\n iteration_log: whether to log data when iteration completed, default to `True`.\n epoch_log: whether to log data when epoch completed, default to `True`.\n epoch_print_logger: customized callable printer for epoch level logging.\n Must accept parameter \"engine\", use default printer if 
None.\n iteration_print_logger: customized callable printer for iteration level logging.\n Must accept parameter \"engine\", use default printer if None.\n output_transform: a callable that is used to transform the\n ``ignite.engine.state.output`` into a scalar to print, or a dictionary of {key: scalar}.\n In the latter case, the output string will be formatted as key: value.\n By default this value logging happens when every iteration completed.\n The default behavior is to print loss from output[0] as output is a decollated list\n and we replicated loss value for every item of the decollated list.\n `engine.state` and `output_transform` inherit from the ignite concept:\n https://pytorch.org/ignite/concepts.html#state, explanation and usage example are in the tutorial:\n https://github.com/Project-MONAI/tutorials/blob/master/modules/batch_output_transform.ipynb.\n global_epoch_transform: a callable that is used to customize global epoch number.\n For example, in evaluation, the evaluator engine might want to print synced epoch number\n with the trainer engine.\n state_attributes: expected attributes from `engine.state`, if provided, will extract them\n when epoch completed.\n name: identifier of `logging.logger` to use, if None, defaulting to ``engine.logger``.\n tag_name: when iteration output is a scalar, tag_name is used to print\n tag_name: scalar_value to logger. Defaults to ``'Loss'``.\n key_var_format: a formatting string to control the output string format of key: value.\n\n \"\"\"\n\n self.iteration_log = iteration_log\n self.epoch_log = epoch_log\n self.epoch_print_logger = epoch_print_logger\n self.iteration_print_logger = iteration_print_logger\n self.output_transform = output_transform\n self.global_epoch_transform = global_epoch_transform\n self.state_attributes = state_attributes\n self.tag_name = tag_name\n self.key_var_format = key_var_format\n self.logger = logging.getLogger(name) # if `name` is None, will default to `engine.logger` when attached\n self.name = name\n\n def attach(self, engine: Engine) -> None:\n \"\"\"\n Register a set of Ignite Event-Handlers to a specified Ignite engine.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n\n \"\"\"\n if self.name is None:\n self.logger = engine.logger\n if self.logger.getEffectiveLevel() > logging.INFO or logging.root.getEffectiveLevel() > logging.INFO:\n warnings.warn(\n \"the effective log level of engine logger or RootLogger is higher than INFO, may not record log,\"\n \" please call `logging.basicConfig(stream=sys.stdout, level=logging.INFO)` to enable it.\"\n )\n if self.iteration_log and not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n if self.epoch_log and not engine.has_event_handler(self.epoch_completed, Events.EPOCH_COMPLETED):\n engine.add_event_handler(Events.EPOCH_COMPLETED, self.epoch_completed)\n if not engine.has_event_handler(self.exception_raised, Events.EXCEPTION_RAISED):\n engine.add_event_handler(Events.EXCEPTION_RAISED, self.exception_raised)\n\n def epoch_completed(self, engine: Engine) -> None:\n \"\"\"\n Handler for train or validation/evaluation epoch completed Event.\n Print epoch level log, default values are from Ignite `engine.state.metrics` dict.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n\n \"\"\"\n if self.epoch_print_logger is not None:\n self.epoch_print_logger(engine)\n else:\n 
self._default_epoch_print(engine)\n\n def iteration_completed(self, engine: Engine) -> None:\n \"\"\"\n Handler for train or validation/evaluation iteration completed Event.\n Print iteration level log, default values are from Ignite `engine.state.output`.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n\n \"\"\"\n if self.iteration_print_logger is not None:\n self.iteration_print_logger(engine)\n else:\n self._default_iteration_print(engine)\n\n def exception_raised(self, _engine: Engine, e: Exception) -> None:\n \"\"\"\n Handler for train or validation/evaluation exception raised Event.\n Print the exception information and traceback. This callback may be skipped because the logic\n with Ignite can only trigger the first attached handler for `EXCEPTION_RAISED` event.\n\n Args:\n _engine: Ignite Engine, unused argument.\n e: the exception caught in Ignite during engine.run().\n\n \"\"\"\n self.logger.exception(f\"Exception: {e}\")\n raise e\n\n def _default_epoch_print(self, engine: Engine) -> None:\n \"\"\"\n Execute epoch level log operation.\n Default to print the values from Ignite `engine.state.metrics` dict and\n print the values of specified attributes of `engine.state`.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n\n \"\"\"\n current_epoch = self.global_epoch_transform(engine.state.epoch)\n\n prints_dict = engine.state.metrics\n if prints_dict is not None and len(prints_dict) > 0:\n out_str = f\"Epoch[{current_epoch}] Metrics -- \"\n for name in sorted(prints_dict):\n value = prints_dict[name]\n out_str += self.key_var_format.format(name, value) if is_scalar(value) else f\"{name}: {str(value)}\"\n self.logger.info(out_str)\n\n if (\n hasattr(engine.state, \"key_metric_name\")\n and hasattr(engine.state, \"best_metric\")\n and hasattr(engine.state, \"best_metric_epoch\")\n and engine.state.key_metric_name is not None\n ):\n out_str = f\"Key metric: {engine.state.key_metric_name} \"\n out_str += f\"best value: {engine.state.best_metric} \"\n out_str += f\"at epoch: {engine.state.best_metric_epoch}\"\n self.logger.info(out_str)\n\n if self.state_attributes is not None and len(self.state_attributes) > 0:\n out_str = \"State values: \"\n for attr in self.state_attributes:\n out_str += f\"{attr}: {getattr(engine.state, attr, None)} \"\n self.logger.info(out_str)\n\n def _default_iteration_print(self, engine: Engine) -> None:\n \"\"\"\n Execute iteration log operation based on Ignite `engine.state.output` data.\n Print the values from `self.output_transform(engine.state.output)`.\n Since `engine.state.output` is a decollated list and we replicated the loss value for every item\n of the decollated list, the default behavior is to print the loss from `output[0]`.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n\n \"\"\"\n loss = self.output_transform(engine.state.output)\n if loss is None:\n return # no printing if the output is empty\n\n out_str = \"\"\n if isinstance(loss, dict): # print dictionary items\n for name in sorted(loss):\n value = loss[name]\n if not is_scalar(value):\n warnings.warn(\n \"ignoring non-scalar output in StatsHandler,\"\n \" make sure `output_transform(engine.state.output)` returns\"\n \" a scalar or dictionary of key and scalar pairs to avoid this warning.\"\n \" {}:{}\".format(name, type(value))\n )\n continue # not printing multi dimensional output\n out_str += self.key_var_format.format(name, value.item() if isinstance(value, torch.Tensor) else value)\n elif 
is_scalar(loss): # not printing multi dimensional output\n out_str += self.key_var_format.format(\n self.tag_name, loss.item() if isinstance(loss, torch.Tensor) else loss\n )\n else:\n warnings.warn(\n \"ignoring non-scalar output in StatsHandler,\"\n \" make sure `output_transform(engine.state.output)` returns\"\n \" a scalar or a dictionary of key and scalar pairs to avoid this warning.\"\n \" {}\".format(type(loss))\n )\n\n if not out_str:\n return # no value to print\n\n num_iterations = engine.state.epoch_length\n current_iteration = engine.state.iteration\n if num_iterations is not None:\n current_iteration = (current_iteration - 1) % num_iterations + 1\n current_epoch = engine.state.epoch\n num_epochs = engine.state.max_epochs\n\n base_str = f\"Epoch: {current_epoch}/{num_epochs}, Iter: {current_iteration}/{num_iterations} --\"\n\n self.logger.info(\" \".join([base_str, out_str]))\n", "path": "monai/handlers/stats_handler.py"}], "after_files": [{"content": "# Copyright (c) MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import annotations\n\nimport logging\nimport warnings\nfrom collections.abc import Callable, Sequence\nfrom typing import TYPE_CHECKING, Any\n\nimport torch\n\nfrom monai.config import IgniteInfo\nfrom monai.utils import is_scalar, min_version, optional_import\n\nEvents, _ = optional_import(\"ignite.engine\", IgniteInfo.OPT_IMPORT_VERSION, min_version, \"Events\")\nif TYPE_CHECKING:\n from ignite.engine import Engine\nelse:\n Engine, _ = optional_import(\n \"ignite.engine\", IgniteInfo.OPT_IMPORT_VERSION, min_version, \"Engine\", as_type=\"decorator\"\n )\n\nDEFAULT_KEY_VAL_FORMAT = \"{}: {:.4f} \"\nDEFAULT_TAG = \"Loss\"\n\n\nclass StatsHandler:\n \"\"\"\n StatsHandler defines a set of Ignite Event-handlers for all the log printing logics.\n It can be used for any Ignite Engine(trainer, validator and evaluator).\n And it can support logging for epoch level and iteration level with pre-defined loggers.\n\n Note that if `name` arg is None, will leverage `engine.logger` as default logger directly, otherwise,\n get logger from `logging.getLogger(name)`, we can setup a logger outside first with the same `name`.\n As the default log level of `RootLogger` is `WARNING`, may need to call\n `logging.basicConfig(stream=sys.stdout, level=logging.INFO)` before running this handler to enable\n the stats logging.\n\n Default behaviors:\n - When EPOCH_COMPLETED, logs ``engine.state.metrics`` using ``self.logger``.\n - When ITERATION_COMPLETED, logs\n ``self.output_transform(engine.state.output)`` using ``self.logger``.\n\n Usage example::\n\n logging.basicConfig(stream=sys.stdout, level=logging.INFO)\n\n trainer = SupervisedTrainer(...)\n StatsHandler(name=\"train_stats\").attach(trainer)\n\n trainer.run()\n\n More details of example is available in the tutorial:\n https://github.com/Project-MONAI/tutorials/blob/master/modules/engines/unet_training_dict.py.\n\n \"\"\"\n\n def __init__(\n self,\n iteration_log: bool | Callable[[Engine, int], bool] = 
True,\n epoch_log: bool | Callable[[Engine, int], bool] = True,\n epoch_print_logger: Callable[[Engine], Any] | None = None,\n iteration_print_logger: Callable[[Engine], Any] | None = None,\n output_transform: Callable = lambda x: x[0],\n global_epoch_transform: Callable = lambda x: x,\n state_attributes: Sequence[str] | None = None,\n name: str | None = None,\n tag_name: str = DEFAULT_TAG,\n key_var_format: str = DEFAULT_KEY_VAL_FORMAT,\n ) -> None:\n \"\"\"\n\n Args:\n iteration_log: whether to log data when iteration completed, default to `True`. ``iteration_log`` can\n be also a function and it will be interpreted as an event filter\n (see https://pytorch.org/ignite/generated/ignite.engine.events.Events.html for details).\n Event filter function accepts as input engine and event value (iteration) and should return True/False.\n Event filtering can be helpful to customize iteration logging frequency.\n epoch_log: whether to log data when epoch completed, default to `True`. ``epoch_log`` can be\n also a function and it will be interpreted as an event filter. See ``iteration_log`` argument for more\n details.\n epoch_print_logger: customized callable printer for epoch level logging.\n Must accept parameter \"engine\", use default printer if None.\n iteration_print_logger: customized callable printer for iteration level logging.\n Must accept parameter \"engine\", use default printer if None.\n output_transform: a callable that is used to transform the\n ``ignite.engine.state.output`` into a scalar to print, or a dictionary of {key: scalar}.\n In the latter case, the output string will be formatted as key: value.\n By default this value logging happens when every iteration completed.\n The default behavior is to print loss from output[0] as output is a decollated list\n and we replicated loss value for every item of the decollated list.\n `engine.state` and `output_transform` inherit from the ignite concept:\n https://pytorch.org/ignite/concepts.html#state, explanation and usage example are in the tutorial:\n https://github.com/Project-MONAI/tutorials/blob/master/modules/batch_output_transform.ipynb.\n global_epoch_transform: a callable that is used to customize global epoch number.\n For example, in evaluation, the evaluator engine might want to print synced epoch number\n with the trainer engine.\n state_attributes: expected attributes from `engine.state`, if provided, will extract them\n when epoch completed.\n name: identifier of `logging.logger` to use, if None, defaulting to ``engine.logger``.\n tag_name: when iteration output is a scalar, tag_name is used to print\n tag_name: scalar_value to logger. 
Defaults to ``'Loss'``.\n key_var_format: a formatting string to control the output string format of key: value.\n\n \"\"\"\n\n self.iteration_log = iteration_log\n self.epoch_log = epoch_log\n self.epoch_print_logger = epoch_print_logger\n self.iteration_print_logger = iteration_print_logger\n self.output_transform = output_transform\n self.global_epoch_transform = global_epoch_transform\n self.state_attributes = state_attributes\n self.tag_name = tag_name\n self.key_var_format = key_var_format\n self.logger = logging.getLogger(name) # if `name` is None, will default to `engine.logger` when attached\n self.name = name\n\n def attach(self, engine: Engine) -> None:\n \"\"\"\n Register a set of Ignite Event-Handlers to a specified Ignite engine.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n\n \"\"\"\n if self.name is None:\n self.logger = engine.logger\n if self.logger.getEffectiveLevel() > logging.INFO or logging.root.getEffectiveLevel() > logging.INFO:\n warnings.warn(\n \"the effective log level of engine logger or RootLogger is higher than INFO, may not record log,\"\n \" please call `logging.basicConfig(stream=sys.stdout, level=logging.INFO)` to enable it.\"\n )\n if self.iteration_log and not engine.has_event_handler(self.iteration_completed, Events.ITERATION_COMPLETED):\n event = Events.ITERATION_COMPLETED\n if callable(self.iteration_log): # substitute event with new one using filter callable\n event = event(event_filter=self.iteration_log)\n engine.add_event_handler(event, self.iteration_completed)\n if self.epoch_log and not engine.has_event_handler(self.epoch_completed, Events.EPOCH_COMPLETED):\n event = Events.EPOCH_COMPLETED\n if callable(self.epoch_log): # substitute event with new one using filter callable\n event = event(event_filter=self.epoch_log)\n engine.add_event_handler(event, self.epoch_completed)\n if not engine.has_event_handler(self.exception_raised, Events.EXCEPTION_RAISED):\n engine.add_event_handler(Events.EXCEPTION_RAISED, self.exception_raised)\n\n def epoch_completed(self, engine: Engine) -> None:\n \"\"\"\n Handler for train or validation/evaluation epoch completed Event.\n Print epoch level log, default values are from Ignite `engine.state.metrics` dict.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n\n \"\"\"\n if self.epoch_print_logger is not None:\n self.epoch_print_logger(engine)\n else:\n self._default_epoch_print(engine)\n\n def iteration_completed(self, engine: Engine) -> None:\n \"\"\"\n Handler for train or validation/evaluation iteration completed Event.\n Print iteration level log, default values are from Ignite `engine.state.output`.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n\n \"\"\"\n if self.iteration_print_logger is not None:\n self.iteration_print_logger(engine)\n else:\n self._default_iteration_print(engine)\n\n def exception_raised(self, _engine: Engine, e: Exception) -> None:\n \"\"\"\n Handler for train or validation/evaluation exception raised Event.\n Print the exception information and traceback. 
This callback may be skipped because the logic\n with Ignite can only trigger the first attached handler for `EXCEPTION_RAISED` event.\n\n Args:\n _engine: Ignite Engine, unused argument.\n e: the exception caught in Ignite during engine.run().\n\n \"\"\"\n self.logger.exception(f\"Exception: {e}\")\n raise e\n\n def _default_epoch_print(self, engine: Engine) -> None:\n \"\"\"\n Execute epoch level log operation.\n Default to print the values from Ignite `engine.state.metrics` dict and\n print the values of specified attributes of `engine.state`.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n\n \"\"\"\n current_epoch = self.global_epoch_transform(engine.state.epoch)\n\n prints_dict = engine.state.metrics\n if prints_dict is not None and len(prints_dict) > 0:\n out_str = f\"Epoch[{current_epoch}] Metrics -- \"\n for name in sorted(prints_dict):\n value = prints_dict[name]\n out_str += self.key_var_format.format(name, value) if is_scalar(value) else f\"{name}: {str(value)}\"\n self.logger.info(out_str)\n\n if (\n hasattr(engine.state, \"key_metric_name\")\n and hasattr(engine.state, \"best_metric\")\n and hasattr(engine.state, \"best_metric_epoch\")\n and engine.state.key_metric_name is not None\n ):\n out_str = f\"Key metric: {engine.state.key_metric_name} \"\n out_str += f\"best value: {engine.state.best_metric} \"\n out_str += f\"at epoch: {engine.state.best_metric_epoch}\"\n self.logger.info(out_str)\n\n if self.state_attributes is not None and len(self.state_attributes) > 0:\n out_str = \"State values: \"\n for attr in self.state_attributes:\n out_str += f\"{attr}: {getattr(engine.state, attr, None)} \"\n self.logger.info(out_str)\n\n def _default_iteration_print(self, engine: Engine) -> None:\n \"\"\"\n Execute iteration log operation based on Ignite `engine.state.output` data.\n Print the values from `self.output_transform(engine.state.output)`.\n Since `engine.state.output` is a decollated list and we replicated the loss value for every item\n of the decollated list, the default behavior is to print the loss from `output[0]`.\n\n Args:\n engine: Ignite Engine, it can be a trainer, validator or evaluator.\n\n \"\"\"\n loss = self.output_transform(engine.state.output)\n if loss is None:\n return # no printing if the output is empty\n\n out_str = \"\"\n if isinstance(loss, dict): # print dictionary items\n for name in sorted(loss):\n value = loss[name]\n if not is_scalar(value):\n warnings.warn(\n \"ignoring non-scalar output in StatsHandler,\"\n \" make sure `output_transform(engine.state.output)` returns\"\n \" a scalar or dictionary of key and scalar pairs to avoid this warning.\"\n \" {}:{}\".format(name, type(value))\n )\n continue # not printing multi dimensional output\n out_str += self.key_var_format.format(name, value.item() if isinstance(value, torch.Tensor) else value)\n elif is_scalar(loss): # not printing multi dimensional output\n out_str += self.key_var_format.format(\n self.tag_name, loss.item() if isinstance(loss, torch.Tensor) else loss\n )\n else:\n warnings.warn(\n \"ignoring non-scalar output in StatsHandler,\"\n \" make sure `output_transform(engine.state.output)` returns\"\n \" a scalar or a dictionary of key and scalar pairs to avoid this warning.\"\n \" {}\".format(type(loss))\n )\n\n if not out_str:\n return # no value to print\n\n num_iterations = engine.state.epoch_length\n current_iteration = engine.state.iteration\n if num_iterations is not None:\n current_iteration = (current_iteration - 1) % num_iterations + 1\n 
current_epoch = engine.state.epoch\n num_epochs = engine.state.max_epochs\n\n base_str = f\"Epoch: {current_epoch}/{num_epochs}, Iter: {current_iteration}/{num_iterations} --\"\n\n self.logger.info(\" \".join([base_str, out_str]))\n", "path": "monai/handlers/stats_handler.py"}]}
| 3,824 | 665 |
gh_patches_debug_24712
|
rasdani/github-patches
|
git_diff
|
googleapis__google-api-python-client-1126
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Not possible to provide `http` parameter in `next_chunk()` with resumable upload? HttpError 400
Hi,
This ticket is probably pointless, but maybe it highlights the need for a better error message?
The following code works fine with a chunk size larger than 1024*1024, but using a chunksize of 1024 ends in an HttpError 400. Why is that so?
```python
from google.oauth2 import service_account
import googleapiclient.discovery as gad
import googleapiclient.http as gah
import httplib2
from googleapiclient import _auth
# Run once at start.
creds = service_account.Credentials.from_service_account_file(creds).with_scopes(['https://www.googleapis.com/auth/drive'])
drive = gad.build('drive', 'v3', credentials=creds)
# Run in a loop / implement both thread-safety and caching in http object.
file_name = 'my_file.txt'
cache_path='.cache'
# Using 1024 ends up in an error.
# Using 1024*1024 works fine
# Using default parameter also works fine, but is actually not a resumable upload.
media = gah.MediaFileUpload(file_name, chunksize=1024, resumable=True) # Here
file_metadata = {'name': file_name, 'parents': [folder_id]}
request = drive.files().create(body=file_metadata, media_body=media, fields='id')
auth_http = _auth.authorized_http(creds)
auth_http.cache = httplib2.FileCache(cache_path)
response = None
while response is None:
status, response = request.next_chunk(num_retries=4, http=auth_http)
```
Here is the error message I get when `chunksize=1024`.
```python
HttpError: <HttpError 400 when requesting https://www.googleapis.com/upload/drive/v3/files?fields=id&alt=json&uploadType=resumable returned "Bad Request">
```
Please, why is that so?
I thank you for your support.
Have a good day,
Bests,
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `googleapiclient/errors.py`
Content:
```
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Errors for the library.
16
17 All exceptions defined by the library
18 should be defined in this file.
19 """
20 from __future__ import absolute_import
21
22 __author__ = "[email protected] (Joe Gregorio)"
23
24 import json
25
26 from googleapiclient import _helpers as util
27
28
29 class Error(Exception):
30 """Base error for this module."""
31
32 pass
33
34
35 class HttpError(Error):
36 """HTTP data was invalid or unexpected."""
37
38 @util.positional(3)
39 def __init__(self, resp, content, uri=None):
40 self.resp = resp
41 if not isinstance(content, bytes):
42 raise TypeError("HTTP content should be bytes")
43 self.content = content
44 self.uri = uri
45 self.error_details = ""
46
47 def _get_reason(self):
48 """Calculate the reason for the error from the response content."""
49 reason = self.resp.reason
50 try:
51 data = json.loads(self.content.decode("utf-8"))
52 if isinstance(data, dict):
53 reason = data["error"]["message"]
54 error_detail_keyword = next((kw for kw in ["detail", "details", "message"] if kw in data["error"]), "")
55 if error_detail_keyword:
56 self.error_details = data["error"][error_detail_keyword]
57 elif isinstance(data, list) and len(data) > 0:
58 first_error = data[0]
59 reason = first_error["error"]["message"]
60 if "details" in first_error["error"]:
61 self.error_details = first_error["error"]["details"]
62 except (ValueError, KeyError, TypeError):
63 pass
64 if reason is None:
65 reason = ""
66 return reason
67
68 def __repr__(self):
69 reason = self._get_reason()
70 if self.error_details:
71 return '<HttpError %s when requesting %s returned "%s". Details: "%s">' % (
72 self.resp.status,
73 self.uri,
74 reason.strip(),
75 self.error_details,
76 )
77 elif self.uri:
78 return '<HttpError %s when requesting %s returned "%s">' % (
79 self.resp.status,
80 self.uri,
81 self._get_reason().strip(),
82 )
83 else:
84 return '<HttpError %s "%s">' % (self.resp.status, self._get_reason())
85
86 __str__ = __repr__
87
88
89 class InvalidJsonError(Error):
90 """The JSON returned could not be parsed."""
91
92 pass
93
94
95 class UnknownFileType(Error):
96 """File type unknown or unexpected."""
97
98 pass
99
100
101 class UnknownLinkType(Error):
102 """Link type unknown or unexpected."""
103
104 pass
105
106
107 class UnknownApiNameOrVersion(Error):
108 """No API with that name and version exists."""
109
110 pass
111
112
113 class UnacceptableMimeTypeError(Error):
114 """That is an unacceptable mimetype for this operation."""
115
116 pass
117
118
119 class MediaUploadSizeError(Error):
120 """Media is larger than the method can accept."""
121
122 pass
123
124
125 class ResumableUploadError(HttpError):
126 """Error occurred during resumable upload."""
127
128 pass
129
130
131 class InvalidChunkSizeError(Error):
132 """The given chunksize is not valid."""
133
134 pass
135
136
137 class InvalidNotificationError(Error):
138 """The channel Notification is invalid."""
139
140 pass
141
142
143 class BatchError(HttpError):
144 """Error occurred during batch operations."""
145
146 @util.positional(2)
147 def __init__(self, reason, resp=None, content=None):
148 self.resp = resp
149 self.content = content
150 self.reason = reason
151
152 def __repr__(self):
153 if getattr(self.resp, "status", None) is None:
154 return '<BatchError "%s">' % (self.reason)
155 else:
156 return '<BatchError %s "%s">' % (self.resp.status, self.reason)
157
158 __str__ = __repr__
159
160
161 class UnexpectedMethodError(Error):
162 """Exception raised by RequestMockBuilder on unexpected calls."""
163
164 @util.positional(1)
165 def __init__(self, methodId=None):
166 """Constructor for an UnexpectedMethodError."""
167 super(UnexpectedMethodError, self).__init__(
168 "Received unexpected call %s" % methodId
169 )
170
171
172 class UnexpectedBodyError(Error):
173 """Exception raised by RequestMockBuilder on unexpected bodies."""
174
175 def __init__(self, expected, provided):
176 """Constructor for an UnexpectedMethodError."""
177 super(UnexpectedBodyError, self).__init__(
178 "Expected: [%s] - Provided: [%s]" % (expected, provided)
179 )
180
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/googleapiclient/errors.py b/googleapiclient/errors.py
--- a/googleapiclient/errors.py
+++ b/googleapiclient/errors.py
@@ -48,7 +48,11 @@
"""Calculate the reason for the error from the response content."""
reason = self.resp.reason
try:
- data = json.loads(self.content.decode("utf-8"))
+ try:
+ data = json.loads(self.content.decode("utf-8"))
+ except json.JSONDecodeError:
+ # In case it is not json
+ data = self.content.decode("utf-8")
if isinstance(data, dict):
reason = data["error"]["message"]
error_detail_keyword = next((kw for kw in ["detail", "details", "message"] if kw in data["error"]), "")
@@ -59,6 +63,8 @@
reason = first_error["error"]["message"]
if "details" in first_error["error"]:
self.error_details = first_error["error"]["details"]
+ else:
+ self.error_details = data
except (ValueError, KeyError, TypeError):
pass
if reason is None:
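A quick sketch of what the patched `_get_reason` does when the server returns a non-JSON body, as in the linked issue; the response object and URI below are illustrative stand-ins:
```python
import httplib2
from googleapiclient.errors import HttpError

resp = httplib2.Response({"status": 400})
resp.reason = "Bad Request"  # only .status and .reason are read by HttpError

err = HttpError(resp, b"Bad Request", uri="https://www.googleapis.com/upload/drive/v3/files")
print(err)                # the raw body now also appears as Details: "Bad Request"
print(err.error_details)  # "Bad Request" instead of the previous empty string
```
As for the 400 itself, Google's resumable-upload protocol expects every chunk except the last to be a multiple of 256 KB (256 * 1024 bytes), which appears to be why `chunksize=1024` fails while `chunksize=1024*1024` works.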
|
{"golden_diff": "diff --git a/googleapiclient/errors.py b/googleapiclient/errors.py\n--- a/googleapiclient/errors.py\n+++ b/googleapiclient/errors.py\n@@ -48,7 +48,11 @@\n \"\"\"Calculate the reason for the error from the response content.\"\"\"\n reason = self.resp.reason\n try:\n- data = json.loads(self.content.decode(\"utf-8\"))\n+ try:\n+ data = json.loads(self.content.decode(\"utf-8\"))\n+ except json.JSONDecodeError:\n+ # In case it is not json\n+ data = self.content.decode(\"utf-8\")\n if isinstance(data, dict):\n reason = data[\"error\"][\"message\"]\n error_detail_keyword = next((kw for kw in [\"detail\", \"details\", \"message\"] if kw in data[\"error\"]), \"\")\n@@ -59,6 +63,8 @@\n reason = first_error[\"error\"][\"message\"]\n if \"details\" in first_error[\"error\"]:\n self.error_details = first_error[\"error\"][\"details\"]\n+ else:\n+ self.error_details = data\n except (ValueError, KeyError, TypeError):\n pass\n if reason is None:\n", "issue": "Not possible to provide `http` parameter in `next_chunk()` with resumable upload? HttpError 400\nHi,\r\n\r\nThis ticket is probably pointless, but maybe it hilights the need of a better error message?\r\n\r\nThe following code works fine with chunk size larger than 1024*1024, but using a chunksize of 1024 ends in a HttpError 400. Why is that so?\r\n\r\n```python\r\nfrom google.oauth2 import service_account\r\nimport googleapiclient.discovery as gad\r\nimport googleapiclient.http as gah\r\n\r\n# Run once at start.\r\ncreds = service_account.Credentials.from_service_account_file(creds).with_scopes(['https://www.googleapis.com/auth/drive'])\r\ndrive = gad.build('drive', 'v3', credentials=creds)\r\n\r\n# Run in a loop / implement both thread-safety and caching in http object.\r\nfilename = 'my_file.txt'\r\ncache_path='.cache'\r\n# Using 1024 ends up in an error.\r\n# Using 1024*1024 works fine\r\n# Using default parameter also works fine, but is actually not a resumable upload.\r\nmedia = gah.MediaFileUpload(file_name, chunksize=1024, resumable=True) # Here\r\nfile_metadata = {'name': file_name, 'parents': [folder_id]}\r\nrequest = self.drive.files().create(body=file_metadata, media_body=media, fields='id')\r\nauth_http = _auth.authorized_http(creds)\r\nauth_http.cache = httplib2.FileCache(cache_path)\r\nresponse = None\r\nwhile response is None:\r\n status, response = request.next_chunk(num_retries=4, http=auth_http)\r\n```\r\nHere is the error message I get when `chunksize=1024`.\r\n```python\r\nHttpError: <HttpError 400 when requesting https://www.googleapis.com/upload/drive/v3/files?fields=id&alt=json&uploadType=resumable returned \"Bad Request\">\r\n```\r\nPlease, why is that so?\r\n\r\nI thank you for your support.\r\nHave a good day,\r\nBests,\n", "before_files": [{"content": "# Copyright 2014 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Errors for the library.\n\nAll exceptions defined by the library\nshould be defined in this file.\n\"\"\"\nfrom __future__ import absolute_import\n\n__author__ = \"[email protected] (Joe Gregorio)\"\n\nimport json\n\nfrom googleapiclient import _helpers as util\n\n\nclass Error(Exception):\n \"\"\"Base error for this module.\"\"\"\n\n pass\n\n\nclass HttpError(Error):\n \"\"\"HTTP data was invalid or unexpected.\"\"\"\n\n @util.positional(3)\n def __init__(self, resp, content, uri=None):\n self.resp = resp\n if not isinstance(content, bytes):\n raise TypeError(\"HTTP content should be bytes\")\n self.content = content\n self.uri = uri\n self.error_details = \"\"\n\n def _get_reason(self):\n \"\"\"Calculate the reason for the error from the response content.\"\"\"\n reason = self.resp.reason\n try:\n data = json.loads(self.content.decode(\"utf-8\"))\n if isinstance(data, dict):\n reason = data[\"error\"][\"message\"]\n error_detail_keyword = next((kw for kw in [\"detail\", \"details\", \"message\"] if kw in data[\"error\"]), \"\")\n if error_detail_keyword:\n self.error_details = data[\"error\"][error_detail_keyword]\n elif isinstance(data, list) and len(data) > 0:\n first_error = data[0]\n reason = first_error[\"error\"][\"message\"]\n if \"details\" in first_error[\"error\"]:\n self.error_details = first_error[\"error\"][\"details\"]\n except (ValueError, KeyError, TypeError):\n pass\n if reason is None:\n reason = \"\"\n return reason\n\n def __repr__(self):\n reason = self._get_reason()\n if self.error_details:\n return '<HttpError %s when requesting %s returned \"%s\". 
Details: \"%s\">' % (\n self.resp.status,\n self.uri,\n reason.strip(),\n self.error_details,\n )\n elif self.uri:\n return '<HttpError %s when requesting %s returned \"%s\">' % (\n self.resp.status,\n self.uri,\n self._get_reason().strip(),\n )\n else:\n return '<HttpError %s \"%s\">' % (self.resp.status, self._get_reason())\n\n __str__ = __repr__\n\n\nclass InvalidJsonError(Error):\n \"\"\"The JSON returned could not be parsed.\"\"\"\n\n pass\n\n\nclass UnknownFileType(Error):\n \"\"\"File type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownLinkType(Error):\n \"\"\"Link type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownApiNameOrVersion(Error):\n \"\"\"No API with that name and version exists.\"\"\"\n\n pass\n\n\nclass UnacceptableMimeTypeError(Error):\n \"\"\"That is an unacceptable mimetype for this operation.\"\"\"\n\n pass\n\n\nclass MediaUploadSizeError(Error):\n \"\"\"Media is larger than the method can accept.\"\"\"\n\n pass\n\n\nclass ResumableUploadError(HttpError):\n \"\"\"Error occurred during resumable upload.\"\"\"\n\n pass\n\n\nclass InvalidChunkSizeError(Error):\n \"\"\"The given chunksize is not valid.\"\"\"\n\n pass\n\n\nclass InvalidNotificationError(Error):\n \"\"\"The channel Notification is invalid.\"\"\"\n\n pass\n\n\nclass BatchError(HttpError):\n \"\"\"Error occurred during batch operations.\"\"\"\n\n @util.positional(2)\n def __init__(self, reason, resp=None, content=None):\n self.resp = resp\n self.content = content\n self.reason = reason\n\n def __repr__(self):\n if getattr(self.resp, \"status\", None) is None:\n return '<BatchError \"%s\">' % (self.reason)\n else:\n return '<BatchError %s \"%s\">' % (self.resp.status, self.reason)\n\n __str__ = __repr__\n\n\nclass UnexpectedMethodError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected calls.\"\"\"\n\n @util.positional(1)\n def __init__(self, methodId=None):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedMethodError, self).__init__(\n \"Received unexpected call %s\" % methodId\n )\n\n\nclass UnexpectedBodyError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected bodies.\"\"\"\n\n def __init__(self, expected, provided):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedBodyError, self).__init__(\n \"Expected: [%s] - Provided: [%s]\" % (expected, provided)\n )\n", "path": "googleapiclient/errors.py"}], "after_files": [{"content": "# Copyright 2014 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Errors for the library.\n\nAll exceptions defined by the library\nshould be defined in this file.\n\"\"\"\nfrom __future__ import absolute_import\n\n__author__ = \"[email protected] (Joe Gregorio)\"\n\nimport json\n\nfrom googleapiclient import _helpers as util\n\n\nclass Error(Exception):\n \"\"\"Base error for this module.\"\"\"\n\n pass\n\n\nclass HttpError(Error):\n \"\"\"HTTP data was invalid or unexpected.\"\"\"\n\n @util.positional(3)\n def __init__(self, resp, content, uri=None):\n self.resp = resp\n if not isinstance(content, bytes):\n raise TypeError(\"HTTP content should be bytes\")\n self.content = content\n self.uri = uri\n self.error_details = \"\"\n\n def _get_reason(self):\n \"\"\"Calculate the reason for the error from the response content.\"\"\"\n reason = self.resp.reason\n try:\n try:\n data = json.loads(self.content.decode(\"utf-8\"))\n except json.JSONDecodeError:\n # In case it is not json\n data = self.content.decode(\"utf-8\")\n if isinstance(data, dict):\n reason = data[\"error\"][\"message\"]\n error_detail_keyword = next((kw for kw in [\"detail\", \"details\", \"message\"] if kw in data[\"error\"]), \"\")\n if error_detail_keyword:\n self.error_details = data[\"error\"][error_detail_keyword]\n elif isinstance(data, list) and len(data) > 0:\n first_error = data[0]\n reason = first_error[\"error\"][\"message\"]\n if \"details\" in first_error[\"error\"]:\n self.error_details = first_error[\"error\"][\"details\"]\n else:\n self.error_details = data\n except (ValueError, KeyError, TypeError):\n pass\n if reason is None:\n reason = \"\"\n return reason\n\n def __repr__(self):\n reason = self._get_reason()\n if self.error_details:\n return '<HttpError %s when requesting %s returned \"%s\". 
Details: \"%s\">' % (\n self.resp.status,\n self.uri,\n reason.strip(),\n self.error_details,\n )\n elif self.uri:\n return '<HttpError %s when requesting %s returned \"%s\">' % (\n self.resp.status,\n self.uri,\n self._get_reason().strip(),\n )\n else:\n return '<HttpError %s \"%s\">' % (self.resp.status, self._get_reason())\n\n __str__ = __repr__\n\n\nclass InvalidJsonError(Error):\n \"\"\"The JSON returned could not be parsed.\"\"\"\n\n pass\n\n\nclass UnknownFileType(Error):\n \"\"\"File type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownLinkType(Error):\n \"\"\"Link type unknown or unexpected.\"\"\"\n\n pass\n\n\nclass UnknownApiNameOrVersion(Error):\n \"\"\"No API with that name and version exists.\"\"\"\n\n pass\n\n\nclass UnacceptableMimeTypeError(Error):\n \"\"\"That is an unacceptable mimetype for this operation.\"\"\"\n\n pass\n\n\nclass MediaUploadSizeError(Error):\n \"\"\"Media is larger than the method can accept.\"\"\"\n\n pass\n\n\nclass ResumableUploadError(HttpError):\n \"\"\"Error occurred during resumable upload.\"\"\"\n\n pass\n\n\nclass InvalidChunkSizeError(Error):\n \"\"\"The given chunksize is not valid.\"\"\"\n\n pass\n\n\nclass InvalidNotificationError(Error):\n \"\"\"The channel Notification is invalid.\"\"\"\n\n pass\n\n\nclass BatchError(HttpError):\n \"\"\"Error occurred during batch operations.\"\"\"\n\n @util.positional(2)\n def __init__(self, reason, resp=None, content=None):\n self.resp = resp\n self.content = content\n self.reason = reason\n\n def __repr__(self):\n if getattr(self.resp, \"status\", None) is None:\n return '<BatchError \"%s\">' % (self.reason)\n else:\n return '<BatchError %s \"%s\">' % (self.resp.status, self.reason)\n\n __str__ = __repr__\n\n\nclass UnexpectedMethodError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected calls.\"\"\"\n\n @util.positional(1)\n def __init__(self, methodId=None):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedMethodError, self).__init__(\n \"Received unexpected call %s\" % methodId\n )\n\n\nclass UnexpectedBodyError(Error):\n \"\"\"Exception raised by RequestMockBuilder on unexpected bodies.\"\"\"\n\n def __init__(self, expected, provided):\n \"\"\"Constructor for an UnexpectedMethodError.\"\"\"\n super(UnexpectedBodyError, self).__init__(\n \"Expected: [%s] - Provided: [%s]\" % (expected, provided)\n )\n", "path": "googleapiclient/errors.py"}]}
| 2,241 | 254 |
gh_patches_debug_33997
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-1318
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve R2Score metric for DDP
## 🚀 Feature
The current implementation of the `R2Score` metric is not compatible with DDP. The idea is to improve this metric to work in parallel.
Related to #1284
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/contrib/metrics/regression/r2_score.py`
Content:
```
1 import torch
2
3 from ignite.contrib.metrics.regression._base import _BaseRegression
4 from ignite.exceptions import NotComputableError
5
6
7 class R2Score(_BaseRegression):
8 r"""
9 Calculates the R-Squared, the
10 `coefficient of determination <https://en.wikipedia.org/wiki/Coefficient_of_determination>`_:
11
12 :math:`R^2 = 1 - \frac{\sum_{j=1}^n(A_j - P_j)^2}{\sum_{j=1}^n(A_j - \bar{A})^2}`,
13
14 where :math:`A_j` is the ground truth, :math:`P_j` is the predicted value and
15 :math:`\bar{A}` is the mean of the ground truth.
16
17 - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.
18 - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)` and of type `float32`.
19 """
20
21 def reset(self):
22 self._num_examples = 0
23 self._sum_of_errors = 0
24 self._y_sq_sum = 0
25 self._y_sum = 0
26
27 def _update(self, output):
28 y_pred, y = output
29 self._num_examples += y.shape[0]
30 self._sum_of_errors += torch.sum(torch.pow(y_pred - y, 2)).item()
31
32 self._y_sum += torch.sum(y).item()
33 self._y_sq_sum += torch.sum(torch.pow(y, 2)).item()
34
35 def compute(self):
36 if self._num_examples == 0:
37 raise NotComputableError("R2Score must have at least one example before it can be computed.")
38 return 1 - self._sum_of_errors / (self._y_sq_sum - (self._y_sum ** 2) / self._num_examples)
39
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/contrib/metrics/regression/r2_score.py b/ignite/contrib/metrics/regression/r2_score.py
--- a/ignite/contrib/metrics/regression/r2_score.py
+++ b/ignite/contrib/metrics/regression/r2_score.py
@@ -1,7 +1,10 @@
+from typing import Callable, Union
+
import torch
from ignite.contrib.metrics.regression._base import _BaseRegression
from ignite.exceptions import NotComputableError
+from ignite.metrics.metric import reinit__is_reduced, sync_all_reduce
class R2Score(_BaseRegression):
@@ -18,21 +21,32 @@
- `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)` and of type `float32`.
"""
+ def __init__(
+ self, output_transform: Callable = lambda x: x, device: Union[str, torch.device] = torch.device("cpu")
+ ):
+ self._num_examples = None
+ self._sum_of_errors = None
+ self._y_sq_sum = None
+ self._y_sum = None
+ super(R2Score, self).__init__(output_transform, device)
+
+ @reinit__is_reduced
def reset(self):
self._num_examples = 0
- self._sum_of_errors = 0
- self._y_sq_sum = 0
- self._y_sum = 0
+ self._sum_of_errors = torch.tensor(0.0, device=self._device)
+ self._y_sq_sum = torch.tensor(0.0, device=self._device)
+ self._y_sum = torch.tensor(0.0, device=self._device)
def _update(self, output):
y_pred, y = output
self._num_examples += y.shape[0]
- self._sum_of_errors += torch.sum(torch.pow(y_pred - y, 2)).item()
+ self._sum_of_errors += torch.sum(torch.pow(y_pred - y, 2)).to(self._device)
- self._y_sum += torch.sum(y).item()
- self._y_sq_sum += torch.sum(torch.pow(y, 2)).item()
+ self._y_sum += torch.sum(y).to(self._device)
+ self._y_sq_sum += torch.sum(torch.pow(y, 2)).to(self._device)
+ @sync_all_reduce("_num_examples", "_sum_of_errors", "_y_sq_sum", "_y_sum")
def compute(self):
if self._num_examples == 0:
raise NotComputableError("R2Score must have at least one example before it can be computed.")
- return 1 - self._sum_of_errors / (self._y_sq_sum - (self._y_sum ** 2) / self._num_examples)
+ return 1 - self._sum_of_errors.item() / (self._y_sq_sum.item() - (self._y_sum.item() ** 2) / self._num_examples)
|
{"golden_diff": "diff --git a/ignite/contrib/metrics/regression/r2_score.py b/ignite/contrib/metrics/regression/r2_score.py\n--- a/ignite/contrib/metrics/regression/r2_score.py\n+++ b/ignite/contrib/metrics/regression/r2_score.py\n@@ -1,7 +1,10 @@\n+from typing import Callable, Union\n+\n import torch\n \n from ignite.contrib.metrics.regression._base import _BaseRegression\n from ignite.exceptions import NotComputableError\n+from ignite.metrics.metric import reinit__is_reduced, sync_all_reduce\n \n \n class R2Score(_BaseRegression):\n@@ -18,21 +21,32 @@\n - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)` and of type `float32`.\n \"\"\"\n \n+ def __init__(\n+ self, output_transform: Callable = lambda x: x, device: Union[str, torch.device] = torch.device(\"cpu\")\n+ ):\n+ self._num_examples = None\n+ self._sum_of_errors = None\n+ self._y_sq_sum = None\n+ self._y_sum = None\n+ super(R2Score, self).__init__(output_transform, device)\n+\n+ @reinit__is_reduced\n def reset(self):\n self._num_examples = 0\n- self._sum_of_errors = 0\n- self._y_sq_sum = 0\n- self._y_sum = 0\n+ self._sum_of_errors = torch.tensor(0.0, device=self._device)\n+ self._y_sq_sum = torch.tensor(0.0, device=self._device)\n+ self._y_sum = torch.tensor(0.0, device=self._device)\n \n def _update(self, output):\n y_pred, y = output\n self._num_examples += y.shape[0]\n- self._sum_of_errors += torch.sum(torch.pow(y_pred - y, 2)).item()\n+ self._sum_of_errors += torch.sum(torch.pow(y_pred - y, 2)).to(self._device)\n \n- self._y_sum += torch.sum(y).item()\n- self._y_sq_sum += torch.sum(torch.pow(y, 2)).item()\n+ self._y_sum += torch.sum(y).to(self._device)\n+ self._y_sq_sum += torch.sum(torch.pow(y, 2)).to(self._device)\n \n+ @sync_all_reduce(\"_num_examples\", \"_sum_of_errors\", \"_y_sq_sum\", \"_y_sum\")\n def compute(self):\n if self._num_examples == 0:\n raise NotComputableError(\"R2Score must have at least one example before it can be computed.\")\n- return 1 - self._sum_of_errors / (self._y_sq_sum - (self._y_sum ** 2) / self._num_examples)\n+ return 1 - self._sum_of_errors.item() / (self._y_sq_sum.item() - (self._y_sum.item() ** 2) / self._num_examples)\n", "issue": "Improve R2Score metric for DDP\n## \ud83d\ude80 Feature\r\n\r\nCurrent implementation of `R2Score` metric is not compatible with ddp. 
The idea is to improve this metric to work in parallel.\r\n\r\nRelated to #1284\n", "before_files": [{"content": "import torch\n\nfrom ignite.contrib.metrics.regression._base import _BaseRegression\nfrom ignite.exceptions import NotComputableError\n\n\nclass R2Score(_BaseRegression):\n r\"\"\"\n Calculates the R-Squared, the\n `coefficient of determination <https://en.wikipedia.org/wiki/Coefficient_of_determination>`_:\n\n :math:`R^2 = 1 - \\frac{\\sum_{j=1}^n(A_j - P_j)^2}{\\sum_{j=1}^n(A_j - \\bar{A})^2}`,\n\n where :math:`A_j` is the ground truth, :math:`P_j` is the predicted value and\n :math:`\\bar{A}` is the mean of the ground truth.\n\n - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)` and of type `float32`.\n \"\"\"\n\n def reset(self):\n self._num_examples = 0\n self._sum_of_errors = 0\n self._y_sq_sum = 0\n self._y_sum = 0\n\n def _update(self, output):\n y_pred, y = output\n self._num_examples += y.shape[0]\n self._sum_of_errors += torch.sum(torch.pow(y_pred - y, 2)).item()\n\n self._y_sum += torch.sum(y).item()\n self._y_sq_sum += torch.sum(torch.pow(y, 2)).item()\n\n def compute(self):\n if self._num_examples == 0:\n raise NotComputableError(\"R2Score must have at least one example before it can be computed.\")\n return 1 - self._sum_of_errors / (self._y_sq_sum - (self._y_sum ** 2) / self._num_examples)\n", "path": "ignite/contrib/metrics/regression/r2_score.py"}], "after_files": [{"content": "from typing import Callable, Union\n\nimport torch\n\nfrom ignite.contrib.metrics.regression._base import _BaseRegression\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import reinit__is_reduced, sync_all_reduce\n\n\nclass R2Score(_BaseRegression):\n r\"\"\"\n Calculates the R-Squared, the\n `coefficient of determination <https://en.wikipedia.org/wiki/Coefficient_of_determination>`_:\n\n :math:`R^2 = 1 - \\frac{\\sum_{j=1}^n(A_j - P_j)^2}{\\sum_{j=1}^n(A_j - \\bar{A})^2}`,\n\n where :math:`A_j` is the ground truth, :math:`P_j` is the predicted value and\n :math:`\\bar{A}` is the mean of the ground truth.\n\n - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)` and of type `float32`.\n \"\"\"\n\n def __init__(\n self, output_transform: Callable = lambda x: x, device: Union[str, torch.device] = torch.device(\"cpu\")\n ):\n self._num_examples = None\n self._sum_of_errors = None\n self._y_sq_sum = None\n self._y_sum = None\n super(R2Score, self).__init__(output_transform, device)\n\n @reinit__is_reduced\n def reset(self):\n self._num_examples = 0\n self._sum_of_errors = torch.tensor(0.0, device=self._device)\n self._y_sq_sum = torch.tensor(0.0, device=self._device)\n self._y_sum = torch.tensor(0.0, device=self._device)\n\n def _update(self, output):\n y_pred, y = output\n self._num_examples += y.shape[0]\n self._sum_of_errors += torch.sum(torch.pow(y_pred - y, 2)).to(self._device)\n\n self._y_sum += torch.sum(y).to(self._device)\n self._y_sq_sum += torch.sum(torch.pow(y, 2)).to(self._device)\n\n @sync_all_reduce(\"_num_examples\", \"_sum_of_errors\", \"_y_sq_sum\", \"_y_sum\")\n def compute(self):\n if self._num_examples == 0:\n raise NotComputableError(\"R2Score must have at least one example before it can be computed.\")\n return 1 - self._sum_of_errors.item() / (self._y_sq_sum.item() - (self._y_sum.item() ** 2) / self._num_examples)\n", "path": 
"ignite/contrib/metrics/regression/r2_score.py"}]}
| 816 | 672 |
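For context on the patch in the row above: once the accumulators are kept as tensors on `self._device` and `compute` is wrapped in `sync_all_reduce`, the metric behaves the same in a single process and under DDP. The snippet below is an illustrative usage sketch only, not part of the dataset row; it assumes a pytorch-ignite release where `R2Score` is importable from `ignite.contrib.metrics.regression`, and the tensor values are made up.

```python
import torch
from ignite.contrib.metrics.regression import R2Score

# Build the metric; the `device` argument is the one introduced by the patch.
metric = R2Score(device="cpu")

# Feed one batch of (prediction, target) pairs and read the score back.
y_pred = torch.tensor([2.0, 3.0, 4.0])
y_true = torch.tensor([2.1, 2.9, 4.2])
metric.update((y_pred, y_true))   # accumulators stay on the chosen device
print(metric.compute())           # R2 close to 1.0 for these sample values
```

In a DDP job each worker updates its local copy, and the `sync_all_reduce` decorator merges `_num_examples`, `_sum_of_errors`, `_y_sum` and `_y_sq_sum` across workers before the final formula is evaluated.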
gh_patches_debug_1961
|
rasdani/github-patches
|
git_diff
|
carpentries__amy-1046
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Django Debug Toolbar should have SQL panel ON by default
Turning the panel ON allows to spot bottlenecks right on - just by looking at the number of queries (>8? Investigate).
This panel is the most useful and most often used by me. The next one is template variables, which is ON by default already.
Django Debug Toolbar should have SQL panel ON by default
Turning the panel ON allows to spot bottlenecks right on - just by looking at the number of queries (>8? Investigate).
This panel is the most useful and most often used by me. The next one is template variables, which is ON by default already.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `amy/settings.py`
Content:
```
1 """
2 Django settings for amy project.
3
4 For more information on this file, see
5 https://docs.djangoproject.com/en/1.7/topics/settings/
6
7 For the full list of settings and their values, see
8 https://docs.djangoproject.com/en/1.7/ref/settings/
9 """
10
11 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
12 import json
13 import os
14 import sys
15
16 from django.utils.translation import ugettext_lazy as _
17
18 BASE_DIR = os.path.dirname(os.path.dirname(__file__))
19
20
21 # Quick-start development settings - unsuitable for production
22 # See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/
23
24
25 # SECURITY WARNING: don't run with DEBUG turned on in production!
26 DEBUG = json.loads(os.environ.get('AMY_DEBUG', 'true'))
27 # For deployment in production:
28 # AMY_DEBUG=false AMY_SECRET_KEY="..." ./manage.py runserver ...
29
30 if DEBUG:
31 SECRET_KEY = '3l$35+@a%g!(^y^98oi%ei+%+yvtl3y0k^_7-fmx2oj09-ac5@'
32 else:
33 SECRET_KEY = None
34 SECRET_KEY = os.environ.get('AMY_SECRET_KEY', SECRET_KEY)
35
36 # be sure to put these values in your envvars, even for development
37 RECAPTCHA_PUBLIC_KEY = os.environ.get('AMY_RECAPTCHA_PUBLIC_KEY', None)
38 RECAPTCHA_PRIVATE_KEY = os.environ.get('AMY_RECAPTCHA_PRIVATE_KEY', None)
39 RECAPTCHA_USE_SSL = True
40 NOCAPTCHA = True # nicer input
41
42 if DEBUG:
43 # 'PASSED' in the form will always pass the RECAPTCHA test
44 NOCAPTCHA = False # uglier input, but possible to manually enter 'PASSED'
45 os.environ['RECAPTCHA_TESTING'] = 'True'
46 else:
47 # ensure the keys are present on production
48 assert RECAPTCHA_PUBLIC_KEY, 'RECAPTCHA site key not present'
49 assert RECAPTCHA_PRIVATE_KEY, 'RECAPTCHA secure key not present'
50
51 # email settings
52 ADMINS = (
53 ('Sysadmins ML', '[email protected]'),
54 )
55 # "From:" for error messages sent out to ADMINS
56 SERVER_EMAIL = os.environ.get('AMY_SERVER_EMAIL', 'root@localhost')
57
58 # addresses to receive "New workshop request" or "New profile update request"
59 # notifications
60 REQUEST_NOTIFICATIONS_RECIPIENTS = (
61 '[email protected]',
62 )
63 EMAIL_HOST = os.environ.get('AMY_EMAIL_HOST', 'localhost')
64 EMAIL_HOST_USER = os.environ.get('AMY_EMAIL_HOST_USER', '')
65 EMAIL_HOST_PASSWORD = os.environ.get('AMY_EMAIL_HOST_PASSWORD', '')
66 EMAIL_PORT = int(os.environ.get('AMY_EMAIL_PORT', 25))
67 EMAIL_TIMEOUT = 10 # timeout for blocking email operations, in seconds
68 EMAIL_USE_TLS = json.loads(os.environ.get('AMY_EMAIL_USE_TLS', 'false'))
69 EMAIL_USE_SSL = json.loads(os.environ.get('AMY_EMAIL_USE_SSL', 'false'))
70
71 # "From:" for NOT error messages (ie. sent to whoever we want)
72 DEFAULT_FROM_EMAIL = os.environ.get('AMY_DEFAULT_FROM_EMAIL',
73 'webmaster@localhost')
74
75 if DEBUG:
76 # outgoing mails will be stored in `django.core.mail.outbox`
77 EMAIL_BACKEND = 'django.core.mail.backends.locmem.EmailBackend'
78
79 SITE_URL = 'https://amy.software-carpentry.org'
80 if DEBUG:
81 SITE_URL = 'http://127.0.0.1:8000'
82
83 # New template settings (for Django >= 1.8)
84 TEMPLATES = [
85 {
86 'BACKEND': 'django.template.backends.django.DjangoTemplates',
87 'OPTIONS': {
88 'loaders': [
89 'app_namespace.Loader',
90 'django.template.loaders.filesystem.Loader',
91 'django.template.loaders.app_directories.Loader',
92 ],
93 'debug': DEBUG,
94 'context_processors': [
95 # default processors + request processor
96 'django.contrib.auth.context_processors.auth',
97 'django.template.context_processors.debug',
98 'django.template.context_processors.i18n',
99 'django.template.context_processors.media',
100 'django.template.context_processors.request',
101 'django.template.context_processors.static',
102 'django.template.context_processors.tz',
103 'django.contrib.messages.context_processors.messages',
104 # AMY version
105 'workshops.context_processors.version',
106 # GitHub auth
107 'social.apps.django_app.context_processors.backends',
108 'social.apps.django_app.context_processors.login_redirect',
109 ],
110
111 # Warn viewers of invalid template strings
112 'string_if_invalid': 'XXX-unset-variable-XXX',
113 }
114 }
115 ]
116
117 ALLOWED_HOSTS = [
118 'amy.software-carpentry.org',
119 ]
120
121
122 # Application definition
123
124 INSTALLED_APPS = (
125 'django.contrib.auth',
126 'django.contrib.contenttypes',
127 'django.contrib.sessions',
128 'django.contrib.messages',
129 'django.contrib.staticfiles',
130 'workshops',
131 # this should be after 'workshops' because templates in
132 # 'templates/registration/' clash
133 'django.contrib.admin',
134 'crispy_forms',
135 'selectable',
136 'django_countries',
137 'django_filters',
138 'reversion',
139 'rest_framework',
140 'api',
141 'captcha',
142 'compressor',
143 'social.apps.django_app.default',
144 'debug_toolbar',
145 )
146
147 CRISPY_TEMPLATE_PACK = 'bootstrap3'
148
149 MIDDLEWARE_CLASSES = (
150 'debug_toolbar.middleware.DebugToolbarMiddleware',
151 'reversion.middleware.RevisionMiddleware',
152 'django.contrib.sessions.middleware.SessionMiddleware',
153 'django.middleware.common.CommonMiddleware',
154 'django.middleware.csrf.CsrfViewMiddleware',
155 'django.contrib.auth.middleware.AuthenticationMiddleware',
156 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
157 'django.contrib.messages.middleware.MessageMiddleware',
158 'django.middleware.clickjacking.XFrameOptionsMiddleware',
159 'workshops.github_auth.GithubAuthMiddleware',
160 )
161
162 ROOT_URLCONF = 'amy.urls'
163
164 WSGI_APPLICATION = 'amy.wsgi.application'
165
166 from django.contrib.messages import constants as message_constants
167 MESSAGE_TAGS = {
168 message_constants.INFO: 'alert-info',
169 message_constants.SUCCESS: 'alert-success',
170 message_constants.WARNING: 'alert-warning',
171 message_constants.ERROR: 'alert-danger',
172 }
173
174
175 # Database
176 # https://docs.djangoproject.com/en/1.7/ref/settings/#databases
177
178 DATABASES = {
179 'default': {
180 'ENGINE': 'django.db.backends.sqlite3',
181 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
182 'TEST': {},
183 }
184 }
185 if '--keepdb' in sys.argv:
186 # By default, Django uses in-memory sqlite3 database, which is much
187 # faster than sqlite3 database in a file. However, we may want to keep
188 # database between test launches, so that we avoid the overhead of
189 # applying migrations on each test launch.
190 DATABASES['default']['TEST']['NAME'] = 'test_db.sqlite3'
191
192 # Authentication
193 AUTH_USER_MODEL = 'workshops.Person'
194 VALIDATION = 'django.contrib.auth.password_validation.'
195 AUTH_PASSWORD_VALIDATORS = [
196 {
197 'NAME': VALIDATION + 'UserAttributeSimilarityValidator',
198 'OPTIONS': {
199 'user_attributes': ('username', 'personal', 'middle', 'family',
200 'email')
201 }
202 },
203 {
204 'NAME': VALIDATION + 'MinimumLengthValidator',
205 'OPTIONS': {
206 'min_length': 10,
207 }
208 },
209 {
210 'NAME': VALIDATION + 'CommonPasswordValidator',
211 },
212 {
213 'NAME': VALIDATION + 'NumericPasswordValidator',
214 },
215 ]
216
217 # GitHub Auth
218 AUTHENTICATION_BACKENDS = (
219 'social.backends.github.GithubOAuth2',
220 'django.contrib.auth.backends.ModelBackend',
221 )
222 SOCIAL_AUTH_ADMIN_USER_SEARCH_FIELDS = ['github']
223 SOCIAL_AUTH_GITHUB_KEY = os.environ.get('SOCIAL_AUTH_GITHUB_KEY', '').strip()
224 SOCIAL_AUTH_GITHUB_SECRET = os.environ.get('SOCIAL_AUTH_GITHUB_SECRET', '').strip()
225 if not DEBUG and not (SOCIAL_AUTH_GITHUB_KEY and SOCIAL_AUTH_GITHUB_SECRET):
226 print('Logging using github account will *not* work, '
227 'because you didn\'t set SOCIAL_AUTH_GITHUB_KEY and/or '
228 'SOCIAL_AUTH_GITHUB_SECRET environment variables.',
229 file=sys.stderr)
230
231
232 SOCIAL_AUTH_PIPELINE = (
233 'social.pipeline.social_auth.social_details',
234 'social.pipeline.social_auth.social_uid',
235 'social.pipeline.social_auth.auth_allowed',
236 'social.pipeline.social_auth.social_user',
237
238 # If we can't find Person associated with given github account, abort.
239 'workshops.github_auth.abort_if_no_user_found',
240
241 # The default pipeline includes 'social.pipeline.user.create_user' here,
242 # but we don't want to register a new Person when somebody logs in
243 # using GitHub account that is not associated with any Person.
244
245 'social.pipeline.social_auth.associate_user',
246 'social.pipeline.social_auth.load_extra_data',
247 )
248
249 SOCIAL_AUTH_USER_MODEL = 'workshops.Person'
250
251 # Github API token (optional). Setting this token reduces limits and quotes
252 # on Github API.
253
254 GITHUB_API_TOKEN = os.environ.get('GITHUB_API_TOKEN', None)
255
256 # Internationalization
257 # https://docs.djangoproject.com/en/1.7/topics/i18n/
258
259 LANGUAGE_CODE = 'en-us'
260
261 TIME_ZONE = 'EST'
262
263 USE_I18N = True
264
265 USE_L10N = True
266
267 USE_TZ = True
268
269
270 # Static files (CSS, JavaScript, Images)
271 # https://docs.djangoproject.com/en/1.7/howto/static-files/
272
273 STATIC_URL = '/static/'
274 STATIC_ROOT = os.path.join(BASE_DIR, 'static')
275 STATICFILES_DIRS = (
276 os.path.join(BASE_DIR, 'bower_components'),
277 )
278 STATICFILES_FINDERS = [
279 'django.contrib.staticfiles.finders.FileSystemFinder',
280 'django.contrib.staticfiles.finders.AppDirectoriesFinder',
281 'compressor.finders.CompressorFinder',
282 ]
283
284 # if "next" (or "?next") variable is not set when logging in, redirect to
285 # workshops
286 LOGIN_REDIRECT_URL = '/workshops/'
287
288 # here's where @login_required redirects to:
289 LOGIN_URL = '/account/login/'
290
291 # explicitely add European Union as a country
292 COUNTRIES_OVERRIDE = {
293 'EU': _('European Union'),
294 'US': _('United States'),
295 'W3': _('Online'),
296 }
297
298 # settings for REST API
299 REST_FRAMEWORK = {
300 'DEFAULT_PARSER_CLASSES': (
301 'rest_framework.parsers.JSONParser',
302 'rest_framework.parsers.FormParser',
303 'rest_framework.parsers.MultiPartParser',
304 'rest_framework_yaml.parsers.YAMLParser',
305 ),
306 'DEFAULT_RENDERER_CLASSES': (
307 'rest_framework.renderers.JSONRenderer',
308 'rest_framework.renderers.BrowsableAPIRenderer',
309 'rest_framework_yaml.renderers.YAMLRenderer',
310 ),
311
312 'DEFAULT_THROTTLE_CLASSES': (
313 'rest_framework.throttling.AnonRateThrottle',
314 'rest_framework.throttling.UserRateThrottle'
315 ),
316 'DEFAULT_THROTTLE_RATES': {
317 'anon': '50/hour',
318 'user': '200/hour'
319 }
320 }
321
322 LOGGING = {
323 'version': 1,
324 'disable_existing_loggers': False, # merge with default configuration
325 'handlers': {
326 'null': {
327 'class': 'logging.NullHandler',
328 },
329 },
330 'loggers': {
331 # disable "Invalid HTTP_HOST" notifications
332 'django.security.DisallowedHost': {
333 'handlers': ['null'],
334 'propagate': False,
335 },
336 },
337 }
338
339 # weaker hasher brings test speedup according to Django docs:
340 # https://docs.djangoproject.com/en/1.9/topics/testing/overview/#speeding-up-the-tests
341 if DEBUG and 'test' in sys.argv:
342 PASSWORD_HASHERS = (
343 'django.contrib.auth.hashers.MD5PasswordHasher',
344 )
345
346 # Debug Toolbar
347 DEBUG_TOOLBAR_PATCH_SETTINGS = False
348 INTERNAL_IPS = ['127.0.0.1', '::1']
349 DEBUG_TOOLBAR_CONFIG = {
350 # Disable all panels (except for timer) by default in order not to slow
351 # down page loading.
352 'DISABLE_PANELS': [
353 'debug_toolbar.panels.sql.SQLPanel',
354 ],
355 }
356
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/amy/settings.py b/amy/settings.py
--- a/amy/settings.py
+++ b/amy/settings.py
@@ -346,10 +346,3 @@
# Debug Toolbar
DEBUG_TOOLBAR_PATCH_SETTINGS = False
INTERNAL_IPS = ['127.0.0.1', '::1']
-DEBUG_TOOLBAR_CONFIG = {
- # Disable all panels (except for timer) by default in order not to slow
- # down page loading.
- 'DISABLE_PANELS': [
- 'debug_toolbar.panels.sql.SQLPanel',
- ],
-}
|
{"golden_diff": "diff --git a/amy/settings.py b/amy/settings.py\n--- a/amy/settings.py\n+++ b/amy/settings.py\n@@ -346,10 +346,3 @@\n # Debug Toolbar\n DEBUG_TOOLBAR_PATCH_SETTINGS = False\n INTERNAL_IPS = ['127.0.0.1', '::1']\n-DEBUG_TOOLBAR_CONFIG = {\n- # Disable all panels (except for timer) by default in order not to slow\n- # down page loading.\n- 'DISABLE_PANELS': [\n- 'debug_toolbar.panels.sql.SQLPanel',\n- ],\n-}\n", "issue": "Django Debug Toolbar should have SQL panel ON by default\nTurning the panel ON allows to spot bottlenecks right on - just by looking at the number of queries (>8? Investigate).\n\nThis panel is the most useful and most often used by me. The next one is template variables, which is ON by default already.\n\nDjango Debug Toolbar should have SQL panel ON by default\nTurning the panel ON allows to spot bottlenecks right on - just by looking at the number of queries (>8? Investigate).\n\nThis panel is the most useful and most often used by me. The next one is template variables, which is ON by default already.\n\n", "before_files": [{"content": "\"\"\"\nDjango settings for amy project.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.7/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.7/ref/settings/\n\"\"\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nimport json\nimport os\nimport sys\n\nfrom django.utils.translation import ugettext_lazy as _\n\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/\n\n\n# SECURITY WARNING: don't run with DEBUG turned on in production!\nDEBUG = json.loads(os.environ.get('AMY_DEBUG', 'true'))\n# For deployment in production:\n# AMY_DEBUG=false AMY_SECRET_KEY=\"...\" ./manage.py runserver ...\n\nif DEBUG:\n SECRET_KEY = '3l$35+@a%g!(^y^98oi%ei+%+yvtl3y0k^_7-fmx2oj09-ac5@'\nelse:\n SECRET_KEY = None\nSECRET_KEY = os.environ.get('AMY_SECRET_KEY', SECRET_KEY)\n\n# be sure to put these values in your envvars, even for development\nRECAPTCHA_PUBLIC_KEY = os.environ.get('AMY_RECAPTCHA_PUBLIC_KEY', None)\nRECAPTCHA_PRIVATE_KEY = os.environ.get('AMY_RECAPTCHA_PRIVATE_KEY', None)\nRECAPTCHA_USE_SSL = True\nNOCAPTCHA = True # nicer input\n\nif DEBUG:\n # 'PASSED' in the form will always pass the RECAPTCHA test\n NOCAPTCHA = False # uglier input, but possible to manually enter 'PASSED'\n os.environ['RECAPTCHA_TESTING'] = 'True'\nelse:\n # ensure the keys are present on production\n assert RECAPTCHA_PUBLIC_KEY, 'RECAPTCHA site key not present'\n assert RECAPTCHA_PRIVATE_KEY, 'RECAPTCHA secure key not present'\n\n# email settings\nADMINS = (\n ('Sysadmins ML', '[email protected]'),\n)\n# \"From:\" for error messages sent out to ADMINS\nSERVER_EMAIL = os.environ.get('AMY_SERVER_EMAIL', 'root@localhost')\n\n# addresses to receive \"New workshop request\" or \"New profile update request\"\n# notifications\nREQUEST_NOTIFICATIONS_RECIPIENTS = (\n '[email protected]',\n)\nEMAIL_HOST = os.environ.get('AMY_EMAIL_HOST', 'localhost')\nEMAIL_HOST_USER = os.environ.get('AMY_EMAIL_HOST_USER', '')\nEMAIL_HOST_PASSWORD = os.environ.get('AMY_EMAIL_HOST_PASSWORD', '')\nEMAIL_PORT = int(os.environ.get('AMY_EMAIL_PORT', 25))\nEMAIL_TIMEOUT = 10 # timeout for blocking email operations, in seconds\nEMAIL_USE_TLS = json.loads(os.environ.get('AMY_EMAIL_USE_TLS', 'false'))\nEMAIL_USE_SSL = 
json.loads(os.environ.get('AMY_EMAIL_USE_SSL', 'false'))\n\n# \"From:\" for NOT error messages (ie. sent to whoever we want)\nDEFAULT_FROM_EMAIL = os.environ.get('AMY_DEFAULT_FROM_EMAIL',\n 'webmaster@localhost')\n\nif DEBUG:\n # outgoing mails will be stored in `django.core.mail.outbox`\n EMAIL_BACKEND = 'django.core.mail.backends.locmem.EmailBackend'\n\nSITE_URL = 'https://amy.software-carpentry.org'\nif DEBUG:\n SITE_URL = 'http://127.0.0.1:8000'\n\n# New template settings (for Django >= 1.8)\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'OPTIONS': {\n 'loaders': [\n 'app_namespace.Loader',\n 'django.template.loaders.filesystem.Loader',\n 'django.template.loaders.app_directories.Loader',\n ],\n 'debug': DEBUG,\n 'context_processors': [\n # default processors + request processor\n 'django.contrib.auth.context_processors.auth',\n 'django.template.context_processors.debug',\n 'django.template.context_processors.i18n',\n 'django.template.context_processors.media',\n 'django.template.context_processors.request',\n 'django.template.context_processors.static',\n 'django.template.context_processors.tz',\n 'django.contrib.messages.context_processors.messages',\n # AMY version\n 'workshops.context_processors.version',\n # GitHub auth\n 'social.apps.django_app.context_processors.backends',\n 'social.apps.django_app.context_processors.login_redirect',\n ],\n\n # Warn viewers of invalid template strings\n 'string_if_invalid': 'XXX-unset-variable-XXX',\n }\n }\n]\n\nALLOWED_HOSTS = [\n 'amy.software-carpentry.org',\n]\n\n\n# Application definition\n\nINSTALLED_APPS = (\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'workshops',\n # this should be after 'workshops' because templates in\n # 'templates/registration/' clash\n 'django.contrib.admin',\n 'crispy_forms',\n 'selectable',\n 'django_countries',\n 'django_filters',\n 'reversion',\n 'rest_framework',\n 'api',\n 'captcha',\n 'compressor',\n 'social.apps.django_app.default',\n 'debug_toolbar',\n)\n\nCRISPY_TEMPLATE_PACK = 'bootstrap3'\n\nMIDDLEWARE_CLASSES = (\n 'debug_toolbar.middleware.DebugToolbarMiddleware',\n 'reversion.middleware.RevisionMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'workshops.github_auth.GithubAuthMiddleware',\n)\n\nROOT_URLCONF = 'amy.urls'\n\nWSGI_APPLICATION = 'amy.wsgi.application'\n\nfrom django.contrib.messages import constants as message_constants\nMESSAGE_TAGS = {\n message_constants.INFO: 'alert-info',\n message_constants.SUCCESS: 'alert-success',\n message_constants.WARNING: 'alert-warning',\n message_constants.ERROR: 'alert-danger',\n}\n\n\n# Database\n# https://docs.djangoproject.com/en/1.7/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),\n 'TEST': {},\n }\n}\nif '--keepdb' in sys.argv:\n # By default, Django uses in-memory sqlite3 database, which is much\n # faster than sqlite3 database in a file. 
However, we may want to keep\n # database between test launches, so that we avoid the overhead of\n # applying migrations on each test launch.\n DATABASES['default']['TEST']['NAME'] = 'test_db.sqlite3'\n\n# Authentication\nAUTH_USER_MODEL = 'workshops.Person'\nVALIDATION = 'django.contrib.auth.password_validation.'\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': VALIDATION + 'UserAttributeSimilarityValidator',\n 'OPTIONS': {\n 'user_attributes': ('username', 'personal', 'middle', 'family',\n 'email')\n }\n },\n {\n 'NAME': VALIDATION + 'MinimumLengthValidator',\n 'OPTIONS': {\n 'min_length': 10,\n }\n },\n {\n 'NAME': VALIDATION + 'CommonPasswordValidator',\n },\n {\n 'NAME': VALIDATION + 'NumericPasswordValidator',\n },\n]\n\n# GitHub Auth\nAUTHENTICATION_BACKENDS = (\n 'social.backends.github.GithubOAuth2',\n 'django.contrib.auth.backends.ModelBackend',\n)\nSOCIAL_AUTH_ADMIN_USER_SEARCH_FIELDS = ['github']\nSOCIAL_AUTH_GITHUB_KEY = os.environ.get('SOCIAL_AUTH_GITHUB_KEY', '').strip()\nSOCIAL_AUTH_GITHUB_SECRET = os.environ.get('SOCIAL_AUTH_GITHUB_SECRET', '').strip()\nif not DEBUG and not (SOCIAL_AUTH_GITHUB_KEY and SOCIAL_AUTH_GITHUB_SECRET):\n print('Logging using github account will *not* work, '\n 'because you didn\\'t set SOCIAL_AUTH_GITHUB_KEY and/or '\n 'SOCIAL_AUTH_GITHUB_SECRET environment variables.',\n file=sys.stderr)\n\n\nSOCIAL_AUTH_PIPELINE = (\n 'social.pipeline.social_auth.social_details',\n 'social.pipeline.social_auth.social_uid',\n 'social.pipeline.social_auth.auth_allowed',\n 'social.pipeline.social_auth.social_user',\n\n # If we can't find Person associated with given github account, abort.\n 'workshops.github_auth.abort_if_no_user_found',\n\n # The default pipeline includes 'social.pipeline.user.create_user' here,\n # but we don't want to register a new Person when somebody logs in\n # using GitHub account that is not associated with any Person.\n\n 'social.pipeline.social_auth.associate_user',\n 'social.pipeline.social_auth.load_extra_data',\n)\n\nSOCIAL_AUTH_USER_MODEL = 'workshops.Person'\n\n# Github API token (optional). 
Setting this token reduces limits and quotes\n# on Github API.\n\nGITHUB_API_TOKEN = os.environ.get('GITHUB_API_TOKEN', None)\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.7/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'EST'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.7/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'bower_components'),\n)\nSTATICFILES_FINDERS = [\n 'django.contrib.staticfiles.finders.FileSystemFinder',\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n 'compressor.finders.CompressorFinder',\n]\n\n# if \"next\" (or \"?next\") variable is not set when logging in, redirect to\n# workshops\nLOGIN_REDIRECT_URL = '/workshops/'\n\n# here's where @login_required redirects to:\nLOGIN_URL = '/account/login/'\n\n# explicitely add European Union as a country\nCOUNTRIES_OVERRIDE = {\n 'EU': _('European Union'),\n 'US': _('United States'),\n 'W3': _('Online'),\n}\n\n# settings for REST API\nREST_FRAMEWORK = {\n 'DEFAULT_PARSER_CLASSES': (\n 'rest_framework.parsers.JSONParser',\n 'rest_framework.parsers.FormParser',\n 'rest_framework.parsers.MultiPartParser',\n 'rest_framework_yaml.parsers.YAMLParser',\n ),\n 'DEFAULT_RENDERER_CLASSES': (\n 'rest_framework.renderers.JSONRenderer',\n 'rest_framework.renderers.BrowsableAPIRenderer',\n 'rest_framework_yaml.renderers.YAMLRenderer',\n ),\n\n 'DEFAULT_THROTTLE_CLASSES': (\n 'rest_framework.throttling.AnonRateThrottle',\n 'rest_framework.throttling.UserRateThrottle'\n ),\n 'DEFAULT_THROTTLE_RATES': {\n 'anon': '50/hour',\n 'user': '200/hour'\n }\n}\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False, # merge with default configuration\n 'handlers': {\n 'null': {\n 'class': 'logging.NullHandler',\n },\n },\n 'loggers': {\n # disable \"Invalid HTTP_HOST\" notifications\n 'django.security.DisallowedHost': {\n 'handlers': ['null'],\n 'propagate': False,\n },\n },\n}\n\n# weaker hasher brings test speedup according to Django docs:\n# https://docs.djangoproject.com/en/1.9/topics/testing/overview/#speeding-up-the-tests\nif DEBUG and 'test' in sys.argv:\n PASSWORD_HASHERS = (\n 'django.contrib.auth.hashers.MD5PasswordHasher',\n )\n\n# Debug Toolbar\nDEBUG_TOOLBAR_PATCH_SETTINGS = False\nINTERNAL_IPS = ['127.0.0.1', '::1']\nDEBUG_TOOLBAR_CONFIG = {\n # Disable all panels (except for timer) by default in order not to slow\n # down page loading.\n 'DISABLE_PANELS': [\n 'debug_toolbar.panels.sql.SQLPanel',\n ],\n}\n", "path": "amy/settings.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for amy project.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.7/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.7/ref/settings/\n\"\"\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nimport json\nimport os\nimport sys\n\nfrom django.utils.translation import ugettext_lazy as _\n\nBASE_DIR = os.path.dirname(os.path.dirname(__file__))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/\n\n\n# SECURITY WARNING: don't run with DEBUG turned on in production!\nDEBUG = json.loads(os.environ.get('AMY_DEBUG', 'true'))\n# For deployment in production:\n# AMY_DEBUG=false AMY_SECRET_KEY=\"...\" ./manage.py runserver 
...\n\nif DEBUG:\n SECRET_KEY = '3l$35+@a%g!(^y^98oi%ei+%+yvtl3y0k^_7-fmx2oj09-ac5@'\nelse:\n SECRET_KEY = None\nSECRET_KEY = os.environ.get('AMY_SECRET_KEY', SECRET_KEY)\n\n# be sure to put these values in your envvars, even for development\nRECAPTCHA_PUBLIC_KEY = os.environ.get('AMY_RECAPTCHA_PUBLIC_KEY', None)\nRECAPTCHA_PRIVATE_KEY = os.environ.get('AMY_RECAPTCHA_PRIVATE_KEY', None)\nRECAPTCHA_USE_SSL = True\nNOCAPTCHA = True # nicer input\n\nif DEBUG:\n # 'PASSED' in the form will always pass the RECAPTCHA test\n NOCAPTCHA = False # uglier input, but possible to manually enter 'PASSED'\n os.environ['RECAPTCHA_TESTING'] = 'True'\nelse:\n # ensure the keys are present on production\n assert RECAPTCHA_PUBLIC_KEY, 'RECAPTCHA site key not present'\n assert RECAPTCHA_PRIVATE_KEY, 'RECAPTCHA secure key not present'\n\n# email settings\nADMINS = (\n ('Sysadmins ML', '[email protected]'),\n)\n# \"From:\" for error messages sent out to ADMINS\nSERVER_EMAIL = os.environ.get('AMY_SERVER_EMAIL', 'root@localhost')\n\n# addresses to receive \"New workshop request\" or \"New profile update request\"\n# notifications\nREQUEST_NOTIFICATIONS_RECIPIENTS = (\n '[email protected]',\n)\nEMAIL_HOST = os.environ.get('AMY_EMAIL_HOST', 'localhost')\nEMAIL_HOST_USER = os.environ.get('AMY_EMAIL_HOST_USER', '')\nEMAIL_HOST_PASSWORD = os.environ.get('AMY_EMAIL_HOST_PASSWORD', '')\nEMAIL_PORT = int(os.environ.get('AMY_EMAIL_PORT', 25))\nEMAIL_TIMEOUT = 10 # timeout for blocking email operations, in seconds\nEMAIL_USE_TLS = json.loads(os.environ.get('AMY_EMAIL_USE_TLS', 'false'))\nEMAIL_USE_SSL = json.loads(os.environ.get('AMY_EMAIL_USE_SSL', 'false'))\n\n# \"From:\" for NOT error messages (ie. sent to whoever we want)\nDEFAULT_FROM_EMAIL = os.environ.get('AMY_DEFAULT_FROM_EMAIL',\n 'webmaster@localhost')\n\nif DEBUG:\n # outgoing mails will be stored in `django.core.mail.outbox`\n EMAIL_BACKEND = 'django.core.mail.backends.locmem.EmailBackend'\n\nSITE_URL = 'https://amy.software-carpentry.org'\nif DEBUG:\n SITE_URL = 'http://127.0.0.1:8000'\n\n# New template settings (for Django >= 1.8)\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'OPTIONS': {\n 'loaders': [\n 'app_namespace.Loader',\n 'django.template.loaders.filesystem.Loader',\n 'django.template.loaders.app_directories.Loader',\n ],\n 'debug': DEBUG,\n 'context_processors': [\n # default processors + request processor\n 'django.contrib.auth.context_processors.auth',\n 'django.template.context_processors.debug',\n 'django.template.context_processors.i18n',\n 'django.template.context_processors.media',\n 'django.template.context_processors.request',\n 'django.template.context_processors.static',\n 'django.template.context_processors.tz',\n 'django.contrib.messages.context_processors.messages',\n # AMY version\n 'workshops.context_processors.version',\n # GitHub auth\n 'social.apps.django_app.context_processors.backends',\n 'social.apps.django_app.context_processors.login_redirect',\n ],\n\n # Warn viewers of invalid template strings\n 'string_if_invalid': 'XXX-unset-variable-XXX',\n }\n }\n]\n\nALLOWED_HOSTS = [\n 'amy.software-carpentry.org',\n]\n\n\n# Application definition\n\nINSTALLED_APPS = (\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'workshops',\n # this should be after 'workshops' because templates in\n # 'templates/registration/' clash\n 'django.contrib.admin',\n 'crispy_forms',\n 'selectable',\n 
'django_countries',\n 'django_filters',\n 'reversion',\n 'rest_framework',\n 'api',\n 'captcha',\n 'compressor',\n 'social.apps.django_app.default',\n 'debug_toolbar',\n)\n\nCRISPY_TEMPLATE_PACK = 'bootstrap3'\n\nMIDDLEWARE_CLASSES = (\n 'debug_toolbar.middleware.DebugToolbarMiddleware',\n 'reversion.middleware.RevisionMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'workshops.github_auth.GithubAuthMiddleware',\n)\n\nROOT_URLCONF = 'amy.urls'\n\nWSGI_APPLICATION = 'amy.wsgi.application'\n\nfrom django.contrib.messages import constants as message_constants\nMESSAGE_TAGS = {\n message_constants.INFO: 'alert-info',\n message_constants.SUCCESS: 'alert-success',\n message_constants.WARNING: 'alert-warning',\n message_constants.ERROR: 'alert-danger',\n}\n\n\n# Database\n# https://docs.djangoproject.com/en/1.7/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.sqlite3',\n 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),\n 'TEST': {},\n }\n}\nif '--keepdb' in sys.argv:\n # By default, Django uses in-memory sqlite3 database, which is much\n # faster than sqlite3 database in a file. However, we may want to keep\n # database between test launches, so that we avoid the overhead of\n # applying migrations on each test launch.\n DATABASES['default']['TEST']['NAME'] = 'test_db.sqlite3'\n\n# Authentication\nAUTH_USER_MODEL = 'workshops.Person'\nVALIDATION = 'django.contrib.auth.password_validation.'\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': VALIDATION + 'UserAttributeSimilarityValidator',\n 'OPTIONS': {\n 'user_attributes': ('username', 'personal', 'middle', 'family',\n 'email')\n }\n },\n {\n 'NAME': VALIDATION + 'MinimumLengthValidator',\n 'OPTIONS': {\n 'min_length': 10,\n }\n },\n {\n 'NAME': VALIDATION + 'CommonPasswordValidator',\n },\n {\n 'NAME': VALIDATION + 'NumericPasswordValidator',\n },\n]\n\n# GitHub Auth\nAUTHENTICATION_BACKENDS = (\n 'social.backends.github.GithubOAuth2',\n 'django.contrib.auth.backends.ModelBackend',\n)\nSOCIAL_AUTH_ADMIN_USER_SEARCH_FIELDS = ['github']\nSOCIAL_AUTH_GITHUB_KEY = os.environ.get('SOCIAL_AUTH_GITHUB_KEY', '').strip()\nSOCIAL_AUTH_GITHUB_SECRET = os.environ.get('SOCIAL_AUTH_GITHUB_SECRET', '').strip()\nif not DEBUG and not (SOCIAL_AUTH_GITHUB_KEY and SOCIAL_AUTH_GITHUB_SECRET):\n print('Logging using github account will *not* work, '\n 'because you didn\\'t set SOCIAL_AUTH_GITHUB_KEY and/or '\n 'SOCIAL_AUTH_GITHUB_SECRET environment variables.',\n file=sys.stderr)\n\n\nSOCIAL_AUTH_PIPELINE = (\n 'social.pipeline.social_auth.social_details',\n 'social.pipeline.social_auth.social_uid',\n 'social.pipeline.social_auth.auth_allowed',\n 'social.pipeline.social_auth.social_user',\n\n # If we can't find Person associated with given github account, abort.\n 'workshops.github_auth.abort_if_no_user_found',\n\n # The default pipeline includes 'social.pipeline.user.create_user' here,\n # but we don't want to register a new Person when somebody logs in\n # using GitHub account that is not associated with any Person.\n\n 'social.pipeline.social_auth.associate_user',\n 'social.pipeline.social_auth.load_extra_data',\n)\n\nSOCIAL_AUTH_USER_MODEL = 'workshops.Person'\n\n# Github API 
token (optional). Setting this token reduces limits and quotes\n# on Github API.\n\nGITHUB_API_TOKEN = os.environ.get('GITHUB_API_TOKEN', None)\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.7/topics/i18n/\n\nLANGUAGE_CODE = 'en-us'\n\nTIME_ZONE = 'EST'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.7/howto/static-files/\n\nSTATIC_URL = '/static/'\nSTATIC_ROOT = os.path.join(BASE_DIR, 'static')\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'bower_components'),\n)\nSTATICFILES_FINDERS = [\n 'django.contrib.staticfiles.finders.FileSystemFinder',\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n 'compressor.finders.CompressorFinder',\n]\n\n# if \"next\" (or \"?next\") variable is not set when logging in, redirect to\n# workshops\nLOGIN_REDIRECT_URL = '/workshops/'\n\n# here's where @login_required redirects to:\nLOGIN_URL = '/account/login/'\n\n# explicitely add European Union as a country\nCOUNTRIES_OVERRIDE = {\n 'EU': _('European Union'),\n 'US': _('United States'),\n 'W3': _('Online'),\n}\n\n# settings for REST API\nREST_FRAMEWORK = {\n 'DEFAULT_PARSER_CLASSES': (\n 'rest_framework.parsers.JSONParser',\n 'rest_framework.parsers.FormParser',\n 'rest_framework.parsers.MultiPartParser',\n 'rest_framework_yaml.parsers.YAMLParser',\n ),\n 'DEFAULT_RENDERER_CLASSES': (\n 'rest_framework.renderers.JSONRenderer',\n 'rest_framework.renderers.BrowsableAPIRenderer',\n 'rest_framework_yaml.renderers.YAMLRenderer',\n ),\n\n 'DEFAULT_THROTTLE_CLASSES': (\n 'rest_framework.throttling.AnonRateThrottle',\n 'rest_framework.throttling.UserRateThrottle'\n ),\n 'DEFAULT_THROTTLE_RATES': {\n 'anon': '50/hour',\n 'user': '200/hour'\n }\n}\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False, # merge with default configuration\n 'handlers': {\n 'null': {\n 'class': 'logging.NullHandler',\n },\n },\n 'loggers': {\n # disable \"Invalid HTTP_HOST\" notifications\n 'django.security.DisallowedHost': {\n 'handlers': ['null'],\n 'propagate': False,\n },\n },\n}\n\n# weaker hasher brings test speedup according to Django docs:\n# https://docs.djangoproject.com/en/1.9/topics/testing/overview/#speeding-up-the-tests\nif DEBUG and 'test' in sys.argv:\n PASSWORD_HASHERS = (\n 'django.contrib.auth.hashers.MD5PasswordHasher',\n )\n\n# Debug Toolbar\nDEBUG_TOOLBAR_PATCH_SETTINGS = False\nINTERNAL_IPS = ['127.0.0.1', '::1']\n", "path": "amy/settings.py"}]}
| 3,996 | 130 |
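A short note on why the fix in the row above is just a deletion: django-debug-toolbar includes the SQL panel in its default panel set, so removing the project-level `DEBUG_TOOLBAR_CONFIG` override is enough to turn it back on. If a project still wants to keep some other panel off without touching SQL, the same setting can list only that panel. The fragment below is a hedged sketch, not taken from the AMY settings; the panel named is the one the toolbar itself tends to disable by default in most releases.

```python
# settings.py fragment -- illustrative only, not part of the dataset row above.
DEBUG_TOOLBAR_CONFIG = {
    # Keep the SQL panel active; only switch off the redirects interceptor.
    "DISABLE_PANELS": [
        "debug_toolbar.panels.redirects.RedirectsPanel",
    ],
}
```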
gh_patches_debug_27900
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-center-index-1707
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[request] zstd/1.4.5
### Package Details
* Package Name/Version: **zstd/1.4.5**
* Changelog: **https://github.com/facebook/zstd/releases/tag/v1.4.5**
The above mentioned version is newly released by the upstream project and not yet available as a recipe. Please add this version.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/zstd/all/conanfile.py`
Content:
```
1 import os
2 from conans import ConanFile, CMake, tools
3
4
5 class ZstdConan(ConanFile):
6 name = "zstd"
7 url = "https://github.com/conan-io/conan-center-index"
8 homepage = "https://github.com/facebook/zstd"
9 description = "Zstandard - Fast real-time compression algorithm"
10 topics = ("conan", "zstd", "compression", "algorithm", "decoder")
11 license = "BSD-3-Clause"
12 exports_sources = ['CMakeLists.txt']
13 generators = 'cmake'
14 settings = "os", "arch", "compiler", "build_type"
15 options = {"shared": [True, False], "fPIC": [True, False]}
16 default_options = {"shared": False, "fPIC": True}
17
18 @property
19 def _source_subfolder(self):
20 return "source_subfolder"
21
22 def source(self):
23 tools.get(**self.conan_data["sources"][self.version])
24 extracted_dir = self.name + "-" + self.version
25 os.rename(extracted_dir, self._source_subfolder)
26
27 def config_options(self):
28 if self.settings.os == "Windows":
29 del self.options.fPIC
30
31 def configure(self):
32 del self.settings.compiler.libcxx
33 del self.settings.compiler.cppstd
34
35 def _configure_cmake(self):
36 cmake = CMake(self)
37 cmake.definitions["ZSTD_BUILD_PROGRAMS"] = False
38 cmake.definitions["ZSTD_BUILD_STATIC"] = not self.options.shared
39 cmake.definitions["ZSTD_BUILD_SHARED"] = self.options.shared
40 cmake.configure()
41 return cmake
42
43 def build(self):
44 cmake = self._configure_cmake()
45 cmake.build()
46
47 def package(self):
48 self.copy(pattern="LICENSE", dst="licenses", src=self._source_subfolder)
49 cmake = self._configure_cmake()
50 cmake.install()
51 tools.rmdir(os.path.join(self.package_folder, "lib", "pkgconfig"))
52
53 def package_info(self):
54 self.cpp_info.libs = tools.collect_libs(self)
55 if self.settings.os == "Linux":
56 self.cpp_info.system_libs.append("pthread")
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/recipes/zstd/all/conanfile.py b/recipes/zstd/all/conanfile.py
--- a/recipes/zstd/all/conanfile.py
+++ b/recipes/zstd/all/conanfile.py
@@ -9,7 +9,7 @@
description = "Zstandard - Fast real-time compression algorithm"
topics = ("conan", "zstd", "compression", "algorithm", "decoder")
license = "BSD-3-Clause"
- exports_sources = ['CMakeLists.txt']
+ exports_sources = ['CMakeLists.txt', "patches/**"]
generators = 'cmake'
settings = "os", "arch", "compiler", "build_type"
options = {"shared": [True, False], "fPIC": [True, False]}
@@ -40,7 +40,12 @@
cmake.configure()
return cmake
+ def _patch_sources(self):
+ for patch in self.conan_data.get("patches", {}).get(self.version, []):
+ tools.patch(**patch)
+
def build(self):
+ self._patch_sources()
cmake = self._configure_cmake()
cmake.build()
@@ -48,6 +53,7 @@
self.copy(pattern="LICENSE", dst="licenses", src=self._source_subfolder)
cmake = self._configure_cmake()
cmake.install()
+ tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
tools.rmdir(os.path.join(self.package_folder, "lib", "pkgconfig"))
def package_info(self):
|
{"golden_diff": "diff --git a/recipes/zstd/all/conanfile.py b/recipes/zstd/all/conanfile.py\n--- a/recipes/zstd/all/conanfile.py\n+++ b/recipes/zstd/all/conanfile.py\n@@ -9,7 +9,7 @@\n description = \"Zstandard - Fast real-time compression algorithm\"\n topics = (\"conan\", \"zstd\", \"compression\", \"algorithm\", \"decoder\")\n license = \"BSD-3-Clause\"\n- exports_sources = ['CMakeLists.txt']\n+ exports_sources = ['CMakeLists.txt', \"patches/**\"]\n generators = 'cmake'\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\"shared\": [True, False], \"fPIC\": [True, False]}\n@@ -40,7 +40,12 @@\n cmake.configure()\n return cmake\n \n+ def _patch_sources(self):\n+ for patch in self.conan_data.get(\"patches\", {}).get(self.version, []):\n+ tools.patch(**patch)\n+\n def build(self):\n+ self._patch_sources()\n cmake = self._configure_cmake()\n cmake.build()\n \n@@ -48,6 +53,7 @@\n self.copy(pattern=\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n+ tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"pkgconfig\"))\n \n def package_info(self):\n", "issue": "[request] zstd/1.4.5\n### Package Details\r\n * Package Name/Version: **zstd/1.4.5**\r\n * Changelog: **https://github.com/facebook/zstd/releases/tag/v1.4.5**\r\n\r\n\r\nThe above mentioned version is newly released by the upstream project and not yet available as a recipe. Please add this version.\r\n\n", "before_files": [{"content": "import os\nfrom conans import ConanFile, CMake, tools\n\n\nclass ZstdConan(ConanFile):\n name = \"zstd\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/facebook/zstd\"\n description = \"Zstandard - Fast real-time compression algorithm\"\n topics = (\"conan\", \"zstd\", \"compression\", \"algorithm\", \"decoder\")\n license = \"BSD-3-Clause\"\n exports_sources = ['CMakeLists.txt']\n generators = 'cmake'\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\"shared\": [True, False], \"fPIC\": [True, False]}\n default_options = {\"shared\": False, \"fPIC\": True}\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = self.name + \"-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"ZSTD_BUILD_PROGRAMS\"] = False\n cmake.definitions[\"ZSTD_BUILD_STATIC\"] = not self.options.shared\n cmake.definitions[\"ZSTD_BUILD_SHARED\"] = self.options.shared\n cmake.configure()\n return cmake\n\n def build(self):\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"pkgconfig\"))\n\n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"pthread\")\n", "path": "recipes/zstd/all/conanfile.py"}], "after_files": [{"content": "import os\nfrom conans import ConanFile, CMake, tools\n\n\nclass ZstdConan(ConanFile):\n name = 
\"zstd\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/facebook/zstd\"\n description = \"Zstandard - Fast real-time compression algorithm\"\n topics = (\"conan\", \"zstd\", \"compression\", \"algorithm\", \"decoder\")\n license = \"BSD-3-Clause\"\n exports_sources = ['CMakeLists.txt', \"patches/**\"]\n generators = 'cmake'\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\"shared\": [True, False], \"fPIC\": [True, False]}\n default_options = {\"shared\": False, \"fPIC\": True}\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = self.name + \"-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.definitions[\"ZSTD_BUILD_PROGRAMS\"] = False\n cmake.definitions[\"ZSTD_BUILD_STATIC\"] = not self.options.shared\n cmake.definitions[\"ZSTD_BUILD_SHARED\"] = self.options.shared\n cmake.configure()\n return cmake\n\n def _patch_sources(self):\n for patch in self.conan_data.get(\"patches\", {}).get(self.version, []):\n tools.patch(**patch)\n\n def build(self):\n self._patch_sources()\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(pattern=\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"pkgconfig\"))\n\n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"pthread\")\n", "path": "recipes/zstd/all/conanfile.py"}]}
| 913 | 339 |
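The version bump in the row above follows the usual conan-center pattern: new sources plus optional per-version patches, applied from `conan_data` before the CMake build. The sketch below is standalone and illustrative; it only mirrors the shape of data the `_patch_sources` loop in the diff expects, and the patch file name is an invented placeholder, not a real patch from the recipe.

```python
# Illustrative only: this dict mimics what conandata.yml exposes as `self.conan_data`.
conan_data = {
    "patches": {
        "1.4.5": [
            {"patch_file": "patches/0001-example-cmake-fix.patch",
             "base_path": "source_subfolder"},
        ],
    },
}

version = "1.4.5"
for patch in conan_data.get("patches", {}).get(version, []):
    # In the recipe this line is `tools.patch(**patch)`.
    print("would apply", patch["patch_file"], "relative to", patch["base_path"])
```

Versions without an entry under `patches` simply skip the loop, which is why the helper is safe to call unconditionally from `build()`.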
gh_patches_debug_29656
|
rasdani/github-patches
|
git_diff
|
crytic__slither-812
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update solc-version detector with Solidity 0.7
Currently, the recommended versions are:
```
- 0.5.11 - 0.5.13,
- 0.5.15 - 0.5.17,
- 0.6.8,
- 0.6.10 - 0.6.11. Use a simple pragma version that allows any of these versions.
```
We need to review the 0.7.x branch and update the detector (including 0.6/0.8)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/detectors/attributes/incorrect_solc.py`
Content:
```
1 """
2 Check if an incorrect version of solc is used
3 """
4
5 import re
6 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
7 from slither.formatters.attributes.incorrect_solc import custom_format
8
9 # group:
10 # 0: ^ > >= < <= (optional)
11 # 1: ' ' (optional)
12 # 2: version number
13 # 3: version number
14 # 4: version number
15
16 # pylint: disable=anomalous-backslash-in-string
17 PATTERN = re.compile("(\^|>|>=|<|<=)?([ ]+)?(\d+)\.(\d+)\.(\d+)")
18
19
20 class IncorrectSolc(AbstractDetector):
21 """
22 Check if an old version of solc is used
23 """
24
25 ARGUMENT = "solc-version"
26 HELP = "Incorrect Solidity version"
27 IMPACT = DetectorClassification.INFORMATIONAL
28 CONFIDENCE = DetectorClassification.HIGH
29
30 WIKI = "https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity"
31
32 WIKI_TITLE = "Incorrect versions of Solidity"
33 WIKI_DESCRIPTION = """
34 `solc` frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.
35 We also recommend avoiding complex `pragma` statement."""
36 WIKI_RECOMMENDATION = """
37 Deploy with any of the following Solidity versions:
38 - 0.5.11 - 0.5.13,
39 - 0.5.15 - 0.5.17,
40 - 0.6.8,
41 - 0.6.10 - 0.6.11.
42 Use a simple pragma version that allows any of these versions.
43 Consider using the latest version of Solidity for testing."""
44
45 COMPLEX_PRAGMA_TXT = "is too complex"
46 OLD_VERSION_TXT = "allows old versions"
47 LESS_THAN_TXT = "uses lesser than"
48
49 TOO_RECENT_VERSION_TXT = (
50 "necessitates a version too recent to be trusted. Consider deploying with 0.6.11"
51 )
52 BUGGY_VERSION_TXT = (
53 "is known to contain severe issues (https://solidity.readthedocs.io/en/latest/bugs.html)"
54 )
55
56 # Indicates the allowed versions. Must be formatted in increasing order.
57 ALLOWED_VERSIONS = [
58 "0.5.11",
59 "0.5.12",
60 "0.5.13",
61 "0.5.15",
62 "0.5.16",
63 "0.5.17",
64 "0.6.8",
65 "0.6.10",
66 "0.6.11",
67 ]
68
69 # Indicates the versions that should not be used.
70 BUGGY_VERSIONS = [
71 "0.4.22",
72 "^0.4.22",
73 "0.5.5",
74 "^0.5.5",
75 "0.5.6",
76 "^0.5.6",
77 "0.5.14",
78 "^0.5.14",
79 "0.6.9",
80 "^0.6.9",
81 ]
82
83 def _check_version(self, version):
84 op = version[0]
85 if op and op not in [">", ">=", "^"]:
86 return self.LESS_THAN_TXT
87 version_number = ".".join(version[2:])
88 if version_number not in self.ALLOWED_VERSIONS:
89 if list(map(int, version[2:])) > list(map(int, self.ALLOWED_VERSIONS[-1].split("."))):
90 return self.TOO_RECENT_VERSION_TXT
91 return self.OLD_VERSION_TXT
92 return None
93
94 def _check_pragma(self, version):
95 if version in self.BUGGY_VERSIONS:
96 return self.BUGGY_VERSION_TXT
97 versions = PATTERN.findall(version)
98 if len(versions) == 1:
99 version = versions[0]
100 return self._check_version(version)
101 if len(versions) == 2:
102 version_left = versions[0]
103 version_right = versions[1]
104 # Only allow two elements if the second one is
105 # <0.5.0 or <0.6.0
106 if version_right not in [
107 ("<", "", "0", "5", "0"),
108 ("<", "", "0", "6", "0"),
109 ("<", "", "0", "7", "0"),
110 ]:
111 return self.COMPLEX_PRAGMA_TXT
112 return self._check_version(version_left)
113 return self.COMPLEX_PRAGMA_TXT
114
115 def _detect(self):
116 """
117 Detects pragma statements that allow for outdated solc versions.
118 :return: Returns the relevant JSON data for the findings.
119 """
120 # Detect all version related pragmas and check if they are disallowed.
121 results = []
122 pragma = self.slither.pragma_directives
123 disallowed_pragmas = []
124
125 for p in pragma:
126 # Skip any pragma directives which do not refer to version
127 if len(p.directive) < 1 or p.directive[0] != "solidity":
128 continue
129
130 # This is version, so we test if this is disallowed.
131 reason = self._check_pragma(p.version)
132 if reason:
133 disallowed_pragmas.append((reason, p))
134
135 # If we found any disallowed pragmas, we output our findings.
136 if disallowed_pragmas:
137 for (reason, p) in disallowed_pragmas:
138 info = ["Pragma version", p, f" {reason}\n"]
139
140 json = self.generate_result(info)
141
142 results.append(json)
143
144 if self.slither.crytic_compile:
145 if self.slither.crytic_compile.compiler_version:
146 if (
147 self.slither.crytic_compile.compiler_version.version
148 not in self.ALLOWED_VERSIONS
149 ):
150 info = [
151 "solc-",
152 self.slither.crytic_compile.compiler_version.version,
153 " is not recommended for deployment\n",
154 ]
155
156 json = self.generate_result(info)
157
158 # TODO: Once crytic-compile adds config file info, add a source mapping element pointing to
159 # the line in the config that specifies the problematic version of solc
160
161 results.append(json)
162
163 return results
164
165 @staticmethod
166 def _format(slither, result):
167 custom_format(slither, result)
168
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/slither/detectors/attributes/incorrect_solc.py b/slither/detectors/attributes/incorrect_solc.py
--- a/slither/detectors/attributes/incorrect_solc.py
+++ b/slither/detectors/attributes/incorrect_solc.py
@@ -35,10 +35,9 @@
We also recommend avoiding complex `pragma` statement."""
WIKI_RECOMMENDATION = """
Deploy with any of the following Solidity versions:
-- 0.5.11 - 0.5.13,
-- 0.5.15 - 0.5.17,
-- 0.6.8,
-- 0.6.10 - 0.6.11.
+- 0.5.16 - 0.5.17
+- 0.6.11 - 0.6.12
+- 0.7.5 - 0.7.6
Use a simple pragma version that allows any of these versions.
Consider using the latest version of Solidity for testing."""
@@ -47,7 +46,7 @@
LESS_THAN_TXT = "uses lesser than"
TOO_RECENT_VERSION_TXT = (
- "necessitates a version too recent to be trusted. Consider deploying with 0.6.11"
+ "necessitates a version too recent to be trusted. Consider deploying with 0.6.12/0.7.6"
)
BUGGY_VERSION_TXT = (
"is known to contain severe issues (https://solidity.readthedocs.io/en/latest/bugs.html)"
@@ -55,15 +54,12 @@
# Indicates the allowed versions. Must be formatted in increasing order.
ALLOWED_VERSIONS = [
- "0.5.11",
- "0.5.12",
- "0.5.13",
- "0.5.15",
"0.5.16",
"0.5.17",
- "0.6.8",
- "0.6.10",
"0.6.11",
+ "0.6.12",
+ "0.7.5",
+ "0.7.6",
]
# Indicates the versions that should not be used.
|
{"golden_diff": "diff --git a/slither/detectors/attributes/incorrect_solc.py b/slither/detectors/attributes/incorrect_solc.py\n--- a/slither/detectors/attributes/incorrect_solc.py\n+++ b/slither/detectors/attributes/incorrect_solc.py\n@@ -35,10 +35,9 @@\n We also recommend avoiding complex `pragma` statement.\"\"\"\n WIKI_RECOMMENDATION = \"\"\"\n Deploy with any of the following Solidity versions:\n-- 0.5.11 - 0.5.13,\n-- 0.5.15 - 0.5.17,\n-- 0.6.8,\n-- 0.6.10 - 0.6.11.\n+- 0.5.16 - 0.5.17\n+- 0.6.11 - 0.6.12\n+- 0.7.5 - 0.7.6\n Use a simple pragma version that allows any of these versions.\n Consider using the latest version of Solidity for testing.\"\"\"\n \n@@ -47,7 +46,7 @@\n LESS_THAN_TXT = \"uses lesser than\"\n \n TOO_RECENT_VERSION_TXT = (\n- \"necessitates a version too recent to be trusted. Consider deploying with 0.6.11\"\n+ \"necessitates a version too recent to be trusted. Consider deploying with 0.6.12/0.7.6\"\n )\n BUGGY_VERSION_TXT = (\n \"is known to contain severe issues (https://solidity.readthedocs.io/en/latest/bugs.html)\"\n@@ -55,15 +54,12 @@\n \n # Indicates the allowed versions. Must be formatted in increasing order.\n ALLOWED_VERSIONS = [\n- \"0.5.11\",\n- \"0.5.12\",\n- \"0.5.13\",\n- \"0.5.15\",\n \"0.5.16\",\n \"0.5.17\",\n- \"0.6.8\",\n- \"0.6.10\",\n \"0.6.11\",\n+ \"0.6.12\",\n+ \"0.7.5\",\n+ \"0.7.6\",\n ]\n \n # Indicates the versions that should not be used.\n", "issue": "Update solc-version detector with Solidity 0.7\nCurrently, the recommended versions are:\r\n```\r\n- 0.5.11 - 0.5.13,\r\n- 0.5.15 - 0.5.17,\r\n- 0.6.8,\r\n- 0.6.10 - 0.6.11. Use a simple pragma version that allows any of these versions. \r\n```\r\n\r\nWe need to review the 0.7.x branch and update the detector (including 0.6/0.8)\n", "before_files": [{"content": "\"\"\"\n Check if an incorrect version of solc is used\n\"\"\"\n\nimport re\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\nfrom slither.formatters.attributes.incorrect_solc import custom_format\n\n# group:\n# 0: ^ > >= < <= (optional)\n# 1: ' ' (optional)\n# 2: version number\n# 3: version number\n# 4: version number\n\n# pylint: disable=anomalous-backslash-in-string\nPATTERN = re.compile(\"(\\^|>|>=|<|<=)?([ ]+)?(\\d+)\\.(\\d+)\\.(\\d+)\")\n\n\nclass IncorrectSolc(AbstractDetector):\n \"\"\"\n Check if an old version of solc is used\n \"\"\"\n\n ARGUMENT = \"solc-version\"\n HELP = \"Incorrect Solidity version\"\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = \"https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity\"\n\n WIKI_TITLE = \"Incorrect versions of Solidity\"\n WIKI_DESCRIPTION = \"\"\"\n`solc` frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.\nWe also recommend avoiding complex `pragma` statement.\"\"\"\n WIKI_RECOMMENDATION = \"\"\"\nDeploy with any of the following Solidity versions:\n- 0.5.11 - 0.5.13,\n- 0.5.15 - 0.5.17,\n- 0.6.8,\n- 0.6.10 - 0.6.11.\nUse a simple pragma version that allows any of these versions.\nConsider using the latest version of Solidity for testing.\"\"\"\n\n COMPLEX_PRAGMA_TXT = \"is too complex\"\n OLD_VERSION_TXT = \"allows old versions\"\n LESS_THAN_TXT = \"uses lesser than\"\n\n TOO_RECENT_VERSION_TXT = (\n \"necessitates a version too recent to be trusted. 
Consider deploying with 0.6.11\"\n )\n BUGGY_VERSION_TXT = (\n \"is known to contain severe issues (https://solidity.readthedocs.io/en/latest/bugs.html)\"\n )\n\n # Indicates the allowed versions. Must be formatted in increasing order.\n ALLOWED_VERSIONS = [\n \"0.5.11\",\n \"0.5.12\",\n \"0.5.13\",\n \"0.5.15\",\n \"0.5.16\",\n \"0.5.17\",\n \"0.6.8\",\n \"0.6.10\",\n \"0.6.11\",\n ]\n\n # Indicates the versions that should not be used.\n BUGGY_VERSIONS = [\n \"0.4.22\",\n \"^0.4.22\",\n \"0.5.5\",\n \"^0.5.5\",\n \"0.5.6\",\n \"^0.5.6\",\n \"0.5.14\",\n \"^0.5.14\",\n \"0.6.9\",\n \"^0.6.9\",\n ]\n\n def _check_version(self, version):\n op = version[0]\n if op and op not in [\">\", \">=\", \"^\"]:\n return self.LESS_THAN_TXT\n version_number = \".\".join(version[2:])\n if version_number not in self.ALLOWED_VERSIONS:\n if list(map(int, version[2:])) > list(map(int, self.ALLOWED_VERSIONS[-1].split(\".\"))):\n return self.TOO_RECENT_VERSION_TXT\n return self.OLD_VERSION_TXT\n return None\n\n def _check_pragma(self, version):\n if version in self.BUGGY_VERSIONS:\n return self.BUGGY_VERSION_TXT\n versions = PATTERN.findall(version)\n if len(versions) == 1:\n version = versions[0]\n return self._check_version(version)\n if len(versions) == 2:\n version_left = versions[0]\n version_right = versions[1]\n # Only allow two elements if the second one is\n # <0.5.0 or <0.6.0\n if version_right not in [\n (\"<\", \"\", \"0\", \"5\", \"0\"),\n (\"<\", \"\", \"0\", \"6\", \"0\"),\n (\"<\", \"\", \"0\", \"7\", \"0\"),\n ]:\n return self.COMPLEX_PRAGMA_TXT\n return self._check_version(version_left)\n return self.COMPLEX_PRAGMA_TXT\n\n def _detect(self):\n \"\"\"\n Detects pragma statements that allow for outdated solc versions.\n :return: Returns the relevant JSON data for the findings.\n \"\"\"\n # Detect all version related pragmas and check if they are disallowed.\n results = []\n pragma = self.slither.pragma_directives\n disallowed_pragmas = []\n\n for p in pragma:\n # Skip any pragma directives which do not refer to version\n if len(p.directive) < 1 or p.directive[0] != \"solidity\":\n continue\n\n # This is version, so we test if this is disallowed.\n reason = self._check_pragma(p.version)\n if reason:\n disallowed_pragmas.append((reason, p))\n\n # If we found any disallowed pragmas, we output our findings.\n if disallowed_pragmas:\n for (reason, p) in disallowed_pragmas:\n info = [\"Pragma version\", p, f\" {reason}\\n\"]\n\n json = self.generate_result(info)\n\n results.append(json)\n\n if self.slither.crytic_compile:\n if self.slither.crytic_compile.compiler_version:\n if (\n self.slither.crytic_compile.compiler_version.version\n not in self.ALLOWED_VERSIONS\n ):\n info = [\n \"solc-\",\n self.slither.crytic_compile.compiler_version.version,\n \" is not recommended for deployment\\n\",\n ]\n\n json = self.generate_result(info)\n\n # TODO: Once crytic-compile adds config file info, add a source mapping element pointing to\n # the line in the config that specifies the problematic version of solc\n\n results.append(json)\n\n return results\n\n @staticmethod\n def _format(slither, result):\n custom_format(slither, result)\n", "path": "slither/detectors/attributes/incorrect_solc.py"}], "after_files": [{"content": "\"\"\"\n Check if an incorrect version of solc is used\n\"\"\"\n\nimport re\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\nfrom slither.formatters.attributes.incorrect_solc import custom_format\n\n# group:\n# 0: ^ > >= < <= (optional)\n# 1: ' ' 
(optional)\n# 2: version number\n# 3: version number\n# 4: version number\n\n# pylint: disable=anomalous-backslash-in-string\nPATTERN = re.compile(\"(\\^|>|>=|<|<=)?([ ]+)?(\\d+)\\.(\\d+)\\.(\\d+)\")\n\n\nclass IncorrectSolc(AbstractDetector):\n \"\"\"\n Check if an old version of solc is used\n \"\"\"\n\n ARGUMENT = \"solc-version\"\n HELP = \"Incorrect Solidity version\"\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = \"https://github.com/crytic/slither/wiki/Detector-Documentation#incorrect-versions-of-solidity\"\n\n WIKI_TITLE = \"Incorrect versions of Solidity\"\n WIKI_DESCRIPTION = \"\"\"\n`solc` frequently releases new compiler versions. Using an old version prevents access to new Solidity security checks.\nWe also recommend avoiding complex `pragma` statement.\"\"\"\n WIKI_RECOMMENDATION = \"\"\"\nDeploy with any of the following Solidity versions:\n- 0.5.16 - 0.5.17\n- 0.6.11 - 0.6.12\n- 0.7.5 - 0.7.6\nUse a simple pragma version that allows any of these versions.\nConsider using the latest version of Solidity for testing.\"\"\"\n\n COMPLEX_PRAGMA_TXT = \"is too complex\"\n OLD_VERSION_TXT = \"allows old versions\"\n LESS_THAN_TXT = \"uses lesser than\"\n\n TOO_RECENT_VERSION_TXT = (\n \"necessitates a version too recent to be trusted. Consider deploying with 0.6.12/0.7.6\"\n )\n BUGGY_VERSION_TXT = (\n \"is known to contain severe issues (https://solidity.readthedocs.io/en/latest/bugs.html)\"\n )\n\n # Indicates the allowed versions. Must be formatted in increasing order.\n ALLOWED_VERSIONS = [\n \"0.5.16\",\n \"0.5.17\",\n \"0.6.11\",\n \"0.6.12\",\n \"0.7.5\",\n \"0.7.6\",\n ]\n\n # Indicates the versions that should not be used.\n BUGGY_VERSIONS = [\n \"0.4.22\",\n \"^0.4.22\",\n \"0.5.5\",\n \"^0.5.5\",\n \"0.5.6\",\n \"^0.5.6\",\n \"0.5.14\",\n \"^0.5.14\",\n \"0.6.9\",\n \"^0.6.9\",\n ]\n\n def _check_version(self, version):\n op = version[0]\n if op and op not in [\">\", \">=\", \"^\"]:\n return self.LESS_THAN_TXT\n version_number = \".\".join(version[2:])\n if version_number not in self.ALLOWED_VERSIONS:\n if list(map(int, version[2:])) > list(map(int, self.ALLOWED_VERSIONS[-1].split(\".\"))):\n return self.TOO_RECENT_VERSION_TXT\n return self.OLD_VERSION_TXT\n return None\n\n def _check_pragma(self, version):\n if version in self.BUGGY_VERSIONS:\n return self.BUGGY_VERSION_TXT\n versions = PATTERN.findall(version)\n if len(versions) == 1:\n version = versions[0]\n return self._check_version(version)\n if len(versions) == 2:\n version_left = versions[0]\n version_right = versions[1]\n # Only allow two elements if the second one is\n # <0.5.0 or <0.6.0\n if version_right not in [\n (\"<\", \"\", \"0\", \"5\", \"0\"),\n (\"<\", \"\", \"0\", \"6\", \"0\"),\n (\"<\", \"\", \"0\", \"7\", \"0\"),\n ]:\n return self.COMPLEX_PRAGMA_TXT\n return self._check_version(version_left)\n return self.COMPLEX_PRAGMA_TXT\n\n def _detect(self):\n \"\"\"\n Detects pragma statements that allow for outdated solc versions.\n :return: Returns the relevant JSON data for the findings.\n \"\"\"\n # Detect all version related pragmas and check if they are disallowed.\n results = []\n pragma = self.slither.pragma_directives\n disallowed_pragmas = []\n\n for p in pragma:\n # Skip any pragma directives which do not refer to version\n if len(p.directive) < 1 or p.directive[0] != \"solidity\":\n continue\n\n # This is version, so we test if this is disallowed.\n reason = self._check_pragma(p.version)\n if reason:\n disallowed_pragmas.append((reason, 
p))\n\n # If we found any disallowed pragmas, we output our findings.\n if disallowed_pragmas:\n for (reason, p) in disallowed_pragmas:\n info = [\"Pragma version\", p, f\" {reason}\\n\"]\n\n json = self.generate_result(info)\n\n results.append(json)\n\n if self.slither.crytic_compile:\n if self.slither.crytic_compile.compiler_version:\n if (\n self.slither.crytic_compile.compiler_version.version\n not in self.ALLOWED_VERSIONS\n ):\n info = [\n \"solc-\",\n self.slither.crytic_compile.compiler_version.version,\n \" is not recommended for deployment\\n\",\n ]\n\n json = self.generate_result(info)\n\n # TODO: Once crytic-compile adds config file info, add a source mapping element pointing to\n # the line in the config that specifies the problematic version of solc\n\n results.append(json)\n\n return results\n\n @staticmethod\n def _format(slither, result):\n custom_format(slither, result)\n", "path": "slither/detectors/attributes/incorrect_solc.py"}]}
| 2,188 | 518 |
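
To make the version logic in the slither record above concrete, here is a minimal, self-contained sketch (not the detector itself) of how `_check_version` behaves before and after the golden diff: a pinned `0.7.6` is rejected as too recent against the old `ALLOWED_VERSIONS` list and accepted once the list is extended. The function and list names mirror the excerpt above; everything else is illustrative.

```python
# Minimal sketch of the solc-version comparison from the slither record above.
# Only the comparison logic is reproduced; detector plumbing is omitted.

OLD_ALLOWED = ["0.5.11", "0.5.12", "0.5.13", "0.5.15", "0.5.16",
               "0.5.17", "0.6.8", "0.6.10", "0.6.11"]
NEW_ALLOWED = ["0.5.16", "0.5.17", "0.6.11", "0.6.12", "0.7.5", "0.7.6"]

def check_version(version_tuple, allowed):
    """version_tuple mimics PATTERN.findall() output: (op, spaces, major, minor, patch)."""
    op = version_tuple[0]
    if op and op not in (">", ">=", "^"):
        return "uses lesser than"
    version_number = ".".join(version_tuple[2:])
    if version_number not in allowed:
        if list(map(int, version_tuple[2:])) > list(map(int, allowed[-1].split("."))):
            return "too recent"
        return "allows old versions"
    return None  # version is acceptable

pragma = ("^", "", "0", "7", "6")          # pragma solidity ^0.7.6;
print(check_version(pragma, OLD_ALLOWED))  # -> "too recent" (pre-fix behaviour)
print(check_version(pragma, NEW_ALLOWED))  # -> None (accepted after the update)
```
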
gh_patches_debug_2664
|
rasdani/github-patches
|
git_diff
|
opsdroid__opsdroid-1408
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Duplicated shell prompt
# Description
When I run the hello skill from the shell, I get duplicated shell prompt output. I think there is an issue with the shell connector.
## Steps to Reproduce
```
qidong@ubuntu:~/Documents/opsdroid$ opsdroid start
mybot> hello
Hello qidong
mybot> mybot>
```
## Expected Functionality
There should be only one prompt printed after the skill response.
## Experienced Functionality
One extra prompt is printed.
## Configuration File
```yaml
logging:
console: false
connectors:
websocket:
bot-name: "mybot"
max-connections: 10
connection-timeout: 10
shell:
bot-name: "mybot"
skills:
## Hello (https://github.com/opsdroid/skill-hello)
hello: {}
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `opsdroid/connector/shell/__init__.py`
Content:
```
1 """A connector to send messages using the command line."""
2 import logging
3 import os
4 import sys
5 import platform
6 import asyncio
7
8 from opsdroid.connector import Connector, register_event
9 from opsdroid.events import Message
10
11 _LOGGER = logging.getLogger(__name__)
12 CONFIG_SCHEMA = {"bot-name": str}
13
14
15 class ConnectorShell(Connector):
16 """A connector to send messages using the command line."""
17
18 def __init__(self, config, opsdroid=None):
19 """Create the connector."""
20 _LOGGER.debug(_("Loaded shell Connector."))
21 super().__init__(config, opsdroid=opsdroid)
22 self.name = "shell"
23 self.config = config
24 self.bot_name = config.get("bot-name", "opsdroid")
25 self.prompt_length = None
26 self.listening = True
27 self.reader = None
28 self._closing = asyncio.Event()
29 self.loop = asyncio.get_event_loop()
30
31 for name in ("LOGNAME", "USER", "LNAME", "USERNAME"):
32 user = os.environ.get(name)
33 if user:
34 self.user = user
35
36 @property
37 def is_listening(self):
38 """Get listening status."""
39 return self.listening
40
41 @is_listening.setter
42 def is_listening(self, val):
43 """Set listening status."""
44 self.listening = val
45
46 async def read_stdin(self):
47 """Create a stream reader to read stdin asynchronously.
48
49 Returns:
50 class: asyncio.streams.StreamReader
51
52 """
53 self.reader = asyncio.StreamReader(loop=self.loop)
54 reader_protocol = asyncio.StreamReaderProtocol(self.reader)
55
56 await self.loop.connect_read_pipe(lambda: reader_protocol, sys.stdin)
57
58 return self.reader
59
60 async def async_input(self):
61 """Read user input asynchronously from stdin.
62
63 Returns:
64 string: A decoded string from user input.
65
66 """
67 if not self.reader:
68 self.reader = await self.read_stdin()
69 line = await self.reader.readline()
70
71 return line.decode("utf8").replace("\r", "").replace("\n", "")
72
73 def draw_prompt(self):
74 """Draw the user input prompt."""
75 prompt = self.bot_name + "> "
76 self.prompt_length = len(prompt)
77 print(prompt, end="", flush=True)
78
79 def clear_prompt(self):
80 """Clear the prompt."""
81 print("\r" + (" " * self.prompt_length) + "\r", end="", flush=True)
82
83 async def parseloop(self):
84 """Parseloop moved out for testing."""
85 self.draw_prompt()
86 user_input = await self.async_input()
87 message = Message(text=user_input, user=self.user, target=None, connector=self)
88 await self.opsdroid.parse(message)
89
90 async def _parse_message(self):
91 """Parse user input."""
92 while self.is_listening:
93 await self.parseloop()
94
95 async def connect(self):
96 """Connect to the shell.
97
98 There is nothing to do here since stdin is already available.
99
100 Since this is the first method called when opsdroid starts, a logging
101 message is shown if the user is using windows.
102
103 """
104 if platform.system() == "Windows":
105 _LOGGER.warning(
106 "The shell connector does not work on windows. Please install the Opsdroid Desktop App."
107 )
108 pass
109
110 async def listen(self):
111 """Listen for and parse new user input."""
112 _LOGGER.debug(_("Connecting to shell."))
113 message_processor = self.loop.create_task(self._parse_message())
114 await self._closing.wait()
115 message_processor.cancel()
116
117 @register_event(Message)
118 async def respond(self, message):
119 """Respond with a message.
120
121 Args:
122 message (object): An instance of Message
123
124 """
125 _LOGGER.debug(_("Responding with: %s."), message.text)
126 self.clear_prompt()
127 print(message.text)
128 self.draw_prompt()
129
130 async def disconnect(self):
131 """Disconnects the connector."""
132 self._closing.set()
133
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/opsdroid/connector/shell/__init__.py b/opsdroid/connector/shell/__init__.py
--- a/opsdroid/connector/shell/__init__.py
+++ b/opsdroid/connector/shell/__init__.py
@@ -125,7 +125,6 @@
_LOGGER.debug(_("Responding with: %s."), message.text)
self.clear_prompt()
print(message.text)
- self.draw_prompt()
async def disconnect(self):
"""Disconnects the connector."""
|
{"golden_diff": "diff --git a/opsdroid/connector/shell/__init__.py b/opsdroid/connector/shell/__init__.py\n--- a/opsdroid/connector/shell/__init__.py\n+++ b/opsdroid/connector/shell/__init__.py\n@@ -125,7 +125,6 @@\n _LOGGER.debug(_(\"Responding with: %s.\"), message.text)\n self.clear_prompt()\n print(message.text)\n- self.draw_prompt()\n \n async def disconnect(self):\n \"\"\"Disconnects the connector.\"\"\"\n", "issue": "Duplicated shell prompt \n# Description\r\nWhen I run the hello skill from the shell, I found duplicated shell prompt output. I think there's some issue with the shell connector. \r\n\r\n\r\n## Steps to Reproduce\r\n```\r\nqidong@ubuntu:~/Documents/opsdroid$ opsdroid start\r\nmybot> hello\r\nHello qidong\r\nmybot> mybot>\r\n```\r\n\r\n\r\n## Expected Functionality\r\nThere should be only one prompt printed after the skill response. \r\n\r\n## Experienced Functionality\r\nOne extra prompt is printed. \r\n\r\n\r\n## Configuration File\r\n\r\n```yaml\r\nlogging:\r\n console: false\r\n\r\nconnectors:\r\n websocket:\r\n bot-name: \"mybot\"\r\n max-connections: 10\r\n connection-timeout: 10\r\n \r\n shell:\r\n bot-name: \"mybot\"\r\n\r\nskills:\r\n ## Hello (https://github.com/opsdroid/skill-hello)\r\n hello: {}\r\n```\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"A connector to send messages using the command line.\"\"\"\nimport logging\nimport os\nimport sys\nimport platform\nimport asyncio\n\nfrom opsdroid.connector import Connector, register_event\nfrom opsdroid.events import Message\n\n_LOGGER = logging.getLogger(__name__)\nCONFIG_SCHEMA = {\"bot-name\": str}\n\n\nclass ConnectorShell(Connector):\n \"\"\"A connector to send messages using the command line.\"\"\"\n\n def __init__(self, config, opsdroid=None):\n \"\"\"Create the connector.\"\"\"\n _LOGGER.debug(_(\"Loaded shell Connector.\"))\n super().__init__(config, opsdroid=opsdroid)\n self.name = \"shell\"\n self.config = config\n self.bot_name = config.get(\"bot-name\", \"opsdroid\")\n self.prompt_length = None\n self.listening = True\n self.reader = None\n self._closing = asyncio.Event()\n self.loop = asyncio.get_event_loop()\n\n for name in (\"LOGNAME\", \"USER\", \"LNAME\", \"USERNAME\"):\n user = os.environ.get(name)\n if user:\n self.user = user\n\n @property\n def is_listening(self):\n \"\"\"Get listening status.\"\"\"\n return self.listening\n\n @is_listening.setter\n def is_listening(self, val):\n \"\"\"Set listening status.\"\"\"\n self.listening = val\n\n async def read_stdin(self):\n \"\"\"Create a stream reader to read stdin asynchronously.\n\n Returns:\n class: asyncio.streams.StreamReader\n\n \"\"\"\n self.reader = asyncio.StreamReader(loop=self.loop)\n reader_protocol = asyncio.StreamReaderProtocol(self.reader)\n\n await self.loop.connect_read_pipe(lambda: reader_protocol, sys.stdin)\n\n return self.reader\n\n async def async_input(self):\n \"\"\"Read user input asynchronously from stdin.\n\n Returns:\n string: A decoded string from user input.\n\n \"\"\"\n if not self.reader:\n self.reader = await self.read_stdin()\n line = await self.reader.readline()\n\n return line.decode(\"utf8\").replace(\"\\r\", \"\").replace(\"\\n\", \"\")\n\n def draw_prompt(self):\n \"\"\"Draw the user input prompt.\"\"\"\n prompt = self.bot_name + \"> \"\n self.prompt_length = len(prompt)\n print(prompt, end=\"\", flush=True)\n\n def clear_prompt(self):\n \"\"\"Clear the prompt.\"\"\"\n print(\"\\r\" + (\" \" * self.prompt_length) + \"\\r\", end=\"\", flush=True)\n\n async def parseloop(self):\n \"\"\"Parseloop 
moved out for testing.\"\"\"\n self.draw_prompt()\n user_input = await self.async_input()\n message = Message(text=user_input, user=self.user, target=None, connector=self)\n await self.opsdroid.parse(message)\n\n async def _parse_message(self):\n \"\"\"Parse user input.\"\"\"\n while self.is_listening:\n await self.parseloop()\n\n async def connect(self):\n \"\"\"Connect to the shell.\n\n There is nothing to do here since stdin is already available.\n\n Since this is the first method called when opsdroid starts, a logging\n message is shown if the user is using windows.\n\n \"\"\"\n if platform.system() == \"Windows\":\n _LOGGER.warning(\n \"The shell connector does not work on windows. Please install the Opsdroid Desktop App.\"\n )\n pass\n\n async def listen(self):\n \"\"\"Listen for and parse new user input.\"\"\"\n _LOGGER.debug(_(\"Connecting to shell.\"))\n message_processor = self.loop.create_task(self._parse_message())\n await self._closing.wait()\n message_processor.cancel()\n\n @register_event(Message)\n async def respond(self, message):\n \"\"\"Respond with a message.\n\n Args:\n message (object): An instance of Message\n\n \"\"\"\n _LOGGER.debug(_(\"Responding with: %s.\"), message.text)\n self.clear_prompt()\n print(message.text)\n self.draw_prompt()\n\n async def disconnect(self):\n \"\"\"Disconnects the connector.\"\"\"\n self._closing.set()\n", "path": "opsdroid/connector/shell/__init__.py"}], "after_files": [{"content": "\"\"\"A connector to send messages using the command line.\"\"\"\nimport logging\nimport os\nimport sys\nimport platform\nimport asyncio\n\nfrom opsdroid.connector import Connector, register_event\nfrom opsdroid.events import Message\n\n_LOGGER = logging.getLogger(__name__)\nCONFIG_SCHEMA = {\"bot-name\": str}\n\n\nclass ConnectorShell(Connector):\n \"\"\"A connector to send messages using the command line.\"\"\"\n\n def __init__(self, config, opsdroid=None):\n \"\"\"Create the connector.\"\"\"\n _LOGGER.debug(_(\"Loaded shell Connector.\"))\n super().__init__(config, opsdroid=opsdroid)\n self.name = \"shell\"\n self.config = config\n self.bot_name = config.get(\"bot-name\", \"opsdroid\")\n self.prompt_length = None\n self.listening = True\n self.reader = None\n self._closing = asyncio.Event()\n self.loop = asyncio.get_event_loop()\n\n for name in (\"LOGNAME\", \"USER\", \"LNAME\", \"USERNAME\"):\n user = os.environ.get(name)\n if user:\n self.user = user\n\n @property\n def is_listening(self):\n \"\"\"Get listening status.\"\"\"\n return self.listening\n\n @is_listening.setter\n def is_listening(self, val):\n \"\"\"Set listening status.\"\"\"\n self.listening = val\n\n async def read_stdin(self):\n \"\"\"Create a stream reader to read stdin asynchronously.\n\n Returns:\n class: asyncio.streams.StreamReader\n\n \"\"\"\n self.reader = asyncio.StreamReader(loop=self.loop)\n reader_protocol = asyncio.StreamReaderProtocol(self.reader)\n\n await self.loop.connect_read_pipe(lambda: reader_protocol, sys.stdin)\n\n return self.reader\n\n async def async_input(self):\n \"\"\"Read user input asynchronously from stdin.\n\n Returns:\n string: A decoded string from user input.\n\n \"\"\"\n if not self.reader:\n self.reader = await self.read_stdin()\n line = await self.reader.readline()\n\n return line.decode(\"utf8\").replace(\"\\r\", \"\").replace(\"\\n\", \"\")\n\n def draw_prompt(self):\n \"\"\"Draw the user input prompt.\"\"\"\n prompt = self.bot_name + \"> \"\n self.prompt_length = len(prompt)\n print(prompt, end=\"\", flush=True)\n\n def clear_prompt(self):\n 
\"\"\"Clear the prompt.\"\"\"\n print(\"\\r\" + (\" \" * self.prompt_length) + \"\\r\", end=\"\", flush=True)\n\n async def parseloop(self):\n \"\"\"Parseloop moved out for testing.\"\"\"\n self.draw_prompt()\n user_input = await self.async_input()\n message = Message(text=user_input, user=self.user, target=None, connector=self)\n await self.opsdroid.parse(message)\n\n async def _parse_message(self):\n \"\"\"Parse user input.\"\"\"\n while self.is_listening:\n await self.parseloop()\n\n async def connect(self):\n \"\"\"Connect to the shell.\n\n There is nothing to do here since stdin is already available.\n\n Since this is the first method called when opsdroid starts, a logging\n message is shown if the user is using windows.\n\n \"\"\"\n if platform.system() == \"Windows\":\n _LOGGER.warning(\n \"The shell connector does not work on windows. Please install the Opsdroid Desktop App.\"\n )\n pass\n\n async def listen(self):\n \"\"\"Listen for and parse new user input.\"\"\"\n _LOGGER.debug(_(\"Connecting to shell.\"))\n message_processor = self.loop.create_task(self._parse_message())\n await self._closing.wait()\n message_processor.cancel()\n\n @register_event(Message)\n async def respond(self, message):\n \"\"\"Respond with a message.\n\n Args:\n message (object): An instance of Message\n\n \"\"\"\n _LOGGER.debug(_(\"Responding with: %s.\"), message.text)\n self.clear_prompt()\n print(message.text)\n\n async def disconnect(self):\n \"\"\"Disconnects the connector.\"\"\"\n self._closing.set()\n", "path": "opsdroid/connector/shell/__init__.py"}]}
| 1,604 | 119 |
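
The duplicated-prompt record above comes down to the prompt being drawn twice: once by `respond()` after printing the reply (pre-fix), and again by the next `parseloop()` iteration before reading input. Below is a minimal synchronous sketch of that ordering, with plain prints standing in for the async connector; the helper names are only illustrative.

```python
# Minimal synchronous sketch of the shell connector's prompt flow.
# Real opsdroid code is async; this only illustrates the ordering of prints.

BOT = "mybot"

def draw_prompt():
    print(f"{BOT}> ", end="")

def respond(text, redraw_after_reply):
    # clear_prompt() omitted; only the draw calls matter for the duplicate
    print(text)
    if redraw_after_reply:      # pre-fix behaviour of ConnectorShell.respond()
        draw_prompt()

def one_turn(redraw_after_reply):
    draw_prompt()               # parseloop() draws the prompt, then awaits input
    respond("Hello qidong", redraw_after_reply)
    draw_prompt()               # next parseloop() iteration draws it again

one_turn(redraw_after_reply=True)   # prints: mybot> Hello qidong \n mybot> mybot>
print()
one_turn(redraw_after_reply=False)  # prints: mybot> Hello qidong \n mybot>
```

Removing the redraw from `respond()`, as the golden diff does, leaves a single prompt per turn because `parseloop()` already draws it at the start of the next iteration.
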
gh_patches_debug_23127
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-center-index-21711
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[question] Inconsistent usage of versions
### What is your question?
Hi,
I noticed that dependencies of recipes are updated in isolation without doing this consistently across conan-center-index.
This leads to version conflicts if I want to fork this repo and build the projects I need.
Example:
libarchive uses version 5.4.4 of xz_utils
https://github.com/conan-io/conan-center-index/blob/262eec05f634fe235d75f9b22d434bbe3869a13b/recipes/libarchive/all/conanfile.py#L106
elfutils uses version 5.4.5 of xz_utils
https://github.com/conan-io/conan-center-index/blob/262eec05f634fe235d75f9b22d434bbe3869a13b/recipes/elfutils/all/conanfile.py#L76
The moment I use both in the same dependency tree, I have to adjust versions myself.
This makes consuming the recipes a lot of effort, and this is just one example.
Can the versions be updated consistently across conan-center-index?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `recipes/libarchive/all/conanfile.py`
Content:
```
1 from conan import ConanFile
2 from conan.tools.cmake import CMake, cmake_layout, CMakeDeps, CMakeToolchain
3 from conan.tools.files import apply_conandata_patches, collect_libs, copy, export_conandata_patches, get, rmdir
4 from conan.tools.microsoft import is_msvc
5 from conan.tools.scm import Version
6 from conan.errors import ConanInvalidConfiguration
7 import os
8
9 required_conan_version = ">=1.53.0"
10
11
12 class LibarchiveConan(ConanFile):
13 name = "libarchive"
14 description = "Multi-format archive and compression library"
15 topics = "archive", "compression", "tar", "data-compressor", "file-compression"
16 url = "https://github.com/conan-io/conan-center-index"
17 homepage = "https://libarchive.org"
18 license = "BSD-2-Clause"
19 package_type = "library"
20 settings = "os", "arch", "compiler", "build_type"
21 options = {
22 "shared": [True, False],
23 "fPIC": [True, False],
24 "with_acl": [True, False],
25 "with_zlib": [True, False],
26 "with_bzip2": [True, False],
27 "with_libxml2": [True, False],
28 "with_expat": [True, False],
29 "with_iconv": [True, False],
30 "with_pcreposix": [True, False],
31 "with_cng": [True, False],
32 "with_nettle": [True, False],
33 "with_openssl": [True, False],
34 "with_libb2": [True, False],
35 "with_lz4": [True, False],
36 "with_lzo": [True, False],
37 "with_lzma": [True, False],
38 "with_zstd": [True, False],
39 "with_mbedtls": [True, False],
40 "with_xattr": [True, False],
41 }
42 default_options = {
43 "shared": False,
44 "fPIC": True,
45 "with_acl": True,
46 "with_zlib": True,
47 "with_bzip2": False,
48 "with_libxml2": False,
49 "with_expat": False,
50 "with_iconv": True,
51 "with_pcreposix": False,
52 "with_cng": False,
53 "with_nettle": False,
54 "with_openssl": False,
55 "with_libb2": False,
56 "with_lz4": False,
57 "with_lzo": False,
58 "with_lzma": False,
59 "with_zstd": False,
60 "with_mbedtls": False,
61 "with_xattr": False,
62 }
63
64 def export_sources(self):
65 export_conandata_patches(self)
66
67 def config_options(self):
68 if self.settings.os == "Windows":
69 del self.options.fPIC
70 if Version(self.version) < "3.4.2":
71 del self.options.with_mbedtls
72
73 def configure(self):
74 if self.options.shared:
75 self.options.rm_safe("fPIC")
76 self.settings.rm_safe("compiler.cppstd")
77 self.settings.rm_safe("compiler.libcxx")
78
79 def layout(self):
80 cmake_layout(self, src_folder="src")
81
82 def requirements(self):
83 if self.options.with_zlib:
84 self.requires("zlib/[>=1.2.11 <2]")
85 if self.options.with_bzip2:
86 self.requires("bzip2/1.0.8")
87 if self.options.with_libxml2:
88 self.requires("libxml2/2.11.4")
89 if self.options.with_expat:
90 self.requires("expat/2.5.0")
91 if self.options.with_iconv:
92 self.requires("libiconv/1.17")
93 if self.options.with_pcreposix:
94 self.requires("pcre2/10.42")
95 if self.options.with_nettle:
96 self.requires("nettle/3.8.1")
97 if self.options.with_openssl:
98 self.requires("openssl/[>=1.1 <4]")
99 if self.options.with_libb2:
100 self.requires("libb2/20190723")
101 if self.options.with_lz4:
102 self.requires("lz4/1.9.4")
103 if self.options.with_lzo:
104 self.requires("lzo/2.10")
105 if self.options.with_lzma:
106 self.requires("xz_utils/5.4.4")
107 if self.options.with_zstd:
108 self.requires("zstd/1.5.5")
109 if self.options.get_safe("with_mbedtls"):
110 self.requires("mbedtls/3.2.1")
111
112 def validate(self):
113 if self.settings.os != "Windows" and self.options.with_cng:
114 # TODO: add cng when available in CCI
115 raise ConanInvalidConfiguration("cng recipe not yet available in CCI.")
116 if self.options.with_expat and self.options.with_libxml2:
117 raise ConanInvalidConfiguration("libxml2 and expat options are exclusive. They cannot be used together as XML engine")
118
119 def source(self):
120 get(self, **self.conan_data["sources"][self.version], strip_root=True)
121
122 def generate(self):
123 cmake_deps = CMakeDeps(self)
124 cmake_deps.generate()
125 tc = CMakeToolchain(self)
126 # turn off deps to avoid picking up them accidentally
127 tc.variables["ENABLE_NETTLE"] = self.options.with_nettle
128 tc.variables["ENABLE_OPENSSL"] = self.options.with_openssl
129 tc.variables["ENABLE_LIBB2"] = self.options.with_libb2
130 tc.variables["ENABLE_LZ4"] = self.options.with_lz4
131 tc.variables["ENABLE_LZO"] = self.options.with_lzo
132 tc.variables["ENABLE_LZMA"] = self.options.with_lzma
133 tc.variables["ENABLE_ZSTD"] = self.options.with_zstd
134 tc.variables["ENABLE_ZLIB"] = self.options.with_zlib
135 tc.variables["ENABLE_BZip2"] = self.options.with_bzip2
136 # requires LibXml2 cmake name
137 tc.variables["ENABLE_LIBXML2"] = self.options.with_libxml2
138 tc.variables["ENABLE_ICONV"] = self.options.with_iconv
139 tc.variables["ENABLE_EXPAT"] = self.options.with_expat
140 tc.variables["ENABLE_PCREPOSIX"] = self.options.with_pcreposix
141 if self.options.with_pcreposix:
142 tc.variables["POSIX_REGEX_LIB"] = "LIBPCREPOSIX"
143 tc.variables["ENABLE_LibGCC"] = False
144 tc.variables["ENABLE_CNG"] = self.options.with_cng
145 # turn off features
146 tc.variables["ENABLE_ACL"] = self.options.with_acl
147 # turn off components
148 tc.variables["ENABLE_TAR"] = False
149 tc.variables["ENABLE_CPIO"] = False
150 tc.variables["ENABLE_CAT"] = False
151 tc.variables["ENABLE_TEST"] = False
152 tc.variables["ENABLE_UNZIP"] = False
153 # too strict check
154 tc.variables["ENABLE_WERROR"] = False
155 if Version(self.version) >= "3.4.2":
156 tc.variables["ENABLE_MBEDTLS"] = self.options.with_mbedtls
157 tc.variables["ENABLE_XATTR"] = self.options.with_xattr
158 # TODO: Remove after fixing https://github.com/conan-io/conan/issues/12012
159 if is_msvc(self):
160 tc.cache_variables["CMAKE_TRY_COMPILE_CONFIGURATION"] = str(self.settings.build_type)
161 tc.generate()
162
163 def build(self):
164 apply_conandata_patches(self)
165 cmake = CMake(self)
166 cmake.configure()
167 cmake.build()
168
169 def package(self):
170 copy(self, "COPYING", self.source_folder, os.path.join(self.package_folder, "licenses"))
171 cmake = CMake(self)
172 cmake.install()
173 rmdir(self, os.path.join(self.package_folder, "lib", "pkgconfig"))
174 rmdir(self, os.path.join(self.package_folder, "share"))
175
176 def package_info(self):
177 self.cpp_info.set_property("cmake_find_mode", "both")
178 self.cpp_info.set_property("cmake_file_name", "LibArchive")
179 self.cpp_info.set_property("cmake_target_name", "LibArchive::LibArchive")
180 self.cpp_info.set_property("pkg_config_name", "libarchive")
181
182 self.cpp_info.names["cmake_find_package"] = "LibArchive"
183 self.cpp_info.names["cmake_find_package_multi"] = "LibArchive"
184
185 self.cpp_info.libs = collect_libs(self)
186 if self.settings.os == "Windows" and self.options.with_cng:
187 self.cpp_info.system_libs.append("bcrypt")
188 if is_msvc(self) and not self.options.shared:
189 self.cpp_info.defines = ["LIBARCHIVE_STATIC"]
190
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/recipes/libarchive/all/conanfile.py b/recipes/libarchive/all/conanfile.py
--- a/recipes/libarchive/all/conanfile.py
+++ b/recipes/libarchive/all/conanfile.py
@@ -85,7 +85,7 @@
if self.options.with_bzip2:
self.requires("bzip2/1.0.8")
if self.options.with_libxml2:
- self.requires("libxml2/2.11.4")
+ self.requires("libxml2/2.12.2")
if self.options.with_expat:
self.requires("expat/2.5.0")
if self.options.with_iconv:
@@ -103,11 +103,11 @@
if self.options.with_lzo:
self.requires("lzo/2.10")
if self.options.with_lzma:
- self.requires("xz_utils/5.4.4")
+ self.requires("xz_utils/5.4.5")
if self.options.with_zstd:
self.requires("zstd/1.5.5")
if self.options.get_safe("with_mbedtls"):
- self.requires("mbedtls/3.2.1")
+ self.requires("mbedtls/3.5.0")
def validate(self):
if self.settings.os != "Windows" and self.options.with_cng:
|
{"golden_diff": "diff --git a/recipes/libarchive/all/conanfile.py b/recipes/libarchive/all/conanfile.py\n--- a/recipes/libarchive/all/conanfile.py\n+++ b/recipes/libarchive/all/conanfile.py\n@@ -85,7 +85,7 @@\n if self.options.with_bzip2:\n self.requires(\"bzip2/1.0.8\")\n if self.options.with_libxml2:\n- self.requires(\"libxml2/2.11.4\")\n+ self.requires(\"libxml2/2.12.2\")\n if self.options.with_expat:\n self.requires(\"expat/2.5.0\")\n if self.options.with_iconv:\n@@ -103,11 +103,11 @@\n if self.options.with_lzo:\n self.requires(\"lzo/2.10\")\n if self.options.with_lzma:\n- self.requires(\"xz_utils/5.4.4\")\n+ self.requires(\"xz_utils/5.4.5\")\n if self.options.with_zstd:\n self.requires(\"zstd/1.5.5\")\n if self.options.get_safe(\"with_mbedtls\"):\n- self.requires(\"mbedtls/3.2.1\")\n+ self.requires(\"mbedtls/3.5.0\")\n \n def validate(self):\n if self.settings.os != \"Windows\" and self.options.with_cng:\n", "issue": "[question] Inconsistent usage of versions\n### What is your question?\n\nHi,\r\n\r\nI noticed that dependencies of recipes are updated in isolation without doing this consistently across conan-center-index.\r\nThis leads to version conflicts if I want to fork this repo and build the projects I need.\r\n\r\nExample: \r\nlibarchive uses version 5.4.4 of xz_utils\r\nhttps://github.com/conan-io/conan-center-index/blob/262eec05f634fe235d75f9b22d434bbe3869a13b/recipes/libarchive/all/conanfile.py#L106\r\n\r\nelfutils uses version 5.4.5 of xz_utils\r\nhttps://github.com/conan-io/conan-center-index/blob/262eec05f634fe235d75f9b22d434bbe3869a13b/recipes/elfutils/all/conanfile.py#L76\r\n\r\nThe moment I use both in the same dependency tree, I have to adjust versions myself.\r\nThis creates a lot of effort to consume recipes, as this is just an example.\r\n\r\nCan the versions be updated consistently across conan-center-index?\n", "before_files": [{"content": "from conan import ConanFile\nfrom conan.tools.cmake import CMake, cmake_layout, CMakeDeps, CMakeToolchain\nfrom conan.tools.files import apply_conandata_patches, collect_libs, copy, export_conandata_patches, get, rmdir\nfrom conan.tools.microsoft import is_msvc\nfrom conan.tools.scm import Version\nfrom conan.errors import ConanInvalidConfiguration\nimport os\n\nrequired_conan_version = \">=1.53.0\"\n\n\nclass LibarchiveConan(ConanFile):\n name = \"libarchive\"\n description = \"Multi-format archive and compression library\"\n topics = \"archive\", \"compression\", \"tar\", \"data-compressor\", \"file-compression\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://libarchive.org\"\n license = \"BSD-2-Clause\"\n package_type = \"library\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"with_acl\": [True, False],\n \"with_zlib\": [True, False],\n \"with_bzip2\": [True, False],\n \"with_libxml2\": [True, False],\n \"with_expat\": [True, False],\n \"with_iconv\": [True, False],\n \"with_pcreposix\": [True, False],\n \"with_cng\": [True, False],\n \"with_nettle\": [True, False],\n \"with_openssl\": [True, False],\n \"with_libb2\": [True, False],\n \"with_lz4\": [True, False],\n \"with_lzo\": [True, False],\n \"with_lzma\": [True, False],\n \"with_zstd\": [True, False],\n \"with_mbedtls\": [True, False],\n \"with_xattr\": [True, False],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"with_acl\": True,\n \"with_zlib\": True,\n \"with_bzip2\": False,\n \"with_libxml2\": False,\n \"with_expat\": 
False,\n \"with_iconv\": True,\n \"with_pcreposix\": False,\n \"with_cng\": False,\n \"with_nettle\": False,\n \"with_openssl\": False,\n \"with_libb2\": False,\n \"with_lz4\": False,\n \"with_lzo\": False,\n \"with_lzma\": False,\n \"with_zstd\": False,\n \"with_mbedtls\": False,\n \"with_xattr\": False,\n }\n\n def export_sources(self):\n export_conandata_patches(self)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n if Version(self.version) < \"3.4.2\":\n del self.options.with_mbedtls\n\n def configure(self):\n if self.options.shared:\n self.options.rm_safe(\"fPIC\")\n self.settings.rm_safe(\"compiler.cppstd\")\n self.settings.rm_safe(\"compiler.libcxx\")\n\n def layout(self):\n cmake_layout(self, src_folder=\"src\")\n\n def requirements(self):\n if self.options.with_zlib:\n self.requires(\"zlib/[>=1.2.11 <2]\")\n if self.options.with_bzip2:\n self.requires(\"bzip2/1.0.8\")\n if self.options.with_libxml2:\n self.requires(\"libxml2/2.11.4\")\n if self.options.with_expat:\n self.requires(\"expat/2.5.0\")\n if self.options.with_iconv:\n self.requires(\"libiconv/1.17\")\n if self.options.with_pcreposix:\n self.requires(\"pcre2/10.42\")\n if self.options.with_nettle:\n self.requires(\"nettle/3.8.1\")\n if self.options.with_openssl:\n self.requires(\"openssl/[>=1.1 <4]\")\n if self.options.with_libb2:\n self.requires(\"libb2/20190723\")\n if self.options.with_lz4:\n self.requires(\"lz4/1.9.4\")\n if self.options.with_lzo:\n self.requires(\"lzo/2.10\")\n if self.options.with_lzma:\n self.requires(\"xz_utils/5.4.4\")\n if self.options.with_zstd:\n self.requires(\"zstd/1.5.5\")\n if self.options.get_safe(\"with_mbedtls\"):\n self.requires(\"mbedtls/3.2.1\")\n\n def validate(self):\n if self.settings.os != \"Windows\" and self.options.with_cng:\n # TODO: add cng when available in CCI\n raise ConanInvalidConfiguration(\"cng recipe not yet available in CCI.\")\n if self.options.with_expat and self.options.with_libxml2:\n raise ConanInvalidConfiguration(\"libxml2 and expat options are exclusive. 
They cannot be used together as XML engine\")\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version], strip_root=True)\n\n def generate(self):\n cmake_deps = CMakeDeps(self)\n cmake_deps.generate()\n tc = CMakeToolchain(self)\n # turn off deps to avoid picking up them accidentally\n tc.variables[\"ENABLE_NETTLE\"] = self.options.with_nettle\n tc.variables[\"ENABLE_OPENSSL\"] = self.options.with_openssl\n tc.variables[\"ENABLE_LIBB2\"] = self.options.with_libb2\n tc.variables[\"ENABLE_LZ4\"] = self.options.with_lz4\n tc.variables[\"ENABLE_LZO\"] = self.options.with_lzo\n tc.variables[\"ENABLE_LZMA\"] = self.options.with_lzma\n tc.variables[\"ENABLE_ZSTD\"] = self.options.with_zstd\n tc.variables[\"ENABLE_ZLIB\"] = self.options.with_zlib\n tc.variables[\"ENABLE_BZip2\"] = self.options.with_bzip2\n # requires LibXml2 cmake name\n tc.variables[\"ENABLE_LIBXML2\"] = self.options.with_libxml2\n tc.variables[\"ENABLE_ICONV\"] = self.options.with_iconv\n tc.variables[\"ENABLE_EXPAT\"] = self.options.with_expat\n tc.variables[\"ENABLE_PCREPOSIX\"] = self.options.with_pcreposix\n if self.options.with_pcreposix:\n tc.variables[\"POSIX_REGEX_LIB\"] = \"LIBPCREPOSIX\"\n tc.variables[\"ENABLE_LibGCC\"] = False\n tc.variables[\"ENABLE_CNG\"] = self.options.with_cng\n # turn off features\n tc.variables[\"ENABLE_ACL\"] = self.options.with_acl\n # turn off components\n tc.variables[\"ENABLE_TAR\"] = False\n tc.variables[\"ENABLE_CPIO\"] = False\n tc.variables[\"ENABLE_CAT\"] = False\n tc.variables[\"ENABLE_TEST\"] = False\n tc.variables[\"ENABLE_UNZIP\"] = False\n # too strict check\n tc.variables[\"ENABLE_WERROR\"] = False\n if Version(self.version) >= \"3.4.2\":\n tc.variables[\"ENABLE_MBEDTLS\"] = self.options.with_mbedtls\n tc.variables[\"ENABLE_XATTR\"] = self.options.with_xattr\n # TODO: Remove after fixing https://github.com/conan-io/conan/issues/12012\n if is_msvc(self):\n tc.cache_variables[\"CMAKE_TRY_COMPILE_CONFIGURATION\"] = str(self.settings.build_type)\n tc.generate()\n\n def build(self):\n apply_conandata_patches(self)\n cmake = CMake(self)\n cmake.configure()\n cmake.build()\n\n def package(self):\n copy(self, \"COPYING\", self.source_folder, os.path.join(self.package_folder, \"licenses\"))\n cmake = CMake(self)\n cmake.install()\n rmdir(self, os.path.join(self.package_folder, \"lib\", \"pkgconfig\"))\n rmdir(self, os.path.join(self.package_folder, \"share\"))\n\n def package_info(self):\n self.cpp_info.set_property(\"cmake_find_mode\", \"both\")\n self.cpp_info.set_property(\"cmake_file_name\", \"LibArchive\")\n self.cpp_info.set_property(\"cmake_target_name\", \"LibArchive::LibArchive\")\n self.cpp_info.set_property(\"pkg_config_name\", \"libarchive\")\n\n self.cpp_info.names[\"cmake_find_package\"] = \"LibArchive\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"LibArchive\"\n\n self.cpp_info.libs = collect_libs(self)\n if self.settings.os == \"Windows\" and self.options.with_cng:\n self.cpp_info.system_libs.append(\"bcrypt\")\n if is_msvc(self) and not self.options.shared:\n self.cpp_info.defines = [\"LIBARCHIVE_STATIC\"]\n", "path": "recipes/libarchive/all/conanfile.py"}], "after_files": [{"content": "from conan import ConanFile\nfrom conan.tools.cmake import CMake, cmake_layout, CMakeDeps, CMakeToolchain\nfrom conan.tools.files import apply_conandata_patches, collect_libs, copy, export_conandata_patches, get, rm, rmdir\nfrom conan.tools.microsoft import is_msvc\nfrom conan.tools.scm import Version\nfrom conan.errors import 
ConanInvalidConfiguration\nimport os\n\nrequired_conan_version = \">=1.53.0\"\n\n\nclass LibarchiveConan(ConanFile):\n name = \"libarchive\"\n description = \"Multi-format archive and compression library\"\n topics = \"archive\", \"compression\", \"tar\", \"data-compressor\", \"file-compression\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://libarchive.org\"\n license = \"BSD-2-Clause\"\n package_type = \"library\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"with_acl\": [True, False],\n \"with_zlib\": [True, False],\n \"with_bzip2\": [True, False],\n \"with_libxml2\": [True, False],\n \"with_expat\": [True, False],\n \"with_iconv\": [True, False],\n \"with_pcreposix\": [True, False],\n \"with_cng\": [True, False],\n \"with_nettle\": [True, False],\n \"with_openssl\": [True, False],\n \"with_libb2\": [True, False],\n \"with_lz4\": [True, False],\n \"with_lzo\": [True, False],\n \"with_lzma\": [True, False],\n \"with_zstd\": [True, False],\n \"with_mbedtls\": [True, False],\n \"with_xattr\": [True, False],\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"with_acl\": True,\n \"with_zlib\": True,\n \"with_bzip2\": False,\n \"with_libxml2\": False,\n \"with_expat\": False,\n \"with_iconv\": True,\n \"with_pcreposix\": False,\n \"with_cng\": False,\n \"with_nettle\": False,\n \"with_openssl\": False,\n \"with_libb2\": False,\n \"with_lz4\": False,\n \"with_lzo\": False,\n \"with_lzma\": False,\n \"with_zstd\": False,\n \"with_mbedtls\": False,\n \"with_xattr\": False,\n }\n\n def export_sources(self):\n export_conandata_patches(self)\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n if Version(self.version) < \"3.4.2\":\n del self.options.with_mbedtls\n\n def configure(self):\n if self.options.shared:\n self.options.rm_safe(\"fPIC\")\n self.settings.rm_safe(\"compiler.cppstd\")\n self.settings.rm_safe(\"compiler.libcxx\")\n\n def layout(self):\n cmake_layout(self, src_folder=\"src\")\n\n def requirements(self):\n if self.options.with_zlib:\n self.requires(\"zlib/[>=1.2.11 <2]\")\n if self.options.with_bzip2:\n self.requires(\"bzip2/1.0.8\")\n if self.options.with_libxml2:\n self.requires(\"libxml2/2.12.2\")\n if self.options.with_expat:\n self.requires(\"expat/2.5.0\")\n if self.options.with_iconv:\n self.requires(\"libiconv/1.17\")\n if self.options.with_pcreposix:\n self.requires(\"pcre2/10.42\")\n if self.options.with_nettle:\n self.requires(\"nettle/3.8.1\")\n if self.options.with_openssl:\n self.requires(\"openssl/[>=1.1 <4]\")\n if self.options.with_libb2:\n self.requires(\"libb2/20190723\")\n if self.options.with_lz4:\n self.requires(\"lz4/1.9.4\")\n if self.options.with_lzo:\n self.requires(\"lzo/2.10\")\n if self.options.with_lzma:\n self.requires(\"xz_utils/5.4.5\")\n if self.options.with_zstd:\n self.requires(\"zstd/1.5.5\")\n if self.options.get_safe(\"with_mbedtls\"):\n self.requires(\"mbedtls/3.5.0\")\n\n def validate(self):\n if self.settings.os != \"Windows\" and self.options.with_cng:\n # TODO: add cng when available in CCI\n raise ConanInvalidConfiguration(\"cng recipe not yet available in CCI.\")\n if self.options.with_expat and self.options.with_libxml2:\n raise ConanInvalidConfiguration(\"libxml2 and expat options are exclusive. 
They cannot be used together as XML engine\")\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version], strip_root=True)\n\n def generate(self):\n cmake_deps = CMakeDeps(self)\n cmake_deps.generate()\n tc = CMakeToolchain(self)\n # turn off deps to avoid picking up them accidentally\n tc.variables[\"ENABLE_NETTLE\"] = self.options.with_nettle\n tc.variables[\"ENABLE_OPENSSL\"] = self.options.with_openssl\n tc.variables[\"ENABLE_LIBB2\"] = self.options.with_libb2\n tc.variables[\"ENABLE_LZ4\"] = self.options.with_lz4\n tc.variables[\"ENABLE_LZO\"] = self.options.with_lzo\n tc.variables[\"ENABLE_LZMA\"] = self.options.with_lzma\n tc.variables[\"ENABLE_ZSTD\"] = self.options.with_zstd\n tc.variables[\"ENABLE_ZLIB\"] = self.options.with_zlib\n tc.variables[\"ENABLE_BZip2\"] = self.options.with_bzip2\n # requires LibXml2 cmake name\n tc.variables[\"ENABLE_LIBXML2\"] = self.options.with_libxml2\n tc.variables[\"ENABLE_ICONV\"] = self.options.with_iconv\n tc.variables[\"ENABLE_EXPAT\"] = self.options.with_expat\n tc.variables[\"ENABLE_PCREPOSIX\"] = self.options.with_pcreposix\n if self.options.with_pcreposix:\n tc.variables[\"POSIX_REGEX_LIB\"] = \"LIBPCREPOSIX\"\n tc.variables[\"ENABLE_LibGCC\"] = False\n tc.variables[\"ENABLE_CNG\"] = self.options.with_cng\n # turn off features\n tc.variables[\"ENABLE_ACL\"] = self.options.with_acl\n # turn off components\n tc.variables[\"ENABLE_TAR\"] = False\n tc.variables[\"ENABLE_CPIO\"] = False\n tc.variables[\"ENABLE_CAT\"] = False\n tc.variables[\"ENABLE_TEST\"] = False\n # too strict check\n tc.variables[\"ENABLE_WERROR\"] = False\n if Version(self.version) >= \"3.4.2\":\n tc.variables[\"ENABLE_MBEDTLS\"] = self.options.with_mbedtls\n tc.variables[\"ENABLE_XATTR\"] = self.options.with_xattr\n # TODO: Remove after fixing https://github.com/conan-io/conan/issues/12012\n if is_msvc(self):\n tc.cache_variables[\"CMAKE_TRY_COMPILE_CONFIGURATION\"] = str(self.settings.build_type)\n tc.generate()\n\n def build(self):\n apply_conandata_patches(self)\n cmake = CMake(self)\n cmake.configure()\n cmake.build()\n\n def package(self):\n copy(self, \"COPYING\", self.source_folder, os.path.join(self.package_folder, \"licenses\"))\n cmake = CMake(self)\n cmake.install()\n rmdir(self, os.path.join(self.package_folder, \"lib\", \"pkgconfig\"))\n rmdir(self, os.path.join(self.package_folder, \"share\"))\n\n if self.options.shared:\n rm(self, \"*.a\", os.path.join(self.package_folder, \"lib\"))\n\n def package_info(self):\n self.cpp_info.set_property(\"cmake_find_mode\", \"both\")\n self.cpp_info.set_property(\"cmake_file_name\", \"LibArchive\")\n self.cpp_info.set_property(\"cmake_target_name\", \"LibArchive::LibArchive\")\n self.cpp_info.set_property(\"pkg_config_name\", \"libarchive\")\n\n self.cpp_info.names[\"cmake_find_package\"] = \"LibArchive\"\n self.cpp_info.names[\"cmake_find_package_multi\"] = \"LibArchive\"\n\n self.cpp_info.libs = collect_libs(self)\n if self.settings.os == \"Windows\" and self.options.with_cng:\n self.cpp_info.system_libs.append(\"bcrypt\")\n if is_msvc(self) and not self.options.shared:\n self.cpp_info.defines = [\"LIBARCHIVE_STATIC\"]\n", "path": "recipes/libarchive/all/conanfile.py"}]}
| 2,915 | 308 |
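
The libarchive record above is one instance of the broader complaint in the issue: different recipes pin different versions of the same dependency (xz_utils at 5.4.4 vs. 5.4.5). A quick way to surface such mismatches in a local checkout is a small scan over the `self.requires(...)` lines. This script is purely illustrative: the `recipes/<name>/all/conanfile.py` layout is an assumption (some recipes use other folder names), and version ranges such as `[>=1.2.11 <2]` are skipped.

```python
# Illustrative scan for inconsistently pinned dependency versions across recipes.
# Assumes a checkout laid out as recipes/<name>/all/conanfile.py.
import re
from collections import defaultdict
from pathlib import Path

REQUIRE = re.compile(r'self\.requires\(\s*"([^/"]+)/([^"\[\]]+)"')

def collect_pins(root="recipes"):
    pins = defaultdict(set)  # dependency name -> set of pinned versions
    for conanfile in Path(root).glob("*/all/conanfile.py"):
        for name, version in REQUIRE.findall(conanfile.read_text(encoding="utf-8")):
            pins[name].add(version)
    return pins

if __name__ == "__main__":
    for name, versions in sorted(collect_pins().items()):
        if len(versions) > 1:
            # e.g. xz_utils pinned at 5.4.4 in libarchive and 5.4.5 in elfutils
            print(f"{name}: {sorted(versions)}")
```
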
gh_patches_debug_13612
|
rasdani/github-patches
|
git_diff
|
sublimelsp__LSP-202
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Autocompletion hints fail if CompletionItem label is empty
This can happen when a user requests autocompletion hints for all the members and functions of a package such as OpenCV (``cv2``), using python-language-server as the LSP backend.
--- END ISSUE ---
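
This record is cut off before its golden diff, so the following is only a guess at the shape of a guard, not the fix the project adopted: when a server returns items whose `label` is empty, fall back to `insertText` (or skip the item) rather than producing an empty completion entry. The `label` and `insertText` fields come from the LSP `CompletionItem` shape; the helper itself is hypothetical.

```python
# Hypothetical guard for completion items with an empty label.
# CompletionItem fields ("label", "insertText") follow the LSP spec;
# the plugin's actual fix may differ.
from typing import Any, Dict, Optional

def completion_trigger(item: Dict[str, Any]) -> Optional[str]:
    """Return the text to show/insert for an item, or None if unusable."""
    label = item.get("label") or ""
    insert_text = item.get("insertText") or ""
    # An empty label alone should not break completion if insertText is usable.
    return label.strip() or insert_text.strip() or None

items = [
    {"label": "imread", "insertText": "imread"},
    {"label": "", "insertText": "IMREAD_COLOR"},   # hypothetical item with empty label
    {"label": "", "insertText": ""},               # nothing usable -> skipped
]
usable = [t for t in (completion_trigger(i) for i in items) if t]
print(usable)  # ['imread', 'IMREAD_COLOR']
```
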
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugin/completion.py`
Content:
```
1 import sublime
2 import sublime_plugin
3
4 try:
5 from typing import Any, List, Dict, Tuple, Callable, Optional
6 assert Any and List and Dict and Tuple and Callable and Optional
7 except ImportError:
8 pass
9
10 from .core.protocol import Request
11 from .core.settings import settings
12 from .core.logging import debug, exception_log
13 from .core.protocol import CompletionItemKind
14 from .core.clients import client_for_view
15 from .core.configurations import is_supported_syntax
16 from .core.documents import get_document_position, purge_did_change
17
18
19 NO_COMPLETION_SCOPES = 'comment, string'
20 completion_item_kind_names = {v: k for k, v in CompletionItemKind.__dict__.items()}
21
22
23 class CompletionState(object):
24 IDLE = 0
25 REQUESTING = 1
26 APPLYING = 2
27 CANCELLING = 3
28
29
30 resolvable_completion_items = [] # type: List[Any]
31
32
33 def find_completion_item(label: str) -> 'Optional[Any]':
34 matches = list(filter(lambda i: i.get("label") == label, resolvable_completion_items))
35 return matches[0] if matches else None
36
37
38 class CompletionContext(object):
39
40 def __init__(self, begin):
41 self.begin = begin # type: Optional[int]
42 self.end = None # type: Optional[int]
43 self.region = None # type: Optional[sublime.Region]
44 self.committing = False
45
46 def committed_at(self, end):
47 self.end = end
48 self.region = sublime.Region(self.begin, self.end)
49 self.committing = False
50
51
52 current_completion = None # type: Optional[CompletionContext]
53
54
55 def has_resolvable_completions(view):
56 client = client_for_view(view)
57 if client:
58 completionProvider = client.get_capability(
59 'completionProvider')
60 if completionProvider:
61 if completionProvider.get('resolveProvider', False):
62 return True
63 return False
64
65
66 class CompletionSnippetHandler(sublime_plugin.EventListener):
67
68 def on_query_completions(self, view, prefix, locations):
69 global current_completion
70 if settings.resolve_completion_for_snippets and has_resolvable_completions(view):
71 current_completion = CompletionContext(view.sel()[0].begin())
72
73 def on_text_command(self, view, command_name, args):
74 if settings.resolve_completion_for_snippets and current_completion:
75 current_completion.committing = command_name in ('commit_completion', 'insert_best_completion')
76
77 def on_modified(self, view):
78 global current_completion
79
80 if settings.resolve_completion_for_snippets and view.file_name():
81 if current_completion and current_completion.committing:
82 current_completion.committed_at(view.sel()[0].end())
83 inserted = view.substr(current_completion.region)
84 item = find_completion_item(inserted)
85 if item:
86 self.resolve_completion(item, view)
87 else:
88 current_completion = None
89
90 def resolve_completion(self, item, view):
91 client = client_for_view(view)
92 if not client:
93 return
94
95 client.send_request(
96 Request.resolveCompletionItem(item),
97 lambda response: self.handle_resolve_response(response, view))
98
99 def handle_resolve_response(self, response, view):
100 # replace inserted text if a snippet was returned.
101 if current_completion and response.get('insertTextFormat') == 2: # snippet
102 insertText = response.get('insertText')
103 try:
104 sel = view.sel()
105 sel.clear()
106 sel.add(current_completion.region)
107 view.run_command("insert_snippet", {"contents": insertText})
108 except Exception as err:
109 exception_log("Error inserting snippet: " + insertText, err)
110
111
112 class CompletionHandler(sublime_plugin.ViewEventListener):
113 def __init__(self, view):
114 self.view = view
115 self.initialized = False
116 self.enabled = False
117 self.trigger_chars = [] # type: List[str]
118 self.resolve = False
119 self.resolve_details = [] # type: List[Tuple[str, str]]
120 self.state = CompletionState.IDLE
121 self.completions = [] # type: List[Any]
122 self.next_request = None # type: Optional[Tuple[str, List[int]]]
123 self.last_prefix = ""
124 self.last_location = 0
125
126 @classmethod
127 def is_applicable(cls, settings):
128 syntax = settings.get('syntax')
129 if syntax is not None:
130 return is_supported_syntax(syntax)
131 else:
132 return False
133
134 def initialize(self):
135 self.initialized = True
136 client = client_for_view(self.view)
137 if client:
138 completionProvider = client.get_capability(
139 'completionProvider')
140 if completionProvider:
141 self.enabled = True
142 self.trigger_chars = completionProvider.get(
143 'triggerCharacters') or []
144 self.has_resolve_provider = completionProvider.get('resolveProvider', False)
145
146 def is_after_trigger_character(self, location):
147 if location > 0:
148 prev_char = self.view.substr(location - 1)
149 return prev_char in self.trigger_chars
150
151 def is_same_completion(self, prefix, locations):
152 # completion requests from the same location with the same prefix are cached.
153 current_start = locations[0] - len(prefix)
154 last_start = self.last_location - len(self.last_prefix)
155 return prefix.startswith(self.last_prefix) and current_start == last_start
156
157 def on_modified(self):
158 # hide completion when backspacing past last completion.
159 if self.view.sel()[0].begin() < self.last_location:
160 self.last_location = 0
161 self.view.run_command("hide_auto_complete")
162
163 def on_query_completions(self, prefix, locations):
164 if self.view.match_selector(locations[0], NO_COMPLETION_SCOPES):
165 return (
166 [],
167 sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS
168 )
169
170 if not self.initialized:
171 self.initialize()
172
173 if self.enabled:
174 reuse_completion = self.is_same_completion(prefix, locations)
175 if self.state == CompletionState.IDLE:
176 if not reuse_completion:
177 self.last_prefix = prefix
178 self.last_location = locations[0]
179 self.do_request(prefix, locations)
180 self.completions = []
181
182 elif self.state in (CompletionState.REQUESTING, CompletionState.CANCELLING):
183 self.next_request = (prefix, locations)
184 self.state = CompletionState.CANCELLING
185
186 elif self.state == CompletionState.APPLYING:
187 self.state = CompletionState.IDLE
188
189 return (
190 self.completions,
191 0 if not settings.only_show_lsp_completions
192 else sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS
193 )
194
195 def do_request(self, prefix: str, locations: 'List[int]'):
196 self.next_request = None
197 view = self.view
198
199 # don't store client so we can handle restarts
200 client = client_for_view(view)
201 if not client:
202 return
203
204 if settings.complete_all_chars or self.is_after_trigger_character(locations[0]):
205 purge_did_change(view.buffer_id())
206 document_position = get_document_position(view, locations[0])
207 if document_position:
208 client.send_request(
209 Request.complete(document_position),
210 self.handle_response,
211 self.handle_error)
212 self.state = CompletionState.REQUESTING
213
214 def format_completion(self, item: dict) -> 'Tuple[str, str]':
215 # Sublime handles snippets automatically, so we don't have to care about insertTextFormat.
216 label = item["label"]
217 # choose hint based on availability and user preference
218 hint = None
219 if settings.completion_hint_type == "auto":
220 hint = item.get("detail")
221 if not hint:
222 kind = item.get("kind")
223 if kind:
224 hint = completion_item_kind_names[kind]
225 elif settings.completion_hint_type == "detail":
226 hint = item.get("detail")
227 elif settings.completion_hint_type == "kind":
228 kind = item.get("kind")
229 if kind:
230 hint = completion_item_kind_names[kind]
231 # label is an alternative for insertText if insertText not provided
232 insert_text = item.get("insertText") or label
233 if insert_text[0] == '$': # sublime needs leading '$' escaped.
234 insert_text = '\$' + insert_text[1:]
235 # only return label with a hint if available
236 return "\t ".join((label, hint)) if hint else label, insert_text
237
238 def handle_response(self, response: dict):
239 global resolvable_completion_items
240
241 if self.state == CompletionState.REQUESTING:
242 items = response["items"] if isinstance(response,
243 dict) else response
244 if len(items) > 1 and "sortText" in items[0]:
245 # If the first item has a sortText key, assume all of them have a sortText key.
246 items = sorted(items, key=lambda item: item["sortText"])
247 self.completions = list(self.format_completion(item) for item in items)
248
249 if self.has_resolve_provider:
250 resolvable_completion_items = items
251
252 # if insert_best_completion was just ran, undo it before presenting new completions.
253 prev_char = self.view.substr(self.view.sel()[0].begin() - 1)
254 if prev_char.isspace():
255 self.view.run_command("undo")
256
257 self.state = CompletionState.APPLYING
258 self.view.run_command("hide_auto_complete")
259 self.run_auto_complete()
260 elif self.state == CompletionState.CANCELLING:
261 if self.next_request:
262 prefix, locations = self.next_request
263 self.do_request(prefix, locations)
264 else:
265 debug('Got unexpected response while in state {}'.format(self.state))
266
267 def handle_error(self, error: dict):
268 sublime.status_message('Completion error: ' + str(error.get('message')))
269 self.state = CompletionState.IDLE
270
271 def run_auto_complete(self):
272 self.view.run_command(
273 "auto_complete", {
274 'disable_auto_insert': True,
275 'api_completions_only': settings.only_show_lsp_completions,
276 'next_completion_if_showing': False
277 })
278
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugin/completion.py b/plugin/completion.py
--- a/plugin/completion.py
+++ b/plugin/completion.py
@@ -230,7 +230,7 @@
hint = completion_item_kind_names[kind]
# label is an alternative for insertText if insertText not provided
insert_text = item.get("insertText") or label
- if insert_text[0] == '$': # sublime needs leading '$' escaped.
+ if len(insert_text) > 0 and insert_text[0] == '$': # sublime needs leading '$' escaped.
insert_text = '\$' + insert_text[1:]
# only return label with a hint if available
return "\t ".join((label, hint)) if hint else label, insert_text
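The guard matters because an empty completion label reaches `format_completion` unchanged. A minimal sketch of the failure mode follows; the item dict is hypothetical, but mirrors what python-language-server can return for members of a package such as `cv2`:

```python
# Hypothetical completion item: empty label, no insertText.
item = {"label": "", "kind": 2}

label = item["label"]
insert_text = item.get("insertText") or label   # falls back to "" here

# Before the patch, insert_text[0] was evaluated unconditionally and raised
# IndexError on the empty string, aborting the completion response.
# With the guard, an empty string passes through untouched:
if len(insert_text) > 0 and insert_text[0] == '$':
    insert_text = '\\$' + insert_text[1:]        # escape a leading '$' for Sublime
print(repr(insert_text))                         # -> ''
```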
|
{"golden_diff": "diff --git a/plugin/completion.py b/plugin/completion.py\n--- a/plugin/completion.py\n+++ b/plugin/completion.py\n@@ -230,7 +230,7 @@\n hint = completion_item_kind_names[kind]\n # label is an alternative for insertText if insertText not provided\n insert_text = item.get(\"insertText\") or label\n- if insert_text[0] == '$': # sublime needs leading '$' escaped.\n+ if len(insert_text) > 0 and insert_text[0] == '$': # sublime needs leading '$' escaped.\n insert_text = '\\$' + insert_text[1:]\n # only return label with a hint if available\n return \"\\t \".join((label, hint)) if hint else label, insert_text\n", "issue": "Autocompletion hints fail if CompletionItem label is empty \nThis could happen when an user requests autocompletion hints for all the members and functions of some package like OpenCV (``cv2``), using python-language-server as LSP backend.\n", "before_files": [{"content": "import sublime\nimport sublime_plugin\n\ntry:\n from typing import Any, List, Dict, Tuple, Callable, Optional\n assert Any and List and Dict and Tuple and Callable and Optional\nexcept ImportError:\n pass\n\nfrom .core.protocol import Request\nfrom .core.settings import settings\nfrom .core.logging import debug, exception_log\nfrom .core.protocol import CompletionItemKind\nfrom .core.clients import client_for_view\nfrom .core.configurations import is_supported_syntax\nfrom .core.documents import get_document_position, purge_did_change\n\n\nNO_COMPLETION_SCOPES = 'comment, string'\ncompletion_item_kind_names = {v: k for k, v in CompletionItemKind.__dict__.items()}\n\n\nclass CompletionState(object):\n IDLE = 0\n REQUESTING = 1\n APPLYING = 2\n CANCELLING = 3\n\n\nresolvable_completion_items = [] # type: List[Any]\n\n\ndef find_completion_item(label: str) -> 'Optional[Any]':\n matches = list(filter(lambda i: i.get(\"label\") == label, resolvable_completion_items))\n return matches[0] if matches else None\n\n\nclass CompletionContext(object):\n\n def __init__(self, begin):\n self.begin = begin # type: Optional[int]\n self.end = None # type: Optional[int]\n self.region = None # type: Optional[sublime.Region]\n self.committing = False\n\n def committed_at(self, end):\n self.end = end\n self.region = sublime.Region(self.begin, self.end)\n self.committing = False\n\n\ncurrent_completion = None # type: Optional[CompletionContext]\n\n\ndef has_resolvable_completions(view):\n client = client_for_view(view)\n if client:\n completionProvider = client.get_capability(\n 'completionProvider')\n if completionProvider:\n if completionProvider.get('resolveProvider', False):\n return True\n return False\n\n\nclass CompletionSnippetHandler(sublime_plugin.EventListener):\n\n def on_query_completions(self, view, prefix, locations):\n global current_completion\n if settings.resolve_completion_for_snippets and has_resolvable_completions(view):\n current_completion = CompletionContext(view.sel()[0].begin())\n\n def on_text_command(self, view, command_name, args):\n if settings.resolve_completion_for_snippets and current_completion:\n current_completion.committing = command_name in ('commit_completion', 'insert_best_completion')\n\n def on_modified(self, view):\n global current_completion\n\n if settings.resolve_completion_for_snippets and view.file_name():\n if current_completion and current_completion.committing:\n current_completion.committed_at(view.sel()[0].end())\n inserted = view.substr(current_completion.region)\n item = find_completion_item(inserted)\n if item:\n self.resolve_completion(item, view)\n else:\n 
current_completion = None\n\n def resolve_completion(self, item, view):\n client = client_for_view(view)\n if not client:\n return\n\n client.send_request(\n Request.resolveCompletionItem(item),\n lambda response: self.handle_resolve_response(response, view))\n\n def handle_resolve_response(self, response, view):\n # replace inserted text if a snippet was returned.\n if current_completion and response.get('insertTextFormat') == 2: # snippet\n insertText = response.get('insertText')\n try:\n sel = view.sel()\n sel.clear()\n sel.add(current_completion.region)\n view.run_command(\"insert_snippet\", {\"contents\": insertText})\n except Exception as err:\n exception_log(\"Error inserting snippet: \" + insertText, err)\n\n\nclass CompletionHandler(sublime_plugin.ViewEventListener):\n def __init__(self, view):\n self.view = view\n self.initialized = False\n self.enabled = False\n self.trigger_chars = [] # type: List[str]\n self.resolve = False\n self.resolve_details = [] # type: List[Tuple[str, str]]\n self.state = CompletionState.IDLE\n self.completions = [] # type: List[Any]\n self.next_request = None # type: Optional[Tuple[str, List[int]]]\n self.last_prefix = \"\"\n self.last_location = 0\n\n @classmethod\n def is_applicable(cls, settings):\n syntax = settings.get('syntax')\n if syntax is not None:\n return is_supported_syntax(syntax)\n else:\n return False\n\n def initialize(self):\n self.initialized = True\n client = client_for_view(self.view)\n if client:\n completionProvider = client.get_capability(\n 'completionProvider')\n if completionProvider:\n self.enabled = True\n self.trigger_chars = completionProvider.get(\n 'triggerCharacters') or []\n self.has_resolve_provider = completionProvider.get('resolveProvider', False)\n\n def is_after_trigger_character(self, location):\n if location > 0:\n prev_char = self.view.substr(location - 1)\n return prev_char in self.trigger_chars\n\n def is_same_completion(self, prefix, locations):\n # completion requests from the same location with the same prefix are cached.\n current_start = locations[0] - len(prefix)\n last_start = self.last_location - len(self.last_prefix)\n return prefix.startswith(self.last_prefix) and current_start == last_start\n\n def on_modified(self):\n # hide completion when backspacing past last completion.\n if self.view.sel()[0].begin() < self.last_location:\n self.last_location = 0\n self.view.run_command(\"hide_auto_complete\")\n\n def on_query_completions(self, prefix, locations):\n if self.view.match_selector(locations[0], NO_COMPLETION_SCOPES):\n return (\n [],\n sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS\n )\n\n if not self.initialized:\n self.initialize()\n\n if self.enabled:\n reuse_completion = self.is_same_completion(prefix, locations)\n if self.state == CompletionState.IDLE:\n if not reuse_completion:\n self.last_prefix = prefix\n self.last_location = locations[0]\n self.do_request(prefix, locations)\n self.completions = []\n\n elif self.state in (CompletionState.REQUESTING, CompletionState.CANCELLING):\n self.next_request = (prefix, locations)\n self.state = CompletionState.CANCELLING\n\n elif self.state == CompletionState.APPLYING:\n self.state = CompletionState.IDLE\n\n return (\n self.completions,\n 0 if not settings.only_show_lsp_completions\n else sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS\n )\n\n def do_request(self, prefix: str, locations: 'List[int]'):\n self.next_request = None\n view = self.view\n\n # don't store client so we can handle restarts\n 
client = client_for_view(view)\n if not client:\n return\n\n if settings.complete_all_chars or self.is_after_trigger_character(locations[0]):\n purge_did_change(view.buffer_id())\n document_position = get_document_position(view, locations[0])\n if document_position:\n client.send_request(\n Request.complete(document_position),\n self.handle_response,\n self.handle_error)\n self.state = CompletionState.REQUESTING\n\n def format_completion(self, item: dict) -> 'Tuple[str, str]':\n # Sublime handles snippets automatically, so we don't have to care about insertTextFormat.\n label = item[\"label\"]\n # choose hint based on availability and user preference\n hint = None\n if settings.completion_hint_type == \"auto\":\n hint = item.get(\"detail\")\n if not hint:\n kind = item.get(\"kind\")\n if kind:\n hint = completion_item_kind_names[kind]\n elif settings.completion_hint_type == \"detail\":\n hint = item.get(\"detail\")\n elif settings.completion_hint_type == \"kind\":\n kind = item.get(\"kind\")\n if kind:\n hint = completion_item_kind_names[kind]\n # label is an alternative for insertText if insertText not provided\n insert_text = item.get(\"insertText\") or label\n if insert_text[0] == '$': # sublime needs leading '$' escaped.\n insert_text = '\\$' + insert_text[1:]\n # only return label with a hint if available\n return \"\\t \".join((label, hint)) if hint else label, insert_text\n\n def handle_response(self, response: dict):\n global resolvable_completion_items\n\n if self.state == CompletionState.REQUESTING:\n items = response[\"items\"] if isinstance(response,\n dict) else response\n if len(items) > 1 and \"sortText\" in items[0]:\n # If the first item has a sortText key, assume all of them have a sortText key.\n items = sorted(items, key=lambda item: item[\"sortText\"])\n self.completions = list(self.format_completion(item) for item in items)\n\n if self.has_resolve_provider:\n resolvable_completion_items = items\n\n # if insert_best_completion was just ran, undo it before presenting new completions.\n prev_char = self.view.substr(self.view.sel()[0].begin() - 1)\n if prev_char.isspace():\n self.view.run_command(\"undo\")\n\n self.state = CompletionState.APPLYING\n self.view.run_command(\"hide_auto_complete\")\n self.run_auto_complete()\n elif self.state == CompletionState.CANCELLING:\n if self.next_request:\n prefix, locations = self.next_request\n self.do_request(prefix, locations)\n else:\n debug('Got unexpected response while in state {}'.format(self.state))\n\n def handle_error(self, error: dict):\n sublime.status_message('Completion error: ' + str(error.get('message')))\n self.state = CompletionState.IDLE\n\n def run_auto_complete(self):\n self.view.run_command(\n \"auto_complete\", {\n 'disable_auto_insert': True,\n 'api_completions_only': settings.only_show_lsp_completions,\n 'next_completion_if_showing': False\n })\n", "path": "plugin/completion.py"}], "after_files": [{"content": "import sublime\nimport sublime_plugin\n\ntry:\n from typing import Any, List, Dict, Tuple, Callable, Optional\n assert Any and List and Dict and Tuple and Callable and Optional\nexcept ImportError:\n pass\n\nfrom .core.protocol import Request\nfrom .core.settings import settings\nfrom .core.logging import debug, exception_log\nfrom .core.protocol import CompletionItemKind\nfrom .core.clients import client_for_view\nfrom .core.configurations import is_supported_syntax\nfrom .core.documents import get_document_position, purge_did_change\n\n\nNO_COMPLETION_SCOPES = 'comment, 
string'\ncompletion_item_kind_names = {v: k for k, v in CompletionItemKind.__dict__.items()}\n\n\nclass CompletionState(object):\n IDLE = 0\n REQUESTING = 1\n APPLYING = 2\n CANCELLING = 3\n\n\nresolvable_completion_items = [] # type: List[Any]\n\n\ndef find_completion_item(label: str) -> 'Optional[Any]':\n matches = list(filter(lambda i: i.get(\"label\") == label, resolvable_completion_items))\n return matches[0] if matches else None\n\n\nclass CompletionContext(object):\n\n def __init__(self, begin):\n self.begin = begin # type: Optional[int]\n self.end = None # type: Optional[int]\n self.region = None # type: Optional[sublime.Region]\n self.committing = False\n\n def committed_at(self, end):\n self.end = end\n self.region = sublime.Region(self.begin, self.end)\n self.committing = False\n\n\ncurrent_completion = None # type: Optional[CompletionContext]\n\n\ndef has_resolvable_completions(view):\n client = client_for_view(view)\n if client:\n completionProvider = client.get_capability(\n 'completionProvider')\n if completionProvider:\n if completionProvider.get('resolveProvider', False):\n return True\n return False\n\n\nclass CompletionSnippetHandler(sublime_plugin.EventListener):\n\n def on_query_completions(self, view, prefix, locations):\n global current_completion\n if settings.resolve_completion_for_snippets and has_resolvable_completions(view):\n current_completion = CompletionContext(view.sel()[0].begin())\n\n def on_text_command(self, view, command_name, args):\n if settings.resolve_completion_for_snippets and current_completion:\n current_completion.committing = command_name in ('commit_completion', 'insert_best_completion')\n\n def on_modified(self, view):\n global current_completion\n\n if settings.resolve_completion_for_snippets and view.file_name():\n if current_completion and current_completion.committing:\n current_completion.committed_at(view.sel()[0].end())\n inserted = view.substr(current_completion.region)\n item = find_completion_item(inserted)\n if item:\n self.resolve_completion(item, view)\n else:\n current_completion = None\n\n def resolve_completion(self, item, view):\n client = client_for_view(view)\n if not client:\n return\n\n client.send_request(\n Request.resolveCompletionItem(item),\n lambda response: self.handle_resolve_response(response, view))\n\n def handle_resolve_response(self, response, view):\n # replace inserted text if a snippet was returned.\n if current_completion and response.get('insertTextFormat') == 2: # snippet\n insertText = response.get('insertText')\n try:\n sel = view.sel()\n sel.clear()\n sel.add(current_completion.region)\n view.run_command(\"insert_snippet\", {\"contents\": insertText})\n except Exception as err:\n exception_log(\"Error inserting snippet: \" + insertText, err)\n\n\nclass CompletionHandler(sublime_plugin.ViewEventListener):\n def __init__(self, view):\n self.view = view\n self.initialized = False\n self.enabled = False\n self.trigger_chars = [] # type: List[str]\n self.resolve = False\n self.resolve_details = [] # type: List[Tuple[str, str]]\n self.state = CompletionState.IDLE\n self.completions = [] # type: List[Any]\n self.next_request = None # type: Optional[Tuple[str, List[int]]]\n self.last_prefix = \"\"\n self.last_location = 0\n\n @classmethod\n def is_applicable(cls, settings):\n syntax = settings.get('syntax')\n if syntax is not None:\n return is_supported_syntax(syntax)\n else:\n return False\n\n def initialize(self):\n self.initialized = True\n client = client_for_view(self.view)\n if client:\n 
completionProvider = client.get_capability(\n 'completionProvider')\n if completionProvider:\n self.enabled = True\n self.trigger_chars = completionProvider.get(\n 'triggerCharacters') or []\n self.has_resolve_provider = completionProvider.get('resolveProvider', False)\n\n def is_after_trigger_character(self, location):\n if location > 0:\n prev_char = self.view.substr(location - 1)\n return prev_char in self.trigger_chars\n\n def is_same_completion(self, prefix, locations):\n # completion requests from the same location with the same prefix are cached.\n current_start = locations[0] - len(prefix)\n last_start = self.last_location - len(self.last_prefix)\n return prefix.startswith(self.last_prefix) and current_start == last_start\n\n def on_modified(self):\n # hide completion when backspacing past last completion.\n if self.view.sel()[0].begin() < self.last_location:\n self.last_location = 0\n self.view.run_command(\"hide_auto_complete\")\n\n def on_query_completions(self, prefix, locations):\n if self.view.match_selector(locations[0], NO_COMPLETION_SCOPES):\n return (\n [],\n sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS\n )\n\n if not self.initialized:\n self.initialize()\n\n if self.enabled:\n reuse_completion = self.is_same_completion(prefix, locations)\n if self.state == CompletionState.IDLE:\n if not reuse_completion:\n self.last_prefix = prefix\n self.last_location = locations[0]\n self.do_request(prefix, locations)\n self.completions = []\n\n elif self.state in (CompletionState.REQUESTING, CompletionState.CANCELLING):\n self.next_request = (prefix, locations)\n self.state = CompletionState.CANCELLING\n\n elif self.state == CompletionState.APPLYING:\n self.state = CompletionState.IDLE\n\n return (\n self.completions,\n 0 if not settings.only_show_lsp_completions\n else sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS\n )\n\n def do_request(self, prefix: str, locations: 'List[int]'):\n self.next_request = None\n view = self.view\n\n # don't store client so we can handle restarts\n client = client_for_view(view)\n if not client:\n return\n\n if settings.complete_all_chars or self.is_after_trigger_character(locations[0]):\n purge_did_change(view.buffer_id())\n document_position = get_document_position(view, locations[0])\n if document_position:\n client.send_request(\n Request.complete(document_position),\n self.handle_response,\n self.handle_error)\n self.state = CompletionState.REQUESTING\n\n def format_completion(self, item: dict) -> 'Tuple[str, str]':\n # Sublime handles snippets automatically, so we don't have to care about insertTextFormat.\n label = item[\"label\"]\n # choose hint based on availability and user preference\n hint = None\n if settings.completion_hint_type == \"auto\":\n hint = item.get(\"detail\")\n if not hint:\n kind = item.get(\"kind\")\n if kind:\n hint = completion_item_kind_names[kind]\n elif settings.completion_hint_type == \"detail\":\n hint = item.get(\"detail\")\n elif settings.completion_hint_type == \"kind\":\n kind = item.get(\"kind\")\n if kind:\n hint = completion_item_kind_names[kind]\n # label is an alternative for insertText if insertText not provided\n insert_text = item.get(\"insertText\") or label\n if len(insert_text) > 0 and insert_text[0] == '$': # sublime needs leading '$' escaped.\n insert_text = '\\$' + insert_text[1:]\n # only return label with a hint if available\n return \"\\t \".join((label, hint)) if hint else label, insert_text\n\n def handle_response(self, response: dict):\n global 
resolvable_completion_items\n\n if self.state == CompletionState.REQUESTING:\n items = response[\"items\"] if isinstance(response,\n dict) else response\n if len(items) > 1 and \"sortText\" in items[0]:\n # If the first item has a sortText key, assume all of them have a sortText key.\n items = sorted(items, key=lambda item: item[\"sortText\"])\n self.completions = list(self.format_completion(item) for item in items)\n\n if self.has_resolve_provider:\n resolvable_completion_items = items\n\n # if insert_best_completion was just ran, undo it before presenting new completions.\n prev_char = self.view.substr(self.view.sel()[0].begin() - 1)\n if prev_char.isspace():\n self.view.run_command(\"undo\")\n\n self.state = CompletionState.APPLYING\n self.view.run_command(\"hide_auto_complete\")\n self.run_auto_complete()\n elif self.state == CompletionState.CANCELLING:\n if self.next_request:\n prefix, locations = self.next_request\n self.do_request(prefix, locations)\n else:\n debug('Got unexpected response while in state {}'.format(self.state))\n\n def handle_error(self, error: dict):\n sublime.status_message('Completion error: ' + str(error.get('message')))\n self.state = CompletionState.IDLE\n\n def run_auto_complete(self):\n self.view.run_command(\n \"auto_complete\", {\n 'disable_auto_insert': True,\n 'api_completions_only': settings.only_show_lsp_completions,\n 'next_completion_if_showing': False\n })\n", "path": "plugin/completion.py"}]}
| 3,224 | 172 |
gh_patches_debug_10776 | rasdani/github-patches | git_diff | marshmallow-code__webargs-123 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
ValidationError causes HTTP/500 in Flask instead of HTTP/422 in Python 2
Here are the symptoms: http://stackoverflow.com/questions/37321835/flask-error-handler-not-able-to-handle-assertion-error/39624893#39624893
Here is the case where I personally hit the issue with Python 2.x and PyPy in Python 2.x mode only: https://travis-ci.org/frol/flask-restplus-server-example/builds/161685419
After troubleshooting, I discovered that this issue stems from the `FlaskParser.handle_error` which calls `abort` function with `exc=` argument, which then [is set as "data"](https://github.com/sloria/webargs/blob/dev/webargs/flaskparser.py#L38) on the raised `HTTPException`, which in its turn is tried to be json-dumped and fails with `TypeError`, and now the new exception is in `sys.exc_info()`, which causes Flask to [assert here](https://github.com/pallets/flask/blob/01081dbe6cdfa3fc43d8e1fff708d4ed95e1be7e/flask/app.py#L1515).
Searching git history for the introduced `exc=` in `FlaskParser`, I found it here: https://github.com/sloria/webargs/commit/6f8088c7c85c05d2891e81ee97d38f2ae801159d
Is there a reason for `exc=` to be passed? How can we fix this?
Python 3 seems to have changed something with `sys.exc_info()` behaviour since even after the `TypeError` caused by `json.dumps()`, `sys.exc_info()` still reports `HTTPException` while in Python 2 it returns `TypeError`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `webargs/flaskparser.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Flask request argument parsing module.
3
4 Example: ::
5
6 from flask import Flask
7
8 from webargs import fields
9 from webargs.flaskparser import use_args
10
11 app = Flask(__name__)
12
13 hello_args = {
14 'name': fields.Str(required=True)
15 }
16
17 @app.route('/')
18 @use_args(hello_args)
19 def index(args):
20 return 'Hello ' + args['name']
21 """
22 import flask
23 from werkzeug.exceptions import HTTPException
24
25 from webargs import core
26
27
28 def abort(http_status_code, **kwargs):
29 """Raise a HTTPException for the given http_status_code. Attach any keyword
30 arguments to the exception for later processing.
31
32 From Flask-Restful. See NOTICE file for license information.
33 """
34 try:
35 flask.abort(http_status_code)
36 except HTTPException as err:
37 if len(kwargs):
38 err.data = kwargs
39 raise err
40
41
42 def is_json_request(req):
43 return core.is_json(req.mimetype)
44
45
46 class FlaskParser(core.Parser):
47 """Flask request argument parser."""
48
49 __location_map__ = dict(
50 view_args='parse_view_args',
51 **core.Parser.__location_map__
52 )
53
54 def parse_view_args(self, req, name, field):
55 """Pull a value from the request's ``view_args``."""
56 return core.get_value(req.view_args, name, field)
57
58 def parse_json(self, req, name, field):
59 """Pull a json value from the request."""
60 # Pass force in order to handle vendor media types,
61 # e.g. applications/vnd.json+api
62 # this should be unnecessary in Flask 1.0
63 force = is_json_request(req)
64 # Fail silently so that the webargs parser can handle the error
65 if hasattr(req, 'get_json'):
66 # Flask >= 0.10.x
67 json_data = req.get_json(force=force, silent=True)
68 else:
69 # Flask <= 0.9.x
70 json_data = req.json
71 if json_data is None:
72 return core.missing
73 return core.get_value(json_data, name, field, allow_many_nested=True)
74
75 def parse_querystring(self, req, name, field):
76 """Pull a querystring value from the request."""
77 return core.get_value(req.args, name, field)
78
79 def parse_form(self, req, name, field):
80 """Pull a form value from the request."""
81 try:
82 return core.get_value(req.form, name, field)
83 except AttributeError:
84 pass
85 return core.missing
86
87 def parse_headers(self, req, name, field):
88 """Pull a value from the header data."""
89 return core.get_value(req.headers, name, field)
90
91 def parse_cookies(self, req, name, field):
92 """Pull a value from the cookiejar."""
93 return core.get_value(req.cookies, name, field)
94
95 def parse_files(self, req, name, field):
96 """Pull a file from the request."""
97 return core.get_value(req.files, name, field)
98
99 def handle_error(self, error):
100 """Handles errors during parsing. Aborts the current HTTP request and
101 responds with a 422 error.
102 """
103 status_code = getattr(error, 'status_code', self.DEFAULT_VALIDATION_STATUS)
104 abort(status_code, messages=error.messages, exc=error)
105
106 def get_default_request(self):
107 """Override to use Flask's thread-local request objec by default"""
108 return flask.request
109
110 parser = FlaskParser()
111 use_args = parser.use_args
112 use_kwargs = parser.use_kwargs
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/webargs/flaskparser.py b/webargs/flaskparser.py
--- a/webargs/flaskparser.py
+++ b/webargs/flaskparser.py
@@ -25,7 +25,7 @@
from webargs import core
-def abort(http_status_code, **kwargs):
+def abort(http_status_code, exc=None, **kwargs):
"""Raise a HTTPException for the given http_status_code. Attach any keyword
arguments to the exception for later processing.
@@ -36,6 +36,7 @@
except HTTPException as err:
if len(kwargs):
err.data = kwargs
+ err.exc = exc
raise err
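The patch works because only `err.data` is later serialized into the error response; keeping the exception object out of it breaks the `TypeError` chain described in the issue. A self-contained sketch of that chain, using assumed names rather than code from the repository:

```python
import json


class ValidationError(Exception):
    """Stand-in for the webargs/marshmallow validation error."""


# What err.data held before the patch: messages plus the raw exception object.
payload = {
    "messages": {"name": ["Missing data for required field."]},
    "exc": ValidationError("bad input"),
}

try:
    json.dumps(payload)           # the 422 error handler tries to serialize err.data
except TypeError as serialization_error:
    # ValidationError is not JSON serializable. On Python 2, sys.exc_info()
    # then reports this TypeError instead of the original HTTPException(422),
    # which is what tripped Flask's assertion and produced an HTTP 500.
    print(serialization_error)

# After the patch the exception lives on err.exc, and err.data carries only
# the JSON-serializable messages, so the intended 422 response can be built.
```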
|
{"golden_diff": "diff --git a/webargs/flaskparser.py b/webargs/flaskparser.py\n--- a/webargs/flaskparser.py\n+++ b/webargs/flaskparser.py\n@@ -25,7 +25,7 @@\n from webargs import core\n \n \n-def abort(http_status_code, **kwargs):\n+def abort(http_status_code, exc=None, **kwargs):\n \"\"\"Raise a HTTPException for the given http_status_code. Attach any keyword\n arguments to the exception for later processing.\n \n@@ -36,6 +36,7 @@\n except HTTPException as err:\n if len(kwargs):\n err.data = kwargs\n+ err.exc = exc\n raise err\n", "issue": "ValidationError causes HTTP/500 in Flask instead of HTTP/422 in Python 2\nHere are the symptoms: http://stackoverflow.com/questions/37321835/flask-error-handler-not-able-to-handle-assertion-error/39624893#39624893\n\nHere is the case where I personally hit the issue with Python 2.x and PyPy in Python 2.x mode only: https://travis-ci.org/frol/flask-restplus-server-example/builds/161685419\n\nAfter troubleshooting, I discovered that this issue stems from the `FlaskParser.handle_error` which calls `abort` function with `exc=` argument, which then [is set as \"data\"](https://github.com/sloria/webargs/blob/dev/webargs/flaskparser.py#L38) on the raised `HTTPException`, which in its turn is tried to be json-dumped and fails with `TypeError`, and now the new exception is in `sys.exc_info()`, which causes Flask to [assert here](https://github.com/pallets/flask/blob/01081dbe6cdfa3fc43d8e1fff708d4ed95e1be7e/flask/app.py#L1515).\n\nSearching git history for the introduced `exc=` in `FlaskParser`, I found it here: https://github.com/sloria/webargs/commit/6f8088c7c85c05d2891e81ee97d38f2ae801159d\n\nIs there a reason for `exc=` to be passed? How can we fix this?\n\nPython 3 seems to have changed something with `sys.exc_info()` behaviour since even after the `TypeError` caused by `json.dumps()`, `sys.exc_info()` still reports `HTTPException` while in Python 2 it returns `TypeError`.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Flask request argument parsing module.\n\nExample: ::\n\n from flask import Flask\n\n from webargs import fields\n from webargs.flaskparser import use_args\n\n app = Flask(__name__)\n\n hello_args = {\n 'name': fields.Str(required=True)\n }\n\n @app.route('/')\n @use_args(hello_args)\n def index(args):\n return 'Hello ' + args['name']\n\"\"\"\nimport flask\nfrom werkzeug.exceptions import HTTPException\n\nfrom webargs import core\n\n\ndef abort(http_status_code, **kwargs):\n \"\"\"Raise a HTTPException for the given http_status_code. Attach any keyword\n arguments to the exception for later processing.\n\n From Flask-Restful. See NOTICE file for license information.\n \"\"\"\n try:\n flask.abort(http_status_code)\n except HTTPException as err:\n if len(kwargs):\n err.data = kwargs\n raise err\n\n\ndef is_json_request(req):\n return core.is_json(req.mimetype)\n\n\nclass FlaskParser(core.Parser):\n \"\"\"Flask request argument parser.\"\"\"\n\n __location_map__ = dict(\n view_args='parse_view_args',\n **core.Parser.__location_map__\n )\n\n def parse_view_args(self, req, name, field):\n \"\"\"Pull a value from the request's ``view_args``.\"\"\"\n return core.get_value(req.view_args, name, field)\n\n def parse_json(self, req, name, field):\n \"\"\"Pull a json value from the request.\"\"\"\n # Pass force in order to handle vendor media types,\n # e.g. 
applications/vnd.json+api\n # this should be unnecessary in Flask 1.0\n force = is_json_request(req)\n # Fail silently so that the webargs parser can handle the error\n if hasattr(req, 'get_json'):\n # Flask >= 0.10.x\n json_data = req.get_json(force=force, silent=True)\n else:\n # Flask <= 0.9.x\n json_data = req.json\n if json_data is None:\n return core.missing\n return core.get_value(json_data, name, field, allow_many_nested=True)\n\n def parse_querystring(self, req, name, field):\n \"\"\"Pull a querystring value from the request.\"\"\"\n return core.get_value(req.args, name, field)\n\n def parse_form(self, req, name, field):\n \"\"\"Pull a form value from the request.\"\"\"\n try:\n return core.get_value(req.form, name, field)\n except AttributeError:\n pass\n return core.missing\n\n def parse_headers(self, req, name, field):\n \"\"\"Pull a value from the header data.\"\"\"\n return core.get_value(req.headers, name, field)\n\n def parse_cookies(self, req, name, field):\n \"\"\"Pull a value from the cookiejar.\"\"\"\n return core.get_value(req.cookies, name, field)\n\n def parse_files(self, req, name, field):\n \"\"\"Pull a file from the request.\"\"\"\n return core.get_value(req.files, name, field)\n\n def handle_error(self, error):\n \"\"\"Handles errors during parsing. Aborts the current HTTP request and\n responds with a 422 error.\n \"\"\"\n status_code = getattr(error, 'status_code', self.DEFAULT_VALIDATION_STATUS)\n abort(status_code, messages=error.messages, exc=error)\n\n def get_default_request(self):\n \"\"\"Override to use Flask's thread-local request objec by default\"\"\"\n return flask.request\n\nparser = FlaskParser()\nuse_args = parser.use_args\nuse_kwargs = parser.use_kwargs\n", "path": "webargs/flaskparser.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Flask request argument parsing module.\n\nExample: ::\n\n from flask import Flask\n\n from webargs import fields\n from webargs.flaskparser import use_args\n\n app = Flask(__name__)\n\n hello_args = {\n 'name': fields.Str(required=True)\n }\n\n @app.route('/')\n @use_args(hello_args)\n def index(args):\n return 'Hello ' + args['name']\n\"\"\"\nimport flask\nfrom werkzeug.exceptions import HTTPException\n\nfrom webargs import core\n\n\ndef abort(http_status_code, exc=None, **kwargs):\n \"\"\"Raise a HTTPException for the given http_status_code. Attach any keyword\n arguments to the exception for later processing.\n\n From Flask-Restful. See NOTICE file for license information.\n \"\"\"\n try:\n flask.abort(http_status_code)\n except HTTPException as err:\n if len(kwargs):\n err.data = kwargs\n err.exc = exc\n raise err\n\n\ndef is_json_request(req):\n return core.is_json(req.mimetype)\n\n\nclass FlaskParser(core.Parser):\n \"\"\"Flask request argument parser.\"\"\"\n\n __location_map__ = dict(\n view_args='parse_view_args',\n **core.Parser.__location_map__\n )\n\n def parse_view_args(self, req, name, field):\n \"\"\"Pull a value from the request's ``view_args``.\"\"\"\n return core.get_value(req.view_args, name, field)\n\n def parse_json(self, req, name, field):\n \"\"\"Pull a json value from the request.\"\"\"\n # Pass force in order to handle vendor media types,\n # e.g. 
applications/vnd.json+api\n # this should be unnecessary in Flask 1.0\n force = is_json_request(req)\n # Fail silently so that the webargs parser can handle the error\n if hasattr(req, 'get_json'):\n # Flask >= 0.10.x\n json_data = req.get_json(force=force, silent=True)\n else:\n # Flask <= 0.9.x\n json_data = req.json\n if json_data is None:\n return core.missing\n return core.get_value(json_data, name, field, allow_many_nested=True)\n\n def parse_querystring(self, req, name, field):\n \"\"\"Pull a querystring value from the request.\"\"\"\n return core.get_value(req.args, name, field)\n\n def parse_form(self, req, name, field):\n \"\"\"Pull a form value from the request.\"\"\"\n try:\n return core.get_value(req.form, name, field)\n except AttributeError:\n pass\n return core.missing\n\n def parse_headers(self, req, name, field):\n \"\"\"Pull a value from the header data.\"\"\"\n return core.get_value(req.headers, name, field)\n\n def parse_cookies(self, req, name, field):\n \"\"\"Pull a value from the cookiejar.\"\"\"\n return core.get_value(req.cookies, name, field)\n\n def parse_files(self, req, name, field):\n \"\"\"Pull a file from the request.\"\"\"\n return core.get_value(req.files, name, field)\n\n def handle_error(self, error):\n \"\"\"Handles errors during parsing. Aborts the current HTTP request and\n responds with a 422 error.\n \"\"\"\n status_code = getattr(error, 'status_code', self.DEFAULT_VALIDATION_STATUS)\n abort(status_code, messages=error.messages, exc=error)\n\n def get_default_request(self):\n \"\"\"Override to use Flask's thread-local request objec by default\"\"\"\n return flask.request\n\nparser = FlaskParser()\nuse_args = parser.use_args\nuse_kwargs = parser.use_kwargs\n", "path": "webargs/flaskparser.py"}]}
| 1,694 | 145 |
gh_patches_debug_4694 | rasdani/github-patches | git_diff | scikit-hep__pyhf-206 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add link back to GitHub on webpage
At the moment there is no link on [the docs webpage](http://diana-hep.org/pyhf/) back to the pyhf GitHub page. It would be nice to add one to make navigation between the two easier.
Perhaps a ["Fork me on GitHub" ribbon](https://blog.github.com/2008-12-19-github-ribbons/)?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # pyhf documentation build configuration file, created by
4 # sphinx-quickstart on Fri Feb 9 11:58:49 2018.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 # If extensions (or modules to document with autodoc) are in another directory,
16 # add these directories to sys.path here. If the directory is relative to the
17 # documentation root, use os.path.abspath to make it absolute, like shown here.
18 #
19 import os
20 import sys
21 sys.path.insert(0, os.path.abspath('..'))
22
23 # -- General configuration ------------------------------------------------
24
25 # If your documentation needs a minimal Sphinx version, state it here.
26 #
27 # needs_sphinx = '1.0'
28
29 # Add any Sphinx extension module names here, as strings. They can be
30 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
31 # ones.
32 extensions = [
33 'sphinx.ext.autodoc',
34 'sphinx.ext.autosummary',
35 'sphinx.ext.coverage',
36 'sphinx.ext.mathjax',
37 'sphinx.ext.ifconfig',
38 'sphinx.ext.viewcode',
39 'sphinx.ext.githubpages',
40 'sphinxcontrib.bibtex',
41 'sphinxcontrib.napoleon',
42 'nbsphinx',
43 ]
44
45 # Generate the API documentation when building
46 autosummary_generate = True
47 numpydoc_show_class_members = False
48
49 # Add any paths that contain templates here, relative to this directory.
50 templates_path = ['_templates']
51
52 # The suffix(es) of source filenames.
53 # You can specify multiple suffix as a list of string:
54 #
55 # source_suffix = ['.rst', '.md']
56 source_suffix = '.rst'
57
58 # The encoding of source files.
59 #
60 # source_encoding = 'utf-8-sig'
61
62 # The master toctree document.
63 master_doc = 'index'
64
65 # General information about the project.
66 project = u'pyhf'
67 copyright = u'2018, Lukas Heinrich, Matthew Feickert, Giordon Stark'
68 author = u'Lukas Heinrich, Matthew Feickert, Giordon Stark'
69
70 # The version info for the project you're documenting, acts as replacement for
71 # |version| and |release|, also used in various other places throughout the
72 # built documents.
73 #
74 # The short X.Y version.
75 version = u'0.0.4'
76 # The full version, including alpha/beta/rc tags.
77 release = u'0.0.4'
78
79 # The language for content autogenerated by Sphinx. Refer to documentation
80 # for a list of supported languages.
81 #
82 # This is also used if you do content translation via gettext catalogs.
83 # Usually you set "language" from the command line for these cases.
84 language = None
85
86 # There are two options for replacing |today|: either, you set today to some
87 # non-false value, then it is used:
88 #
89 # today = ''
90 #
91 # Else, today_fmt is used as the format for a strftime call.
92 #
93 # today_fmt = '%B %d, %Y'
94
95 # List of patterns, relative to source directory, that match files and
96 # directories to ignore when looking for source files.
97 # This patterns also effect to html_static_path and html_extra_path
98 exclude_patterns = ['_build', '**.ipynb_checkpoints', 'examples/experiments']
99
100 # The reST default role (used for this markup: `text`) to use for all
101 # documents.
102 #
103 # default_role = None
104
105 # If true, '()' will be appended to :func: etc. cross-reference text.
106 #
107 # add_function_parentheses = True
108
109 # If true, the current module name will be prepended to all description
110 # unit titles (such as .. function::).
111 #
112 # add_module_names = True
113
114 # If true, sectionauthor and moduleauthor directives will be shown in the
115 # output. They are ignored by default.
116 #
117 # show_authors = False
118
119 # The name of the Pygments (syntax highlighting) style to use.
120 pygments_style = 'sphinx'
121
122 # A list of ignored prefixes for module index sorting.
123 # modindex_common_prefix = []
124
125 # If true, keep warnings as "system message" paragraphs in the built documents.
126 # keep_warnings = False
127
128 # If true, `todo` and `todoList` produce output, else they produce nothing.
129 todo_include_todos = False
130
131
132 # -- Options for HTML output ----------------------------------------------
133
134 # The theme to use for HTML and HTML Help pages. See the documentation for
135 # a list of builtin themes.
136 #
137 html_theme = 'sphinx_rtd_theme'
138
139 # Theme options are theme-specific and customize the look and feel of a theme
140 # further. For a list of options available for each theme, see the
141 # documentation.
142 #
143 html_theme_options = {}
144
145 # Add any paths that contain custom themes here, relative to this directory.
146 html_theme_path = []
147
148 # The name for this set of Sphinx documents.
149 # "<project> v<release> documentation" by default.
150 #
151 # html_title = u'pyhf v0.0.4'
152
153 # A shorter title for the navigation bar. Default is the same as html_title.
154 #
155 # html_short_title = None
156
157 # The name of an image file (relative to this directory) to place at the top
158 # of the sidebar.
159 #
160 # html_logo = None
161
162 # The name of an image file (relative to this directory) to use as a favicon of
163 # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
164 # pixels large.
165 #
166 # html_favicon = None
167
168 # Add any paths that contain custom static files (such as style sheets) here,
169 # relative to this directory. They are copied after the builtin static files,
170 # so a file named "default.css" will overwrite the builtin "default.css".
171 html_static_path = ['_static']
172
173 # Add any extra paths that contain custom files (such as robots.txt or
174 # .htaccess) here, relative to this directory. These files are copied
175 # directly to the root of the documentation.
176 #
177 # html_extra_path = []
178
179 # If not None, a 'Last updated on:' timestamp is inserted at every page
180 # bottom, using the given strftime format.
181 # The empty string is equivalent to '%b %d, %Y'.
182 #
183 # html_last_updated_fmt = None
184
185 # If true, SmartyPants will be used to convert quotes and dashes to
186 # typographically correct entities.
187 #
188 # html_use_smartypants = True
189
190 # Custom sidebar templates, maps document names to template names.
191 #
192 # html_sidebars = {}
193
194 # Additional templates that should be rendered to pages, maps page names to
195 # template names.
196 #
197 # html_additional_pages = {}
198
199 # If false, no module index is generated.
200 #
201 # html_domain_indices = True
202
203 # If false, no index is generated.
204 #
205 # html_use_index = True
206
207 # If true, the index is split into individual pages for each letter.
208 #
209 # html_split_index = False
210
211 # If true, links to the reST sources are added to the pages.
212 #
213 # html_show_sourcelink = True
214
215 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
216 #
217 # html_show_sphinx = True
218
219 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
220 #
221 # html_show_copyright = True
222
223 # If true, an OpenSearch description file will be output, and all pages will
224 # contain a <link> tag referring to it. The value of this option must be the
225 # base URL from which the finished HTML is served.
226 #
227 # html_use_opensearch = ''
228
229 # This is the file name suffix for HTML files (e.g. ".xhtml").
230 # html_file_suffix = None
231
232 # Language to be used for generating the HTML full-text search index.
233 # Sphinx supports the following languages:
234 # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
235 # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'
236 #
237 # html_search_language = 'en'
238
239 # A dictionary with options for the search language support, empty by default.
240 # 'ja' uses this config value.
241 # 'zh' user can custom change `jieba` dictionary path.
242 #
243 # html_search_options = {'type': 'default'}
244
245 # The name of a javascript file (relative to the configuration directory) that
246 # implements a search results scorer. If empty, the default will be used.
247 #
248 # html_search_scorer = 'scorer.js'
249
250 # Output file base name for HTML help builder.
251 htmlhelp_basename = 'pyhfdoc'
252
253 # -- Options for LaTeX output ---------------------------------------------
254
255 latex_elements = {
256 # The paper size ('letterpaper' or 'a4paper').
257 #
258 # 'papersize': 'letterpaper',
259
260 # The font size ('10pt', '11pt' or '12pt').
261 #
262 # 'pointsize': '10pt',
263
264 # Additional stuff for the LaTeX preamble.
265 #
266 # 'preamble': '',
267
268 # Latex figure (float) alignment
269 #
270 # 'figure_align': 'htbp',
271 }
272
273 # Grouping the document tree into LaTeX files. List of tuples
274 # (source start file, target name, title,
275 # author, documentclass [howto, manual, or own class]).
276 latex_documents = [
277 (master_doc, 'pyhf.tex', u'pyhf Documentation',
278 u'Lukas Heinrich, Matthew Feickert', 'manual'),
279 ]
280
281 # The name of an image file (relative to this directory) to place at the top of
282 # the title page.
283 #
284 # latex_logo = None
285
286 # For "manual" documents, if this is true, then toplevel headings are parts,
287 # not chapters.
288 #
289 # latex_use_parts = False
290
291 # If true, show page references after internal links.
292 #
293 # latex_show_pagerefs = False
294
295 # If true, show URL addresses after external links.
296 #
297 # latex_show_urls = False
298
299 # Documents to append as an appendix to all manuals.
300 #
301 # latex_appendices = []
302
303 # It false, will not define \strong, \code, itleref, \crossref ... but only
304 # \sphinxstrong, ..., \sphinxtitleref, ... To help avoid clash with user added
305 # packages.
306 #
307 # latex_keep_old_macro_names = True
308
309 # If false, no module index is generated.
310 #
311 # latex_domain_indices = True
312
313
314 # -- Options for manual page output ---------------------------------------
315
316 # One entry per manual page. List of tuples
317 # (source start file, name, description, authors, manual section).
318 man_pages = [
319 (master_doc, 'pyhf', u'pyhf Documentation',
320 [author], 1)
321 ]
322
323 # If true, show URL addresses after external links.
324 #
325 # man_show_urls = False
326
327
328 # -- Options for Texinfo output -------------------------------------------
329
330 # Grouping the document tree into Texinfo files. List of tuples
331 # (source start file, target name, title, author,
332 # dir menu entry, description, category)
333 texinfo_documents = [
334 (master_doc, 'pyhf', u'pyhf Documentation',
335 author, 'pyhf', 'One line description of project.',
336 'Miscellaneous'),
337 ]
338
339 # Documents to append as an appendix to all manuals.
340 #
341 # texinfo_appendices = []
342
343 # If false, no module index is generated.
344 #
345 # texinfo_domain_indices = True
346
347 # How to display URL addresses: 'footnote', 'no', or 'inline'.
348 #
349 # texinfo_show_urls = 'footnote'
350
351 # If true, do not generate a @detailmenu in the "Top" node's menu.
352 #
353 # texinfo_no_detailmenu = False
354
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -20,6 +20,9 @@
import sys
sys.path.insert(0, os.path.abspath('..'))
+def setup(app):
+ app.add_stylesheet('https://cdnjs.cloudflare.com/ajax/libs/github-fork-ribbon-css/0.2.2/gh-fork-ribbon.min.css')
+
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -20,6 +20,9 @@\n import sys\n sys.path.insert(0, os.path.abspath('..'))\n \n+def setup(app):\n+ app.add_stylesheet('https://cdnjs.cloudflare.com/ajax/libs/github-fork-ribbon-css/0.2.2/gh-fork-ribbon.min.css')\n+\n # -- General configuration ------------------------------------------------\n \n # If your documentation needs a minimal Sphinx version, state it here.\n", "issue": "Add link back to GitHub on webpage\nAt the moment there is no link on [the docs webpage](http://diana-hep.org/pyhf/) back to the pyhf GitHub page. It would be nice to add one to make navigation between the two easier.\r\n\r\nPerhaps a [\"Fork me on GitHub\" ribbon](https://blog.github.com/2008-12-19-github-ribbons/)?\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# pyhf documentation build configuration file, created by\n# sphinx-quickstart on Fri Feb 9 11:58:49 2018.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('..'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.ifconfig',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.githubpages',\n 'sphinxcontrib.bibtex',\n 'sphinxcontrib.napoleon',\n 'nbsphinx',\n]\n\n# Generate the API documentation when building\nautosummary_generate = True\nnumpydoc_show_class_members = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'pyhf'\ncopyright = u'2018, Lukas Heinrich, Matthew Feickert, Giordon Stark'\nauthor = u'Lukas Heinrich, Matthew Feickert, Giordon Stark'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = u'0.0.4'\n# The full version, including alpha/beta/rc tags.\nrelease = u'0.0.4'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#\n# today = ''\n#\n# Else, today_fmt is used as the format for a strftime call.\n#\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['_build', '**.ipynb_checkpoints', 'examples/experiments']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = []\n\n# The name for this set of Sphinx documents.\n# \"<project> v<release> documentation\" by default.\n#\n# html_title = u'pyhf v0.0.4'\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#\n# html_logo = None\n\n# The name of an image file (relative to this directory) to use as a favicon of\n# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#\n# html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n#\n# html_extra_path = []\n\n# If not None, a 'Last updated on:' timestamp is inserted at every page\n# bottom, using the given strftime format.\n# The empty string is equivalent to '%b %d, %Y'.\n#\n# html_last_updated_fmt = None\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n#\n# html_domain_indices = True\n\n# If false, no index is generated.\n#\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#\n# html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'\n#\n# html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# 'ja' uses this config value.\n# 'zh' user can custom change `jieba` dictionary path.\n#\n# html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n#\n# html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'pyhfdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'pyhf.tex', u'pyhf Documentation',\n u'Lukas Heinrich, Matthew Feickert', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n#\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#\n# latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#\n# latex_appendices = []\n\n# It false, will not define \\strong, \\code, \titleref, \\crossref ... but only\n# \\sphinxstrong, ..., \\sphinxtitleref, ... To help avoid clash with user added\n# packages.\n#\n# latex_keep_old_macro_names = True\n\n# If false, no module index is generated.\n#\n# latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'pyhf', u'pyhf Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n#\n# man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'pyhf', u'pyhf Documentation',\n author, 'pyhf', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n#\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#\n# texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#\n# texinfo_no_detailmenu = False\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# pyhf documentation build configuration file, created by\n# sphinx-quickstart on Fri Feb 9 11:58:49 2018.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('..'))\n\ndef setup(app):\n app.add_stylesheet('https://cdnjs.cloudflare.com/ajax/libs/github-fork-ribbon-css/0.2.2/gh-fork-ribbon.min.css')\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.ifconfig',\n 'sphinx.ext.viewcode',\n 'sphinx.ext.githubpages',\n 'sphinxcontrib.bibtex',\n 'sphinxcontrib.napoleon',\n 'nbsphinx',\n]\n\n# Generate the API documentation when building\nautosummary_generate = True\nnumpydoc_show_class_members = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'pyhf'\ncopyright = u'2018, Lukas Heinrich, Matthew Feickert, Giordon Stark'\nauthor = u'Lukas Heinrich, Matthew Feickert, Giordon Stark'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = u'0.0.4'\n# The full version, including alpha/beta/rc tags.\nrelease = u'0.0.4'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#\n# today = ''\n#\n# Else, today_fmt is used as the format for a strftime call.\n#\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['_build', '**.ipynb_checkpoints', 'examples/experiments']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = []\n\n# The name for this set of Sphinx documents.\n# \"<project> v<release> documentation\" by default.\n#\n# html_title = u'pyhf v0.0.4'\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#\n# html_logo = None\n\n# The name of an image file (relative to this directory) to use as a favicon of\n# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#\n# html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#\n# html_extra_path = []\n\n# If not None, a 'Last updated on:' timestamp is inserted at every page\n# bottom, using the given strftime format.\n# The empty string is equivalent to '%b %d, %Y'.\n#\n# html_last_updated_fmt = None\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n#\n# html_domain_indices = True\n\n# If false, no index is generated.\n#\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#\n# html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'\n#\n# html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# 'ja' uses this config value.\n# 'zh' user can custom change `jieba` dictionary path.\n#\n# html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. 
If empty, the default will be used.\n#\n# html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'pyhfdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'pyhf.tex', u'pyhf Documentation',\n u'Lukas Heinrich, Matthew Feickert', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n#\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#\n# latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#\n# latex_appendices = []\n\n# It false, will not define \\strong, \\code, \titleref, \\crossref ... but only\n# \\sphinxstrong, ..., \\sphinxtitleref, ... To help avoid clash with user added\n# packages.\n#\n# latex_keep_old_macro_names = True\n\n# If false, no module index is generated.\n#\n# latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'pyhf', u'pyhf Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n#\n# man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'pyhf', u'pyhf Documentation',\n author, 'pyhf', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n#\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#\n# texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#\n# texinfo_no_detailmenu = False\n", "path": "docs/conf.py"}]}
| 3,935 | 113 |
gh_patches_debug_37547
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-3679
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Replace 'backoff' dependency with a local implementation
The 'backoff' dependency can be replaced with a few lines of code so I propose we remove it.
**Is your feature request related to a problem?**
There have been issues with the backoff dependency in the past, in particular relating to the transition from v1 to v2 and we have code and tests to deal with this transition.
**Describe the solution you'd like**
Replace the dependency with a function.
**Describe alternatives you've considered**
Keeping the status quo would also work, but the dependency appears to be more trouble than it's worth.
**Additional context**
PR [here](https://github.com/open-telemetry/opentelemetry-python/pull/3679)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py`
Content:
```
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 import logging
17 from collections.abc import Sequence
18 from typing import Any, Mapping, Optional, List, Callable, TypeVar, Dict
19
20 import backoff
21
22 from opentelemetry.sdk.util.instrumentation import InstrumentationScope
23 from opentelemetry.proto.common.v1.common_pb2 import (
24 InstrumentationScope as PB2InstrumentationScope,
25 )
26 from opentelemetry.proto.resource.v1.resource_pb2 import (
27 Resource as PB2Resource,
28 )
29 from opentelemetry.proto.common.v1.common_pb2 import AnyValue as PB2AnyValue
30 from opentelemetry.proto.common.v1.common_pb2 import KeyValue as PB2KeyValue
31 from opentelemetry.proto.common.v1.common_pb2 import (
32 KeyValueList as PB2KeyValueList,
33 )
34 from opentelemetry.proto.common.v1.common_pb2 import (
35 ArrayValue as PB2ArrayValue,
36 )
37 from opentelemetry.sdk.trace import Resource
38 from opentelemetry.util.types import Attributes
39
40
41 _logger = logging.getLogger(__name__)
42
43 _TypingResourceT = TypeVar("_TypingResourceT")
44 _ResourceDataT = TypeVar("_ResourceDataT")
45
46
47 def _encode_instrumentation_scope(
48 instrumentation_scope: InstrumentationScope,
49 ) -> PB2InstrumentationScope:
50 if instrumentation_scope is None:
51 return PB2InstrumentationScope()
52 return PB2InstrumentationScope(
53 name=instrumentation_scope.name,
54 version=instrumentation_scope.version,
55 )
56
57
58 def _encode_resource(resource: Resource) -> PB2Resource:
59 return PB2Resource(attributes=_encode_attributes(resource.attributes))
60
61
62 def _encode_value(value: Any) -> PB2AnyValue:
63 if isinstance(value, bool):
64 return PB2AnyValue(bool_value=value)
65 if isinstance(value, str):
66 return PB2AnyValue(string_value=value)
67 if isinstance(value, int):
68 return PB2AnyValue(int_value=value)
69 if isinstance(value, float):
70 return PB2AnyValue(double_value=value)
71 if isinstance(value, Sequence):
72 return PB2AnyValue(
73 array_value=PB2ArrayValue(values=[_encode_value(v) for v in value])
74 )
75 elif isinstance(value, Mapping):
76 return PB2AnyValue(
77 kvlist_value=PB2KeyValueList(
78 values=[_encode_key_value(str(k), v) for k, v in value.items()]
79 )
80 )
81 raise Exception(f"Invalid type {type(value)} of value {value}")
82
83
84 def _encode_key_value(key: str, value: Any) -> PB2KeyValue:
85 return PB2KeyValue(key=key, value=_encode_value(value))
86
87
88 def _encode_span_id(span_id: int) -> bytes:
89 return span_id.to_bytes(length=8, byteorder="big", signed=False)
90
91
92 def _encode_trace_id(trace_id: int) -> bytes:
93 return trace_id.to_bytes(length=16, byteorder="big", signed=False)
94
95
96 def _encode_attributes(
97 attributes: Attributes,
98 ) -> Optional[List[PB2KeyValue]]:
99 if attributes:
100 pb2_attributes = []
101 for key, value in attributes.items():
102 try:
103 pb2_attributes.append(_encode_key_value(key, value))
104 except Exception as error: # pylint: disable=broad-except
105 _logger.exception(error)
106 else:
107 pb2_attributes = None
108 return pb2_attributes
109
110
111 def _get_resource_data(
112 sdk_resource_scope_data: Dict[Resource, _ResourceDataT],
113 resource_class: Callable[..., _TypingResourceT],
114 name: str,
115 ) -> List[_TypingResourceT]:
116
117 resource_data = []
118
119 for (
120 sdk_resource,
121 scope_data,
122 ) in sdk_resource_scope_data.items():
123 collector_resource = PB2Resource(
124 attributes=_encode_attributes(sdk_resource.attributes)
125 )
126 resource_data.append(
127 resource_class(
128 **{
129 "resource": collector_resource,
130 "scope_{}".format(name): scope_data.values(),
131 }
132 )
133 )
134 return resource_data
135
136
137 # Work around API change between backoff 1.x and 2.x. Since 2.0.0 the backoff
138 # wait generator API requires a first .send(None) before reading the backoff
139 # values from the generator.
140 _is_backoff_v2 = next(backoff.expo()) is None
141
142
143 def _create_exp_backoff_generator(*args, **kwargs):
144 gen = backoff.expo(*args, **kwargs)
145 if _is_backoff_v2:
146 gen.send(None)
147 return gen
148
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py
--- a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py
+++ b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py
@@ -15,9 +15,17 @@
import logging
from collections.abc import Sequence
-from typing import Any, Mapping, Optional, List, Callable, TypeVar, Dict
-
-import backoff
+from itertools import count
+from typing import (
+ Any,
+ Mapping,
+ Optional,
+ List,
+ Callable,
+ TypeVar,
+ Dict,
+ Iterator,
+)
from opentelemetry.sdk.util.instrumentation import InstrumentationScope
from opentelemetry.proto.common.v1.common_pb2 import (
@@ -37,7 +45,6 @@
from opentelemetry.sdk.trace import Resource
from opentelemetry.util.types import Attributes
-
_logger = logging.getLogger(__name__)
_TypingResourceT = TypeVar("_TypingResourceT")
@@ -113,7 +120,6 @@
resource_class: Callable[..., _TypingResourceT],
name: str,
) -> List[_TypingResourceT]:
-
resource_data = []
for (
@@ -134,14 +140,36 @@
return resource_data
-# Work around API change between backoff 1.x and 2.x. Since 2.0.0 the backoff
-# wait generator API requires a first .send(None) before reading the backoff
-# values from the generator.
-_is_backoff_v2 = next(backoff.expo()) is None
-
-
-def _create_exp_backoff_generator(*args, **kwargs):
- gen = backoff.expo(*args, **kwargs)
- if _is_backoff_v2:
- gen.send(None)
- return gen
+def _create_exp_backoff_generator(max_value: int = 0) -> Iterator[int]:
+ """
+ Generates an infinite sequence of exponential backoff values. The sequence starts
+ from 1 (2^0) and doubles each time (2^1, 2^2, 2^3, ...). If a max_value is specified
+ and non-zero, the generated values will not exceed this maximum, capping at max_value
+ instead of growing indefinitely.
+
+ Parameters:
+ - max_value (int, optional): The maximum value to yield. If 0 or not provided, the
+ sequence grows without bound.
+
+ Returns:
+ Iterator[int]: An iterator that yields the exponential backoff values, either uncapped or
+ capped at max_value.
+
+ Example:
+ ```
+ gen = _create_exp_backoff_generator(max_value=10)
+ for _ in range(5):
+ print(next(gen))
+ ```
+ This will print:
+ 1
+ 2
+ 4
+ 8
+ 10
+
+ Note: this functionality used to be handled by the 'backoff' package.
+ """
+ for i in count(0):
+ out = 2**i
+ yield min(out, max_value) if max_value else out
|
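A minimal sketch of how the replacement generator from the patch above might be consumed by an exporter retry loop — the `send_with_retries` wrapper, its parameters, and the 64-second cap are hypothetical illustrations, not names or values from the OpenTelemetry exporters:

```python
import time
from itertools import count
from typing import Callable, Iterator


def _create_exp_backoff_generator(max_value: int = 0) -> Iterator[int]:
    # Same behaviour as the helper introduced in the patch:
    # yields 1, 2, 4, 8, ..., capped at max_value when max_value is non-zero.
    for i in count(0):
        out = 2**i
        yield min(out, max_value) if max_value else out


def send_with_retries(send: Callable[[], bool], max_retries: int = 5) -> bool:
    # Hypothetical retry loop: sleep 1s, 2s, 4s, ... (capped at 64s) between attempts.
    delays = _create_exp_backoff_generator(max_value=64)
    for _attempt, delay in zip(range(max_retries), delays):
        if send():
            return True
        time.sleep(delay)
    return False
```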
{"golden_diff": "diff --git a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py\n--- a/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py\n+++ b/exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py\n@@ -15,9 +15,17 @@\n \n import logging\n from collections.abc import Sequence\n-from typing import Any, Mapping, Optional, List, Callable, TypeVar, Dict\n-\n-import backoff\n+from itertools import count\n+from typing import (\n+ Any,\n+ Mapping,\n+ Optional,\n+ List,\n+ Callable,\n+ TypeVar,\n+ Dict,\n+ Iterator,\n+)\n \n from opentelemetry.sdk.util.instrumentation import InstrumentationScope\n from opentelemetry.proto.common.v1.common_pb2 import (\n@@ -37,7 +45,6 @@\n from opentelemetry.sdk.trace import Resource\n from opentelemetry.util.types import Attributes\n \n-\n _logger = logging.getLogger(__name__)\n \n _TypingResourceT = TypeVar(\"_TypingResourceT\")\n@@ -113,7 +120,6 @@\n resource_class: Callable[..., _TypingResourceT],\n name: str,\n ) -> List[_TypingResourceT]:\n-\n resource_data = []\n \n for (\n@@ -134,14 +140,36 @@\n return resource_data\n \n \n-# Work around API change between backoff 1.x and 2.x. Since 2.0.0 the backoff\n-# wait generator API requires a first .send(None) before reading the backoff\n-# values from the generator.\n-_is_backoff_v2 = next(backoff.expo()) is None\n-\n-\n-def _create_exp_backoff_generator(*args, **kwargs):\n- gen = backoff.expo(*args, **kwargs)\n- if _is_backoff_v2:\n- gen.send(None)\n- return gen\n+def _create_exp_backoff_generator(max_value: int = 0) -> Iterator[int]:\n+ \"\"\"\n+ Generates an infinite sequence of exponential backoff values. The sequence starts\n+ from 1 (2^0) and doubles each time (2^1, 2^2, 2^3, ...). If a max_value is specified\n+ and non-zero, the generated values will not exceed this maximum, capping at max_value\n+ instead of growing indefinitely.\n+\n+ Parameters:\n+ - max_value (int, optional): The maximum value to yield. 
If 0 or not provided, the\n+ sequence grows without bound.\n+\n+ Returns:\n+ Iterator[int]: An iterator that yields the exponential backoff values, either uncapped or\n+ capped at max_value.\n+\n+ Example:\n+ ```\n+ gen = _create_exp_backoff_generator(max_value=10)\n+ for _ in range(5):\n+ print(next(gen))\n+ ```\n+ This will print:\n+ 1\n+ 2\n+ 4\n+ 8\n+ 10\n+\n+ Note: this functionality used to be handled by the 'backoff' package.\n+ \"\"\"\n+ for i in count(0):\n+ out = 2**i\n+ yield min(out, max_value) if max_value else out\n", "issue": "Replace 'backoff' dependency with a local implementation\nThe 'backoff' dependency can be replaced with a few lines of code so I propose we remove it.\r\n\r\n**Is your feature request related to a problem?**\r\nThere have been issues with the backoff dependency in the past, in particular relating to the transition from v1 to v2 and we have code and tests to deal with this transition.\r\n\r\n**Describe the solution you'd like**\r\nReplace the dependency with a function.\r\n\r\n**Describe alternatives you've considered**\r\nKeeping the status quo would also work, but the dependency appears to be more trouble than it's worth.\r\n\r\n**Additional context**\r\nPR [here](https://github.com/open-telemetry/opentelemetry-python/pull/3679)\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport logging\nfrom collections.abc import Sequence\nfrom typing import Any, Mapping, Optional, List, Callable, TypeVar, Dict\n\nimport backoff\n\nfrom opentelemetry.sdk.util.instrumentation import InstrumentationScope\nfrom opentelemetry.proto.common.v1.common_pb2 import (\n InstrumentationScope as PB2InstrumentationScope,\n)\nfrom opentelemetry.proto.resource.v1.resource_pb2 import (\n Resource as PB2Resource,\n)\nfrom opentelemetry.proto.common.v1.common_pb2 import AnyValue as PB2AnyValue\nfrom opentelemetry.proto.common.v1.common_pb2 import KeyValue as PB2KeyValue\nfrom opentelemetry.proto.common.v1.common_pb2 import (\n KeyValueList as PB2KeyValueList,\n)\nfrom opentelemetry.proto.common.v1.common_pb2 import (\n ArrayValue as PB2ArrayValue,\n)\nfrom opentelemetry.sdk.trace import Resource\nfrom opentelemetry.util.types import Attributes\n\n\n_logger = logging.getLogger(__name__)\n\n_TypingResourceT = TypeVar(\"_TypingResourceT\")\n_ResourceDataT = TypeVar(\"_ResourceDataT\")\n\n\ndef _encode_instrumentation_scope(\n instrumentation_scope: InstrumentationScope,\n) -> PB2InstrumentationScope:\n if instrumentation_scope is None:\n return PB2InstrumentationScope()\n return PB2InstrumentationScope(\n name=instrumentation_scope.name,\n version=instrumentation_scope.version,\n )\n\n\ndef _encode_resource(resource: Resource) -> PB2Resource:\n return PB2Resource(attributes=_encode_attributes(resource.attributes))\n\n\ndef _encode_value(value: Any) -> PB2AnyValue:\n if isinstance(value, bool):\n return PB2AnyValue(bool_value=value)\n if isinstance(value, str):\n return PB2AnyValue(string_value=value)\n if 
isinstance(value, int):\n return PB2AnyValue(int_value=value)\n if isinstance(value, float):\n return PB2AnyValue(double_value=value)\n if isinstance(value, Sequence):\n return PB2AnyValue(\n array_value=PB2ArrayValue(values=[_encode_value(v) for v in value])\n )\n elif isinstance(value, Mapping):\n return PB2AnyValue(\n kvlist_value=PB2KeyValueList(\n values=[_encode_key_value(str(k), v) for k, v in value.items()]\n )\n )\n raise Exception(f\"Invalid type {type(value)} of value {value}\")\n\n\ndef _encode_key_value(key: str, value: Any) -> PB2KeyValue:\n return PB2KeyValue(key=key, value=_encode_value(value))\n\n\ndef _encode_span_id(span_id: int) -> bytes:\n return span_id.to_bytes(length=8, byteorder=\"big\", signed=False)\n\n\ndef _encode_trace_id(trace_id: int) -> bytes:\n return trace_id.to_bytes(length=16, byteorder=\"big\", signed=False)\n\n\ndef _encode_attributes(\n attributes: Attributes,\n) -> Optional[List[PB2KeyValue]]:\n if attributes:\n pb2_attributes = []\n for key, value in attributes.items():\n try:\n pb2_attributes.append(_encode_key_value(key, value))\n except Exception as error: # pylint: disable=broad-except\n _logger.exception(error)\n else:\n pb2_attributes = None\n return pb2_attributes\n\n\ndef _get_resource_data(\n sdk_resource_scope_data: Dict[Resource, _ResourceDataT],\n resource_class: Callable[..., _TypingResourceT],\n name: str,\n) -> List[_TypingResourceT]:\n\n resource_data = []\n\n for (\n sdk_resource,\n scope_data,\n ) in sdk_resource_scope_data.items():\n collector_resource = PB2Resource(\n attributes=_encode_attributes(sdk_resource.attributes)\n )\n resource_data.append(\n resource_class(\n **{\n \"resource\": collector_resource,\n \"scope_{}\".format(name): scope_data.values(),\n }\n )\n )\n return resource_data\n\n\n# Work around API change between backoff 1.x and 2.x. 
Since 2.0.0 the backoff\n# wait generator API requires a first .send(None) before reading the backoff\n# values from the generator.\n_is_backoff_v2 = next(backoff.expo()) is None\n\n\ndef _create_exp_backoff_generator(*args, **kwargs):\n gen = backoff.expo(*args, **kwargs)\n if _is_backoff_v2:\n gen.send(None)\n return gen\n", "path": "exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py"}], "after_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport logging\nfrom collections.abc import Sequence\nfrom itertools import count\nfrom typing import (\n Any,\n Mapping,\n Optional,\n List,\n Callable,\n TypeVar,\n Dict,\n Iterator,\n)\n\nfrom opentelemetry.sdk.util.instrumentation import InstrumentationScope\nfrom opentelemetry.proto.common.v1.common_pb2 import (\n InstrumentationScope as PB2InstrumentationScope,\n)\nfrom opentelemetry.proto.resource.v1.resource_pb2 import (\n Resource as PB2Resource,\n)\nfrom opentelemetry.proto.common.v1.common_pb2 import AnyValue as PB2AnyValue\nfrom opentelemetry.proto.common.v1.common_pb2 import KeyValue as PB2KeyValue\nfrom opentelemetry.proto.common.v1.common_pb2 import (\n KeyValueList as PB2KeyValueList,\n)\nfrom opentelemetry.proto.common.v1.common_pb2 import (\n ArrayValue as PB2ArrayValue,\n)\nfrom opentelemetry.sdk.trace import Resource\nfrom opentelemetry.util.types import Attributes\n\n_logger = logging.getLogger(__name__)\n\n_TypingResourceT = TypeVar(\"_TypingResourceT\")\n_ResourceDataT = TypeVar(\"_ResourceDataT\")\n\n\ndef _encode_instrumentation_scope(\n instrumentation_scope: InstrumentationScope,\n) -> PB2InstrumentationScope:\n if instrumentation_scope is None:\n return PB2InstrumentationScope()\n return PB2InstrumentationScope(\n name=instrumentation_scope.name,\n version=instrumentation_scope.version,\n )\n\n\ndef _encode_resource(resource: Resource) -> PB2Resource:\n return PB2Resource(attributes=_encode_attributes(resource.attributes))\n\n\ndef _encode_value(value: Any) -> PB2AnyValue:\n if isinstance(value, bool):\n return PB2AnyValue(bool_value=value)\n if isinstance(value, str):\n return PB2AnyValue(string_value=value)\n if isinstance(value, int):\n return PB2AnyValue(int_value=value)\n if isinstance(value, float):\n return PB2AnyValue(double_value=value)\n if isinstance(value, Sequence):\n return PB2AnyValue(\n array_value=PB2ArrayValue(values=[_encode_value(v) for v in value])\n )\n elif isinstance(value, Mapping):\n return PB2AnyValue(\n kvlist_value=PB2KeyValueList(\n values=[_encode_key_value(str(k), v) for k, v in value.items()]\n )\n )\n raise Exception(f\"Invalid type {type(value)} of value {value}\")\n\n\ndef _encode_key_value(key: str, value: Any) -> PB2KeyValue:\n return PB2KeyValue(key=key, value=_encode_value(value))\n\n\ndef _encode_span_id(span_id: int) -> bytes:\n return span_id.to_bytes(length=8, byteorder=\"big\", signed=False)\n\n\ndef _encode_trace_id(trace_id: int) -> bytes:\n return 
trace_id.to_bytes(length=16, byteorder=\"big\", signed=False)\n\n\ndef _encode_attributes(\n attributes: Attributes,\n) -> Optional[List[PB2KeyValue]]:\n if attributes:\n pb2_attributes = []\n for key, value in attributes.items():\n try:\n pb2_attributes.append(_encode_key_value(key, value))\n except Exception as error: # pylint: disable=broad-except\n _logger.exception(error)\n else:\n pb2_attributes = None\n return pb2_attributes\n\n\ndef _get_resource_data(\n sdk_resource_scope_data: Dict[Resource, _ResourceDataT],\n resource_class: Callable[..., _TypingResourceT],\n name: str,\n) -> List[_TypingResourceT]:\n resource_data = []\n\n for (\n sdk_resource,\n scope_data,\n ) in sdk_resource_scope_data.items():\n collector_resource = PB2Resource(\n attributes=_encode_attributes(sdk_resource.attributes)\n )\n resource_data.append(\n resource_class(\n **{\n \"resource\": collector_resource,\n \"scope_{}\".format(name): scope_data.values(),\n }\n )\n )\n return resource_data\n\n\ndef _create_exp_backoff_generator(max_value: int = 0) -> Iterator[int]:\n \"\"\"\n Generates an infinite sequence of exponential backoff values. The sequence starts\n from 1 (2^0) and doubles each time (2^1, 2^2, 2^3, ...). If a max_value is specified\n and non-zero, the generated values will not exceed this maximum, capping at max_value\n instead of growing indefinitely.\n\n Parameters:\n - max_value (int, optional): The maximum value to yield. If 0 or not provided, the\n sequence grows without bound.\n\n Returns:\n Iterator[int]: An iterator that yields the exponential backoff values, either uncapped or\n capped at max_value.\n\n Example:\n ```\n gen = _create_exp_backoff_generator(max_value=10)\n for _ in range(5):\n print(next(gen))\n ```\n This will print:\n 1\n 2\n 4\n 8\n 10\n\n Note: this functionality used to be handled by the 'backoff' package.\n \"\"\"\n for i in count(0):\n out = 2**i\n yield min(out, max_value) if max_value else out\n", "path": "exporter/opentelemetry-exporter-otlp-proto-common/src/opentelemetry/exporter/otlp/proto/common/_internal/__init__.py"}]}
| 1,858 | 800 |
gh_patches_debug_29343
|
rasdani/github-patches
|
git_diff
|
pypa__pipenv-5875
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pipfile.lock is updated even with --outdated or --dry-run options.
### Issue description
Pipfile.lock is updated even with `--outdated` or `--dry-run` options.
### Expected result
I would like to check for updates without updating Pipfile.lock.
I am aware that past versions have been able to do that.
### Actual result
Excerpt from `pipenv update --outdated`
```
Loading .env environment variables...
Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. You can set PIPENV_VERBOSITY=-1 to suppress this warning.
Locking [packages] dependencies...
Locking [dev-packages] dependencies...
Updated Pipfile.lock (a84a8f578f467dfabb16e4965a8aa3184bbd06893ff0db5f7d152cf646843c34)!
```
-------------------------------------------------------------------------------
<details><summary>$ pipenv --support</summary>
Pipenv version: `'2023.8.26'`
Pipenv location: `'/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv'`
Python location: `'/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/bin/python'`
OS Name: `'posix'`
User pip version: `'23.2.1'`
user Python installations found:
PEP 508 Information:
```
{'implementation_name': 'cpython',
'implementation_version': '3.11.5',
'os_name': 'posix',
'platform_machine': 'arm64',
'platform_python_implementation': 'CPython',
'platform_release': '22.6.0',
'platform_system': 'Darwin',
'platform_version': 'Darwin Kernel Version 22.6.0: Wed Jul 5 22:21:53 PDT '
'2023; root:xnu-8796.141.3~6/RELEASE_ARM64_T6020',
'python_full_version': '3.11.5',
'python_version': '3.11',
'sys_platform': 'darwin'}
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pipenv/routines/update.py`
Content:
```
1 import sys
2 from collections import defaultdict
3
4 from pipenv.routines.install import do_sync
5 from pipenv.routines.lock import do_lock
6 from pipenv.routines.outdated import do_outdated
7 from pipenv.utils.dependencies import (
8 expansive_install_req_from_line,
9 get_pipfile_category_using_lockfile_section,
10 )
11 from pipenv.utils.project import ensure_project
12 from pipenv.utils.requirements import add_index_to_pipfile
13 from pipenv.utils.resolver import venv_resolve_deps
14 from pipenv.vendor import click
15
16
17 def do_update(
18 project,
19 python=None,
20 pre=False,
21 system=False,
22 packages=None,
23 editable_packages=None,
24 site_packages=False,
25 pypi_mirror=None,
26 dev=False,
27 categories=None,
28 index_url=None,
29 extra_pip_args=None,
30 quiet=False,
31 bare=False,
32 dry_run=None,
33 outdated=False,
34 clear=False,
35 lock_only=False,
36 ):
37 ensure_project(
38 project,
39 python=python,
40 pypi_mirror=pypi_mirror,
41 warn=(not quiet),
42 site_packages=site_packages,
43 clear=clear,
44 )
45 packages = [p for p in packages if p]
46 editable = [p for p in editable_packages if p]
47 if not packages:
48 click.echo(
49 "{} {} {} {}{}".format(
50 click.style("Running", bold=True),
51 click.style("$ pipenv lock", fg="yellow", bold=True),
52 click.style("then", bold=True),
53 click.style("$ pipenv sync", fg="yellow", bold=True),
54 click.style(".", bold=True),
55 )
56 )
57 do_lock(
58 project,
59 clear=clear,
60 pre=pre,
61 pypi_mirror=pypi_mirror,
62 write=not quiet,
63 )
64 else:
65 upgrade(
66 project,
67 pre=pre,
68 system=system,
69 packages=packages,
70 editable_packages=editable,
71 pypi_mirror=pypi_mirror,
72 categories=categories,
73 index_url=index_url,
74 dev=dev,
75 lock_only=lock_only,
76 )
77
78 if not outdated:
79 outdated = bool(dry_run)
80 if outdated:
81 do_outdated(
82 project,
83 clear=clear,
84 pre=pre,
85 pypi_mirror=pypi_mirror,
86 )
87
88 do_sync(
89 project,
90 dev=dev,
91 categories=categories,
92 python=python,
93 bare=bare,
94 user=False,
95 clear=clear,
96 unused=False,
97 pypi_mirror=pypi_mirror,
98 extra_pip_args=extra_pip_args,
99 )
100
101
102 def upgrade(
103 project,
104 pre=False,
105 system=False,
106 packages=None,
107 editable_packages=None,
108 pypi_mirror=None,
109 index_url=None,
110 categories=None,
111 dev=False,
112 lock_only=False,
113 ):
114 lockfile = project.lockfile()
115 if not pre:
116 pre = project.settings.get("allow_prereleases")
117 if dev:
118 categories = ["develop"]
119 elif not categories:
120 categories = ["default"]
121
122 index_name = None
123 if index_url:
124 index_name = add_index_to_pipfile(project, index_url)
125
126 package_args = list(packages) + [f"-e {pkg}" for pkg in editable_packages]
127
128 requested_install_reqs = defaultdict(dict)
129 requested_packages = defaultdict(dict)
130 for category in categories:
131 pipfile_category = get_pipfile_category_using_lockfile_section(category)
132
133 for package in package_args[:]:
134 install_req = expansive_install_req_from_line(package, expand_env=True)
135 if index_name:
136 install_req.index = index_name
137 name, normalized_name, pipfile_entry = project.generate_package_pipfile_entry(
138 install_req, package, category=pipfile_category
139 )
140 project.add_pipfile_entry_to_pipfile(
141 name, normalized_name, pipfile_entry, category=pipfile_category
142 )
143 requested_packages[pipfile_category][normalized_name] = pipfile_entry
144 requested_install_reqs[pipfile_category][normalized_name] = install_req
145
146 if project.pipfile_exists:
147 packages = project.parsed_pipfile.get(pipfile_category, {})
148 else:
149 packages = project.get_pipfile_section(pipfile_category)
150
151 if not package_args:
152 click.echo("Nothing to upgrade!")
153 sys.exit(0)
154
155 # Resolve package to generate constraints of new package data
156 upgrade_lock_data = venv_resolve_deps(
157 requested_packages[pipfile_category],
158 which=project._which,
159 project=project,
160 lockfile={},
161 category="default",
162 pre=pre,
163 allow_global=system,
164 pypi_mirror=pypi_mirror,
165 )
166 if not upgrade_lock_data:
167 click.echo("Nothing to upgrade!")
168 sys.exit(0)
169
170 for package_name, pipfile_entry in requested_packages[pipfile_category].items():
171 if package_name not in packages:
172 packages.append(package_name, pipfile_entry)
173 else:
174 packages[package_name] = pipfile_entry
175
176 full_lock_resolution = venv_resolve_deps(
177 packages,
178 which=project._which,
179 project=project,
180 lockfile={},
181 category=pipfile_category,
182 pre=pre,
183 allow_global=system,
184 pypi_mirror=pypi_mirror,
185 )
186 # Mutate the existing lockfile with the upgrade data for the categories
187 for package_name in upgrade_lock_data:
188 correct_package_lock = full_lock_resolution.get(package_name)
189 if correct_package_lock:
190 lockfile[category][package_name] = correct_package_lock
191
192 lockfile.update({"_meta": project.get_lockfile_meta()})
193 project.write_lockfile(lockfile)
194
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pipenv/routines/update.py b/pipenv/routines/update.py
--- a/pipenv/routines/update.py
+++ b/pipenv/routines/update.py
@@ -44,6 +44,8 @@
)
packages = [p for p in packages if p]
editable = [p for p in editable_packages if p]
+ if not outdated:
+ outdated = bool(dry_run)
if not packages:
click.echo(
"{} {} {} {}{}".format(
@@ -59,7 +61,7 @@
clear=clear,
pre=pre,
pypi_mirror=pypi_mirror,
- write=not quiet,
+ write=not outdated,
)
else:
upgrade(
@@ -75,8 +77,6 @@
lock_only=lock_only,
)
- if not outdated:
- outdated = bool(dry_run)
if outdated:
do_outdated(
project,
@@ -84,19 +84,19 @@
pre=pre,
pypi_mirror=pypi_mirror,
)
-
- do_sync(
- project,
- dev=dev,
- categories=categories,
- python=python,
- bare=bare,
- user=False,
- clear=clear,
- unused=False,
- pypi_mirror=pypi_mirror,
- extra_pip_args=extra_pip_args,
- )
+ else:
+ do_sync(
+ project,
+ dev=dev,
+ categories=categories,
+ python=python,
+ bare=bare,
+ user=False,
+ clear=clear,
+ unused=False,
+ pypi_mirror=pypi_mirror,
+ extra_pip_args=extra_pip_args,
+ )
def upgrade(
|
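A minimal, self-contained sketch of the control flow the patch above establishes in `do_update` — every helper here is a stub standing in for pipenv internals and is not pipenv's actual API:

```python
# Illustrative stubs only; the real functions live in pipenv.routines.*.
def do_lock(write: bool) -> None:
    print(f"resolving dependencies (write Pipfile.lock: {write})")


def report_outdated() -> None:
    print("listing out-of-date packages only")


def sync() -> None:
    print("installing from Pipfile.lock")


def do_update_flow(dry_run: bool = False, outdated: bool = False) -> None:
    outdated = outdated or dry_run      # --dry-run implies --outdated
    do_lock(write=not outdated)         # check mode never rewrites Pipfile.lock
    if outdated:
        report_outdated()
    else:
        sync()                          # a normal update still installs from the new lock


do_update_flow(outdated=True)           # e.g. `pipenv update --outdated`
```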
{"golden_diff": "diff --git a/pipenv/routines/update.py b/pipenv/routines/update.py\n--- a/pipenv/routines/update.py\n+++ b/pipenv/routines/update.py\n@@ -44,6 +44,8 @@\n )\n packages = [p for p in packages if p]\n editable = [p for p in editable_packages if p]\n+ if not outdated:\n+ outdated = bool(dry_run)\n if not packages:\n click.echo(\n \"{} {} {} {}{}\".format(\n@@ -59,7 +61,7 @@\n clear=clear,\n pre=pre,\n pypi_mirror=pypi_mirror,\n- write=not quiet,\n+ write=not outdated,\n )\n else:\n upgrade(\n@@ -75,8 +77,6 @@\n lock_only=lock_only,\n )\n \n- if not outdated:\n- outdated = bool(dry_run)\n if outdated:\n do_outdated(\n project,\n@@ -84,19 +84,19 @@\n pre=pre,\n pypi_mirror=pypi_mirror,\n )\n-\n- do_sync(\n- project,\n- dev=dev,\n- categories=categories,\n- python=python,\n- bare=bare,\n- user=False,\n- clear=clear,\n- unused=False,\n- pypi_mirror=pypi_mirror,\n- extra_pip_args=extra_pip_args,\n- )\n+ else:\n+ do_sync(\n+ project,\n+ dev=dev,\n+ categories=categories,\n+ python=python,\n+ bare=bare,\n+ user=False,\n+ clear=clear,\n+ unused=False,\n+ pypi_mirror=pypi_mirror,\n+ extra_pip_args=extra_pip_args,\n+ )\n \n \n def upgrade(\n", "issue": "Pipfile.lock is updated even with --outdated or --dry-run options.\n### Issue description\r\n\r\nPipfile.lock is updated even with `--outdated` or `--dry-run` options.\r\n\r\n### Expected result\r\n\r\nI would like to check for updates without updating Pipfile.lock.\r\nI am aware that past versions have been able to do that.\r\n\r\n### Actual result\r\n\r\nExcerpt from `pipenv update --outdated`\r\n\r\n```\r\nLoading .env environment variables...\r\nCourtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. 
You can set PIPENV_VERBOSITY=-1 to suppress this warning.\r\nLocking [packages] dependencies...\r\n\r\nLocking [dev-packages] dependencies...\r\n\r\nUpdated Pipfile.lock (a84a8f578f467dfabb16e4965a8aa3184bbd06893ff0db5f7d152cf646843c34)!\r\n\r\n```\r\n\r\n-------------------------------------------------------------------------------\r\n\r\n\r\n<details><summary>$ pipenv --support</summary>\r\n\r\nPipenv version: `'2023.8.26'`\r\n\r\nPipenv location: `'/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/lib/python3.11/site-packages/pipenv'`\r\n\r\nPython location: `'/opt/homebrew/Cellar/pipenv/2023.8.26/libexec/bin/python'`\r\n\r\nOS Name: `'posix'`\r\n\r\nUser pip version: `'23.2.1'`\r\n\r\nuser Python installations found:\r\n\r\n\r\nPEP 508 Information:\r\n\r\n```\r\n{'implementation_name': 'cpython',\r\n 'implementation_version': '3.11.5',\r\n 'os_name': 'posix',\r\n 'platform_machine': 'arm64',\r\n 'platform_python_implementation': 'CPython',\r\n 'platform_release': '22.6.0',\r\n 'platform_system': 'Darwin',\r\n 'platform_version': 'Darwin Kernel Version 22.6.0: Wed Jul 5 22:21:53 PDT '\r\n '2023; root:xnu-8796.141.3~6/RELEASE_ARM64_T6020',\r\n 'python_full_version': '3.11.5',\r\n 'python_version': '3.11',\r\n 'sys_platform': 'darwin'}\r\n```\r\n</details>\n", "before_files": [{"content": "import sys\nfrom collections import defaultdict\n\nfrom pipenv.routines.install import do_sync\nfrom pipenv.routines.lock import do_lock\nfrom pipenv.routines.outdated import do_outdated\nfrom pipenv.utils.dependencies import (\n expansive_install_req_from_line,\n get_pipfile_category_using_lockfile_section,\n)\nfrom pipenv.utils.project import ensure_project\nfrom pipenv.utils.requirements import add_index_to_pipfile\nfrom pipenv.utils.resolver import venv_resolve_deps\nfrom pipenv.vendor import click\n\n\ndef do_update(\n project,\n python=None,\n pre=False,\n system=False,\n packages=None,\n editable_packages=None,\n site_packages=False,\n pypi_mirror=None,\n dev=False,\n categories=None,\n index_url=None,\n extra_pip_args=None,\n quiet=False,\n bare=False,\n dry_run=None,\n outdated=False,\n clear=False,\n lock_only=False,\n):\n ensure_project(\n project,\n python=python,\n pypi_mirror=pypi_mirror,\n warn=(not quiet),\n site_packages=site_packages,\n clear=clear,\n )\n packages = [p for p in packages if p]\n editable = [p for p in editable_packages if p]\n if not packages:\n click.echo(\n \"{} {} {} {}{}\".format(\n click.style(\"Running\", bold=True),\n click.style(\"$ pipenv lock\", fg=\"yellow\", bold=True),\n click.style(\"then\", bold=True),\n click.style(\"$ pipenv sync\", fg=\"yellow\", bold=True),\n click.style(\".\", bold=True),\n )\n )\n do_lock(\n project,\n clear=clear,\n pre=pre,\n pypi_mirror=pypi_mirror,\n write=not quiet,\n )\n else:\n upgrade(\n project,\n pre=pre,\n system=system,\n packages=packages,\n editable_packages=editable,\n pypi_mirror=pypi_mirror,\n categories=categories,\n index_url=index_url,\n dev=dev,\n lock_only=lock_only,\n )\n\n if not outdated:\n outdated = bool(dry_run)\n if outdated:\n do_outdated(\n project,\n clear=clear,\n pre=pre,\n pypi_mirror=pypi_mirror,\n )\n\n do_sync(\n project,\n dev=dev,\n categories=categories,\n python=python,\n bare=bare,\n user=False,\n clear=clear,\n unused=False,\n pypi_mirror=pypi_mirror,\n extra_pip_args=extra_pip_args,\n )\n\n\ndef upgrade(\n project,\n pre=False,\n system=False,\n packages=None,\n editable_packages=None,\n pypi_mirror=None,\n index_url=None,\n categories=None,\n dev=False,\n lock_only=False,\n):\n lockfile = 
project.lockfile()\n if not pre:\n pre = project.settings.get(\"allow_prereleases\")\n if dev:\n categories = [\"develop\"]\n elif not categories:\n categories = [\"default\"]\n\n index_name = None\n if index_url:\n index_name = add_index_to_pipfile(project, index_url)\n\n package_args = list(packages) + [f\"-e {pkg}\" for pkg in editable_packages]\n\n requested_install_reqs = defaultdict(dict)\n requested_packages = defaultdict(dict)\n for category in categories:\n pipfile_category = get_pipfile_category_using_lockfile_section(category)\n\n for package in package_args[:]:\n install_req = expansive_install_req_from_line(package, expand_env=True)\n if index_name:\n install_req.index = index_name\n name, normalized_name, pipfile_entry = project.generate_package_pipfile_entry(\n install_req, package, category=pipfile_category\n )\n project.add_pipfile_entry_to_pipfile(\n name, normalized_name, pipfile_entry, category=pipfile_category\n )\n requested_packages[pipfile_category][normalized_name] = pipfile_entry\n requested_install_reqs[pipfile_category][normalized_name] = install_req\n\n if project.pipfile_exists:\n packages = project.parsed_pipfile.get(pipfile_category, {})\n else:\n packages = project.get_pipfile_section(pipfile_category)\n\n if not package_args:\n click.echo(\"Nothing to upgrade!\")\n sys.exit(0)\n\n # Resolve package to generate constraints of new package data\n upgrade_lock_data = venv_resolve_deps(\n requested_packages[pipfile_category],\n which=project._which,\n project=project,\n lockfile={},\n category=\"default\",\n pre=pre,\n allow_global=system,\n pypi_mirror=pypi_mirror,\n )\n if not upgrade_lock_data:\n click.echo(\"Nothing to upgrade!\")\n sys.exit(0)\n\n for package_name, pipfile_entry in requested_packages[pipfile_category].items():\n if package_name not in packages:\n packages.append(package_name, pipfile_entry)\n else:\n packages[package_name] = pipfile_entry\n\n full_lock_resolution = venv_resolve_deps(\n packages,\n which=project._which,\n project=project,\n lockfile={},\n category=pipfile_category,\n pre=pre,\n allow_global=system,\n pypi_mirror=pypi_mirror,\n )\n # Mutate the existing lockfile with the upgrade data for the categories\n for package_name in upgrade_lock_data:\n correct_package_lock = full_lock_resolution.get(package_name)\n if correct_package_lock:\n lockfile[category][package_name] = correct_package_lock\n\n lockfile.update({\"_meta\": project.get_lockfile_meta()})\n project.write_lockfile(lockfile)\n", "path": "pipenv/routines/update.py"}], "after_files": [{"content": "import sys\nfrom collections import defaultdict\n\nfrom pipenv.routines.install import do_sync\nfrom pipenv.routines.lock import do_lock\nfrom pipenv.routines.outdated import do_outdated\nfrom pipenv.utils.dependencies import (\n expansive_install_req_from_line,\n get_pipfile_category_using_lockfile_section,\n)\nfrom pipenv.utils.project import ensure_project\nfrom pipenv.utils.requirements import add_index_to_pipfile\nfrom pipenv.utils.resolver import venv_resolve_deps\nfrom pipenv.vendor import click\n\n\ndef do_update(\n project,\n python=None,\n pre=False,\n system=False,\n packages=None,\n editable_packages=None,\n site_packages=False,\n pypi_mirror=None,\n dev=False,\n categories=None,\n index_url=None,\n extra_pip_args=None,\n quiet=False,\n bare=False,\n dry_run=None,\n outdated=False,\n clear=False,\n lock_only=False,\n):\n ensure_project(\n project,\n python=python,\n pypi_mirror=pypi_mirror,\n warn=(not quiet),\n site_packages=site_packages,\n clear=clear,\n )\n 
packages = [p for p in packages if p]\n editable = [p for p in editable_packages if p]\n if not outdated:\n outdated = bool(dry_run)\n if not packages:\n click.echo(\n \"{} {} {} {}{}\".format(\n click.style(\"Running\", bold=True),\n click.style(\"$ pipenv lock\", fg=\"yellow\", bold=True),\n click.style(\"then\", bold=True),\n click.style(\"$ pipenv sync\", fg=\"yellow\", bold=True),\n click.style(\".\", bold=True),\n )\n )\n do_lock(\n project,\n clear=clear,\n pre=pre,\n pypi_mirror=pypi_mirror,\n write=not outdated,\n )\n else:\n upgrade(\n project,\n pre=pre,\n system=system,\n packages=packages,\n editable_packages=editable,\n pypi_mirror=pypi_mirror,\n categories=categories,\n index_url=index_url,\n dev=dev,\n lock_only=lock_only,\n )\n\n if outdated:\n do_outdated(\n project,\n clear=clear,\n pre=pre,\n pypi_mirror=pypi_mirror,\n )\n else:\n do_sync(\n project,\n dev=dev,\n categories=categories,\n python=python,\n bare=bare,\n user=False,\n clear=clear,\n unused=False,\n pypi_mirror=pypi_mirror,\n extra_pip_args=extra_pip_args,\n )\n\n\ndef upgrade(\n project,\n pre=False,\n system=False,\n packages=None,\n editable_packages=None,\n pypi_mirror=None,\n index_url=None,\n categories=None,\n dev=False,\n lock_only=False,\n):\n lockfile = project.lockfile()\n if not pre:\n pre = project.settings.get(\"allow_prereleases\")\n if dev:\n categories = [\"develop\"]\n elif not categories:\n categories = [\"default\"]\n\n index_name = None\n if index_url:\n index_name = add_index_to_pipfile(project, index_url)\n\n package_args = list(packages) + [f\"-e {pkg}\" for pkg in editable_packages]\n\n requested_install_reqs = defaultdict(dict)\n requested_packages = defaultdict(dict)\n for category in categories:\n pipfile_category = get_pipfile_category_using_lockfile_section(category)\n\n for package in package_args[:]:\n install_req = expansive_install_req_from_line(package, expand_env=True)\n if index_name:\n install_req.index = index_name\n name, normalized_name, pipfile_entry = project.generate_package_pipfile_entry(\n install_req, package, category=pipfile_category\n )\n project.add_pipfile_entry_to_pipfile(\n name, normalized_name, pipfile_entry, category=pipfile_category\n )\n requested_packages[pipfile_category][normalized_name] = pipfile_entry\n requested_install_reqs[pipfile_category][normalized_name] = install_req\n\n if project.pipfile_exists:\n packages = project.parsed_pipfile.get(pipfile_category, {})\n else:\n packages = project.get_pipfile_section(pipfile_category)\n\n if not package_args:\n click.echo(\"Nothing to upgrade!\")\n sys.exit(0)\n\n # Resolve package to generate constraints of new package data\n upgrade_lock_data = venv_resolve_deps(\n requested_packages[pipfile_category],\n which=project._which,\n project=project,\n lockfile={},\n category=\"default\",\n pre=pre,\n allow_global=system,\n pypi_mirror=pypi_mirror,\n )\n if not upgrade_lock_data:\n click.echo(\"Nothing to upgrade!\")\n sys.exit(0)\n\n for package_name, pipfile_entry in requested_packages[pipfile_category].items():\n if package_name not in packages:\n packages.append(package_name, pipfile_entry)\n else:\n packages[package_name] = pipfile_entry\n\n full_lock_resolution = venv_resolve_deps(\n packages,\n which=project._which,\n project=project,\n lockfile={},\n category=pipfile_category,\n pre=pre,\n allow_global=system,\n pypi_mirror=pypi_mirror,\n )\n # Mutate the existing lockfile with the upgrade data for the categories\n for package_name in upgrade_lock_data:\n correct_package_lock = 
full_lock_resolution.get(package_name)\n if correct_package_lock:\n lockfile[category][package_name] = correct_package_lock\n\n lockfile.update({\"_meta\": project.get_lockfile_meta()})\n project.write_lockfile(lockfile)\n", "path": "pipenv/routines/update.py"}]}
| 2,507 | 396 |
gh_patches_debug_34525 | rasdani/github-patches | git_diff | mdn__kuma-6660 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
T - Create landing page (FAQ) for Subscriptions
## 💬 Description
Create a landing page for Subscriptions in React, so we can reuse components. The page will live at /payments.
## ✅ Acceptance Criteria
- [ ] User can navigate to /payments page
- [ ] User can read FAQ about why we are doing this and what to expect
- [ ] User viewing on a smaller screen does not have a terrible experience
_Note: The "Help grow MDN" callout in the header is **not** expected to be completed during this sprint_
## 👷♀ Tasks
- [x] Add a new route and view in Django for /payments
- [x] Create React component, borrowing content from existing [Thank You page](https://wiki.developer.mozilla.org/en-US/payments/recurring)
- [x] Update copy to match what's in [latest copy edits](https://docs.google.com/document/d/1TH-P-hcZY1UzBtXDGGZnF3jTCy7b-cgC3TUo4Vwxfdc/edit?usp=sharing )
## 🎨 Design
https://app.abstract.com/projects/9decc480-70fc-11e9-9453-4f3fbffea7ce/branches/master/commits/5eb3004bb0e0aeb362aae44b3da2fa7d15d893cb/files/96E76C7B-2AF8-4473-A39E-E81890F91955/layers/0A8C3971-7D07-40BF-96B5-3B467D450CC4?mode=build&selected=734966551-1AF136E1-5788-4D35-A3CF-0E452D009A7B
~~https://mozilla.invisionapp.com/share/TDVED747SB9#/screens/399165366~~
## 🔦 Context
Please see Schalk's comment https://github.com/mdn/kuma/issues/6581#issuecomment-591376908
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kuma/payments/urls.py`
Content:
```
1 from django.urls import re_path
2
3 from . import views
4
5 lang_urlpatterns = [
6 re_path(r"^recurring/?$", views.contribute, name="recurring_payment_initial"),
7 re_path(
8 r"^recurring/subscription/?$",
9 views.contribute,
10 name="recurring_payment_subscription",
11 ),
12 re_path(r"^terms/?$", views.payment_terms, name="payment_terms"),
13 re_path(
14 r"^recurring/management/?$",
15 views.recurring_payment_management,
16 name="recurring_payment_management",
17 ),
18 ]
19
```
Path: `kuma/payments/views.py`
Content:
```
1 import logging
2
3 from django.conf import settings
4 from django.shortcuts import render
5 from django.views.decorators.cache import never_cache
6 from stripe.error import StripeError
7 from waffle.decorators import waffle_flag
8
9 from kuma.core.decorators import ensure_wiki_domain, login_required
10
11 from .utils import (
12 cancel_stripe_customer_subscription,
13 get_stripe_customer_data,
14 )
15
16 log = logging.getLogger("kuma.payments.views")
17
18
19 @waffle_flag("subscription")
20 @ensure_wiki_domain
21 @never_cache
22 def contribute(request):
23 return render(request, "payments/payments.html")
24
25
26 @waffle_flag("subscription")
27 @ensure_wiki_domain
28 @never_cache
29 def payment_terms(request):
30 return render(request, "payments/terms.html")
31
32
33 @waffle_flag("subscription")
34 @ensure_wiki_domain
35 @login_required
36 @never_cache
37 def recurring_payment_management(request):
38 context = {
39 "support_mail_link": "mailto:"
40 + settings.CONTRIBUTION_SUPPORT_EMAIL
41 + "?Subject=Recurring%20payment%20support",
42 "support_mail": settings.CONTRIBUTION_SUPPORT_EMAIL,
43 "cancel_request": False,
44 "cancel_success": False,
45 }
46
47 if request.user.stripe_customer_id and "stripe_cancel_subscription" in request.POST:
48 context["cancel_request"] = True
49 cancel_success = False
50 try:
51 cancel_stripe_customer_subscription(request.user.stripe_customer_id)
52 except StripeError:
53 log.exception(
54 "Stripe subscription cancellation: Stripe error for %s [%s]",
55 request.user.username,
56 request.user.email,
57 )
58 else:
59 cancel_success = True
60 context["cancel_success"] = cancel_success
61
62 if request.user.stripe_customer_id:
63 data = {"active_subscriptions": False}
64 try:
65 data = get_stripe_customer_data(request.user.stripe_customer_id)
66 except StripeError:
67 log.exception(
68 "Stripe subscription data: Stripe error for %s [%s]",
69 request.user.username,
70 request.user.email,
71 )
72 context.update(data)
73
74 return render(request, "payments/management.html", context)
75
```
Path: `kuma/urls.py`
Content:
```
1 from decorator_include import decorator_include
2 from django.conf import settings
3 from django.contrib import admin
4 from django.shortcuts import render
5 from django.urls import include, re_path, reverse_lazy
6 from django.views.decorators.cache import never_cache
7 from django.views.generic import RedirectView
8 from django.views.static import serve
9
10 from kuma.attachments import views as attachment_views
11 from kuma.core import views as core_views
12 from kuma.core.decorators import ensure_wiki_domain, shared_cache_control
13 from kuma.core.urlresolvers import i18n_patterns
14 from kuma.dashboards.urls import lang_urlpatterns as dashboards_lang_urlpatterns
15 from kuma.dashboards.views import index as dashboards_index
16 from kuma.landing.urls import lang_urlpatterns as landing_lang_urlpatterns
17 from kuma.payments import views as payment_views
18 from kuma.payments.urls import lang_urlpatterns as payments_lang_urlpatterns
19 from kuma.search.urls import (
20 lang_base_urlpatterns as search_lang_base_urlpatterns,
21 lang_urlpatterns as search_lang_urlpatterns,
22 )
23 from kuma.users.urls import lang_urlpatterns as users_lang_urlpatterns
24 from kuma.views import serve_from_media_root
25 from kuma.wiki.admin import purge_view
26 from kuma.wiki.urls import lang_urlpatterns as wiki_lang_urlpatterns
27 from kuma.wiki.views.document import as_json as document_as_json
28 from kuma.wiki.views.legacy import mindtouch_to_kuma_redirect
29
30
31 DAY = 60 * 60 * 24
32 WEEK = DAY * 7
33 MONTH = DAY * 30
34
35 admin.autodiscover()
36
37 handler403 = core_views.handler403
38 handler404 = core_views.handler404
39 handler500 = core_views.handler500
40
41 urlpatterns = [re_path("", include("kuma.health.urls"))]
42 # The non-locale-based landing URL's
43 urlpatterns += [re_path("", include("kuma.landing.urls"))]
44 # The locale-based landing URL's
45 urlpatterns += i18n_patterns(re_path("", include(landing_lang_urlpatterns)))
46 urlpatterns += i18n_patterns(
47 re_path(
48 r"^events",
49 # Here the "shared_cache_control" decorator is an optimization. It
50 # informs the CDN to cache the redirect for a month, so once this URL
51 # has been requested by a client, all other client requests will be
52 # redirected by the CDN instead of this Django service.
53 shared_cache_control(s_maxage=MONTH)(
54 RedirectView.as_view(
55 url="https://mozilla.org/contribute/events", permanent=False
56 )
57 ),
58 name="events",
59 ),
60 )
61
62 if settings.MAINTENANCE_MODE:
63 urlpatterns.append(
64 re_path(
65 r"^admin/.*",
66 never_cache(
67 RedirectView.as_view(pattern_name="maintenance_mode", permanent=False)
68 ),
69 )
70 )
71 else:
72 # Django admin:
73 urlpatterns += [
74 re_path(
75 r"^admin/wiki/document/purge/", purge_view, name="wiki.admin_bulk_purge"
76 ),
77 # We don't worry about decorating the views within django.contrib.admin
78 # with "never_cache", since most have already been decorated, and the
79 # remaining can be safely cached.
80 re_path(r"^admin/", admin.site.urls),
81 ]
82
83 urlpatterns += i18n_patterns(re_path(r"^search/", include(search_lang_urlpatterns)))
84 urlpatterns += i18n_patterns(re_path(r"^search", include(search_lang_base_urlpatterns)))
85 urlpatterns += i18n_patterns(
86 re_path(r"^docs.json$", document_as_json, name="wiki.json")
87 )
88 urlpatterns += i18n_patterns(re_path(r"^docs/", include(wiki_lang_urlpatterns)))
89 urlpatterns += [re_path("", include("kuma.attachments.urls"))]
90 urlpatterns += i18n_patterns(
91 re_path(r"dashboards/?$", dashboards_index, name="dashboards.index"),
92 )
93 urlpatterns += i18n_patterns(
94 re_path(r"^dashboards/", include(dashboards_lang_urlpatterns))
95 )
96 urlpatterns += [re_path("users/", include("kuma.users.urls"))]
97 urlpatterns += i18n_patterns(
98 re_path(r"^payments/$", payment_views.contribute, name="payments"),
99 )
100 urlpatterns += i18n_patterns(
101 re_path(
102 r"^contribute/$",
103 ensure_wiki_domain(RedirectView.as_view(url=reverse_lazy("payments"))),
104 name="redirect-to-payments",
105 ),
106 )
107 urlpatterns += i18n_patterns(re_path(r"^payments/", include(payments_lang_urlpatterns)))
108 urlpatterns += i18n_patterns(
109 re_path("", decorator_include(never_cache, users_lang_urlpatterns))
110 )
111
112 if settings.MAINTENANCE_MODE:
113 urlpatterns += i18n_patterns(
114 # Redirect if we try to use the "tidings" unsubscribe.
115 re_path(
116 r"^unsubscribe/.*",
117 ensure_wiki_domain(
118 never_cache(
119 RedirectView.as_view(
120 pattern_name="maintenance_mode", permanent=False
121 )
122 )
123 ),
124 )
125 )
126 else:
127 urlpatterns += i18n_patterns(
128 # The first argument to "decorator_include" can be an iterable
129 # of view decorators, which are applied in reverse order.
130 re_path(
131 r"^", decorator_include((ensure_wiki_domain, never_cache), "tidings.urls")
132 ),
133 )
134
135
136 urlpatterns += [
137 # Services and sundry.
138 re_path("^api/", include("kuma.api.urls")),
139 re_path("", include("kuma.version.urls")),
140 # Serve sitemap files.
141 re_path(
142 r"^sitemap.xml$", serve_from_media_root, {"path": "sitemap.xml"}, name="sitemap"
143 ),
144 re_path(r"^(?P<path>sitemaps/.+)$", serve_from_media_root, name="sitemaps"),
145 re_path(r"^humans.txt$", core_views.humans_txt, name="humans_txt"),
146 re_path(
147 r"^miel$",
148 shared_cache_control(s_maxage=WEEK)(render),
149 {"template_name": "500.html", "status": 500},
150 name="users.honeypot",
151 ),
152 # We use our own views for setting language in cookies. But to just align with django, set it like this.
153 re_path(r"^i18n/setlang/", core_views.set_language, name="set-language-cookie"),
154 ]
155
156 if settings.SERVE_LEGACY and settings.LEGACY_ROOT:
157 urlpatterns.append(
158 re_path(
159 r"^(?P<path>(diagrams|presentations|samples)/.+)$",
160 shared_cache_control(s_maxage=MONTH)(serve),
161 {"document_root": settings.LEGACY_ROOT},
162 )
163 )
164
165 if getattr(settings, "DEBUG_TOOLBAR_INSTALLED", False):
166 import debug_toolbar
167
168 urlpatterns.append(
169 re_path(r"^__debug__/", decorator_include(never_cache, debug_toolbar.urls)),
170 )
171
172 # Legacy MindTouch redirects. These go last so that they don't mess
173 # with local instances' ability to serve media.
174 urlpatterns += [
175 re_path(
176 r"^@api/deki/files/(?P<file_id>\d+)/=(?P<filename>.+)$",
177 attachment_views.mindtouch_file_redirect,
178 name="attachments.mindtouch_file_redirect",
179 ),
180 re_path(r"^(?P<path>.*)$", mindtouch_to_kuma_redirect),
181 ]
182
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kuma/payments/urls.py b/kuma/payments/urls.py
--- a/kuma/payments/urls.py
+++ b/kuma/payments/urls.py
@@ -15,4 +15,5 @@
views.recurring_payment_management,
name="recurring_payment_management",
),
+ re_path(r"", views.index, name="payments_index"),
]
diff --git a/kuma/payments/views.py b/kuma/payments/views.py
--- a/kuma/payments/views.py
+++ b/kuma/payments/views.py
@@ -16,6 +16,11 @@
log = logging.getLogger("kuma.payments.views")
+@never_cache
+def index(request):
+ return render(request, "payments/index.html")
+
+
@waffle_flag("subscription")
@ensure_wiki_domain
@never_cache
diff --git a/kuma/urls.py b/kuma/urls.py
--- a/kuma/urls.py
+++ b/kuma/urls.py
@@ -14,7 +14,6 @@
from kuma.dashboards.urls import lang_urlpatterns as dashboards_lang_urlpatterns
from kuma.dashboards.views import index as dashboards_index
from kuma.landing.urls import lang_urlpatterns as landing_lang_urlpatterns
-from kuma.payments import views as payment_views
from kuma.payments.urls import lang_urlpatterns as payments_lang_urlpatterns
from kuma.search.urls import (
lang_base_urlpatterns as search_lang_base_urlpatterns,
@@ -94,17 +93,16 @@
re_path(r"^dashboards/", include(dashboards_lang_urlpatterns))
)
urlpatterns += [re_path("users/", include("kuma.users.urls"))]
-urlpatterns += i18n_patterns(
- re_path(r"^payments/$", payment_views.contribute, name="payments"),
-)
urlpatterns += i18n_patterns(
re_path(
r"^contribute/$",
- ensure_wiki_domain(RedirectView.as_view(url=reverse_lazy("payments"))),
+ RedirectView.as_view(url=reverse_lazy("payments_index")),
name="redirect-to-payments",
),
)
-urlpatterns += i18n_patterns(re_path(r"^payments/", include(payments_lang_urlpatterns)))
+urlpatterns += i18n_patterns(
+ re_path(r"^payments/?", include(payments_lang_urlpatterns))
+)
urlpatterns += i18n_patterns(
re_path("", decorator_include(never_cache, users_lang_urlpatterns))
)
|
{"golden_diff": "diff --git a/kuma/payments/urls.py b/kuma/payments/urls.py\n--- a/kuma/payments/urls.py\n+++ b/kuma/payments/urls.py\n@@ -15,4 +15,5 @@\n views.recurring_payment_management,\n name=\"recurring_payment_management\",\n ),\n+ re_path(r\"\", views.index, name=\"payments_index\"),\n ]\ndiff --git a/kuma/payments/views.py b/kuma/payments/views.py\n--- a/kuma/payments/views.py\n+++ b/kuma/payments/views.py\n@@ -16,6 +16,11 @@\n log = logging.getLogger(\"kuma.payments.views\")\n \n \n+@never_cache\n+def index(request):\n+ return render(request, \"payments/index.html\")\n+\n+\n @waffle_flag(\"subscription\")\n @ensure_wiki_domain\n @never_cache\ndiff --git a/kuma/urls.py b/kuma/urls.py\n--- a/kuma/urls.py\n+++ b/kuma/urls.py\n@@ -14,7 +14,6 @@\n from kuma.dashboards.urls import lang_urlpatterns as dashboards_lang_urlpatterns\n from kuma.dashboards.views import index as dashboards_index\n from kuma.landing.urls import lang_urlpatterns as landing_lang_urlpatterns\n-from kuma.payments import views as payment_views\n from kuma.payments.urls import lang_urlpatterns as payments_lang_urlpatterns\n from kuma.search.urls import (\n lang_base_urlpatterns as search_lang_base_urlpatterns,\n@@ -94,17 +93,16 @@\n re_path(r\"^dashboards/\", include(dashboards_lang_urlpatterns))\n )\n urlpatterns += [re_path(\"users/\", include(\"kuma.users.urls\"))]\n-urlpatterns += i18n_patterns(\n- re_path(r\"^payments/$\", payment_views.contribute, name=\"payments\"),\n-)\n urlpatterns += i18n_patterns(\n re_path(\n r\"^contribute/$\",\n- ensure_wiki_domain(RedirectView.as_view(url=reverse_lazy(\"payments\"))),\n+ RedirectView.as_view(url=reverse_lazy(\"payments_index\")),\n name=\"redirect-to-payments\",\n ),\n )\n-urlpatterns += i18n_patterns(re_path(r\"^payments/\", include(payments_lang_urlpatterns)))\n+urlpatterns += i18n_patterns(\n+ re_path(r\"^payments/?\", include(payments_lang_urlpatterns))\n+)\n urlpatterns += i18n_patterns(\n re_path(\"\", decorator_include(never_cache, users_lang_urlpatterns))\n )\n", "issue": "T - Create landing page (FAQ) for Subscriptions\n## \ud83d\udcac Description\r\nCreate a landing page for Subscriptions in React, so we can reuse components. The page will live at /payments. 
\r\n\r\n## \u2705 Acceptance Criteria\r\n- [ ] User can navigate to /payments page\r\n- [ ] User can read FAQ about why we are doing this and what to expect \r\n- [ ] User viewing on a smaller screen does not have a terrible experience\r\n\r\n_Note: The \"Help grow MDN\" callout in the header is **not** expected to be completed during this sprint_\r\n \r\n## \ud83d\udc77\u200d\u2640 Tasks\r\n- [x] Add a new route and view in Django for /payments\r\n- [x] Create React component, borrowing content from existing [Thank You page](https://wiki.developer.mozilla.org/en-US/payments/recurring)\r\n- [x] Update copy to match what's in [latest copy edits](https://docs.google.com/document/d/1TH-P-hcZY1UzBtXDGGZnF3jTCy7b-cgC3TUo4Vwxfdc/edit?usp=sharing )\r\n\r\n## \ud83c\udfa8 Design\r\nhttps://app.abstract.com/projects/9decc480-70fc-11e9-9453-4f3fbffea7ce/branches/master/commits/5eb3004bb0e0aeb362aae44b3da2fa7d15d893cb/files/96E76C7B-2AF8-4473-A39E-E81890F91955/layers/0A8C3971-7D07-40BF-96B5-3B467D450CC4?mode=build&selected=734966551-1AF136E1-5788-4D35-A3CF-0E452D009A7B\r\n\r\n~~https://mozilla.invisionapp.com/share/TDVED747SB9#/screens/399165366~~\r\n\r\n## \ud83d\udd26 Context\r\nPlease see Schalk's comment https://github.com/mdn/kuma/issues/6581#issuecomment-591376908\r\n\r\n\n", "before_files": [{"content": "from django.urls import re_path\n\nfrom . import views\n\nlang_urlpatterns = [\n re_path(r\"^recurring/?$\", views.contribute, name=\"recurring_payment_initial\"),\n re_path(\n r\"^recurring/subscription/?$\",\n views.contribute,\n name=\"recurring_payment_subscription\",\n ),\n re_path(r\"^terms/?$\", views.payment_terms, name=\"payment_terms\"),\n re_path(\n r\"^recurring/management/?$\",\n views.recurring_payment_management,\n name=\"recurring_payment_management\",\n ),\n]\n", "path": "kuma/payments/urls.py"}, {"content": "import logging\n\nfrom django.conf import settings\nfrom django.shortcuts import render\nfrom django.views.decorators.cache import never_cache\nfrom stripe.error import StripeError\nfrom waffle.decorators import waffle_flag\n\nfrom kuma.core.decorators import ensure_wiki_domain, login_required\n\nfrom .utils import (\n cancel_stripe_customer_subscription,\n get_stripe_customer_data,\n)\n\nlog = logging.getLogger(\"kuma.payments.views\")\n\n\n@waffle_flag(\"subscription\")\n@ensure_wiki_domain\n@never_cache\ndef contribute(request):\n return render(request, \"payments/payments.html\")\n\n\n@waffle_flag(\"subscription\")\n@ensure_wiki_domain\n@never_cache\ndef payment_terms(request):\n return render(request, \"payments/terms.html\")\n\n\n@waffle_flag(\"subscription\")\n@ensure_wiki_domain\n@login_required\n@never_cache\ndef recurring_payment_management(request):\n context = {\n \"support_mail_link\": \"mailto:\"\n + settings.CONTRIBUTION_SUPPORT_EMAIL\n + \"?Subject=Recurring%20payment%20support\",\n \"support_mail\": settings.CONTRIBUTION_SUPPORT_EMAIL,\n \"cancel_request\": False,\n \"cancel_success\": False,\n }\n\n if request.user.stripe_customer_id and \"stripe_cancel_subscription\" in request.POST:\n context[\"cancel_request\"] = True\n cancel_success = False\n try:\n cancel_stripe_customer_subscription(request.user.stripe_customer_id)\n except StripeError:\n log.exception(\n \"Stripe subscription cancellation: Stripe error for %s [%s]\",\n request.user.username,\n request.user.email,\n )\n else:\n cancel_success = True\n context[\"cancel_success\"] = cancel_success\n\n if request.user.stripe_customer_id:\n data = {\"active_subscriptions\": False}\n try:\n data = 
get_stripe_customer_data(request.user.stripe_customer_id)\n except StripeError:\n log.exception(\n \"Stripe subscription data: Stripe error for %s [%s]\",\n request.user.username,\n request.user.email,\n )\n context.update(data)\n\n return render(request, \"payments/management.html\", context)\n", "path": "kuma/payments/views.py"}, {"content": "from decorator_include import decorator_include\nfrom django.conf import settings\nfrom django.contrib import admin\nfrom django.shortcuts import render\nfrom django.urls import include, re_path, reverse_lazy\nfrom django.views.decorators.cache import never_cache\nfrom django.views.generic import RedirectView\nfrom django.views.static import serve\n\nfrom kuma.attachments import views as attachment_views\nfrom kuma.core import views as core_views\nfrom kuma.core.decorators import ensure_wiki_domain, shared_cache_control\nfrom kuma.core.urlresolvers import i18n_patterns\nfrom kuma.dashboards.urls import lang_urlpatterns as dashboards_lang_urlpatterns\nfrom kuma.dashboards.views import index as dashboards_index\nfrom kuma.landing.urls import lang_urlpatterns as landing_lang_urlpatterns\nfrom kuma.payments import views as payment_views\nfrom kuma.payments.urls import lang_urlpatterns as payments_lang_urlpatterns\nfrom kuma.search.urls import (\n lang_base_urlpatterns as search_lang_base_urlpatterns,\n lang_urlpatterns as search_lang_urlpatterns,\n)\nfrom kuma.users.urls import lang_urlpatterns as users_lang_urlpatterns\nfrom kuma.views import serve_from_media_root\nfrom kuma.wiki.admin import purge_view\nfrom kuma.wiki.urls import lang_urlpatterns as wiki_lang_urlpatterns\nfrom kuma.wiki.views.document import as_json as document_as_json\nfrom kuma.wiki.views.legacy import mindtouch_to_kuma_redirect\n\n\nDAY = 60 * 60 * 24\nWEEK = DAY * 7\nMONTH = DAY * 30\n\nadmin.autodiscover()\n\nhandler403 = core_views.handler403\nhandler404 = core_views.handler404\nhandler500 = core_views.handler500\n\nurlpatterns = [re_path(\"\", include(\"kuma.health.urls\"))]\n# The non-locale-based landing URL's\nurlpatterns += [re_path(\"\", include(\"kuma.landing.urls\"))]\n# The locale-based landing URL's\nurlpatterns += i18n_patterns(re_path(\"\", include(landing_lang_urlpatterns)))\nurlpatterns += i18n_patterns(\n re_path(\n r\"^events\",\n # Here the \"shared_cache_control\" decorator is an optimization. 
It\n # informs the CDN to cache the redirect for a month, so once this URL\n # has been requested by a client, all other client requests will be\n # redirected by the CDN instead of this Django service.\n shared_cache_control(s_maxage=MONTH)(\n RedirectView.as_view(\n url=\"https://mozilla.org/contribute/events\", permanent=False\n )\n ),\n name=\"events\",\n ),\n)\n\nif settings.MAINTENANCE_MODE:\n urlpatterns.append(\n re_path(\n r\"^admin/.*\",\n never_cache(\n RedirectView.as_view(pattern_name=\"maintenance_mode\", permanent=False)\n ),\n )\n )\nelse:\n # Django admin:\n urlpatterns += [\n re_path(\n r\"^admin/wiki/document/purge/\", purge_view, name=\"wiki.admin_bulk_purge\"\n ),\n # We don't worry about decorating the views within django.contrib.admin\n # with \"never_cache\", since most have already been decorated, and the\n # remaining can be safely cached.\n re_path(r\"^admin/\", admin.site.urls),\n ]\n\nurlpatterns += i18n_patterns(re_path(r\"^search/\", include(search_lang_urlpatterns)))\nurlpatterns += i18n_patterns(re_path(r\"^search\", include(search_lang_base_urlpatterns)))\nurlpatterns += i18n_patterns(\n re_path(r\"^docs.json$\", document_as_json, name=\"wiki.json\")\n)\nurlpatterns += i18n_patterns(re_path(r\"^docs/\", include(wiki_lang_urlpatterns)))\nurlpatterns += [re_path(\"\", include(\"kuma.attachments.urls\"))]\nurlpatterns += i18n_patterns(\n re_path(r\"dashboards/?$\", dashboards_index, name=\"dashboards.index\"),\n)\nurlpatterns += i18n_patterns(\n re_path(r\"^dashboards/\", include(dashboards_lang_urlpatterns))\n)\nurlpatterns += [re_path(\"users/\", include(\"kuma.users.urls\"))]\nurlpatterns += i18n_patterns(\n re_path(r\"^payments/$\", payment_views.contribute, name=\"payments\"),\n)\nurlpatterns += i18n_patterns(\n re_path(\n r\"^contribute/$\",\n ensure_wiki_domain(RedirectView.as_view(url=reverse_lazy(\"payments\"))),\n name=\"redirect-to-payments\",\n ),\n)\nurlpatterns += i18n_patterns(re_path(r\"^payments/\", include(payments_lang_urlpatterns)))\nurlpatterns += i18n_patterns(\n re_path(\"\", decorator_include(never_cache, users_lang_urlpatterns))\n)\n\nif settings.MAINTENANCE_MODE:\n urlpatterns += i18n_patterns(\n # Redirect if we try to use the \"tidings\" unsubscribe.\n re_path(\n r\"^unsubscribe/.*\",\n ensure_wiki_domain(\n never_cache(\n RedirectView.as_view(\n pattern_name=\"maintenance_mode\", permanent=False\n )\n )\n ),\n )\n )\nelse:\n urlpatterns += i18n_patterns(\n # The first argument to \"decorator_include\" can be an iterable\n # of view decorators, which are applied in reverse order.\n re_path(\n r\"^\", decorator_include((ensure_wiki_domain, never_cache), \"tidings.urls\")\n ),\n )\n\n\nurlpatterns += [\n # Services and sundry.\n re_path(\"^api/\", include(\"kuma.api.urls\")),\n re_path(\"\", include(\"kuma.version.urls\")),\n # Serve sitemap files.\n re_path(\n r\"^sitemap.xml$\", serve_from_media_root, {\"path\": \"sitemap.xml\"}, name=\"sitemap\"\n ),\n re_path(r\"^(?P<path>sitemaps/.+)$\", serve_from_media_root, name=\"sitemaps\"),\n re_path(r\"^humans.txt$\", core_views.humans_txt, name=\"humans_txt\"),\n re_path(\n r\"^miel$\",\n shared_cache_control(s_maxage=WEEK)(render),\n {\"template_name\": \"500.html\", \"status\": 500},\n name=\"users.honeypot\",\n ),\n # We use our own views for setting language in cookies. 
But to just align with django, set it like this.\n re_path(r\"^i18n/setlang/\", core_views.set_language, name=\"set-language-cookie\"),\n]\n\nif settings.SERVE_LEGACY and settings.LEGACY_ROOT:\n urlpatterns.append(\n re_path(\n r\"^(?P<path>(diagrams|presentations|samples)/.+)$\",\n shared_cache_control(s_maxage=MONTH)(serve),\n {\"document_root\": settings.LEGACY_ROOT},\n )\n )\n\nif getattr(settings, \"DEBUG_TOOLBAR_INSTALLED\", False):\n import debug_toolbar\n\n urlpatterns.append(\n re_path(r\"^__debug__/\", decorator_include(never_cache, debug_toolbar.urls)),\n )\n\n# Legacy MindTouch redirects. These go last so that they don't mess\n# with local instances' ability to serve media.\nurlpatterns += [\n re_path(\n r\"^@api/deki/files/(?P<file_id>\\d+)/=(?P<filename>.+)$\",\n attachment_views.mindtouch_file_redirect,\n name=\"attachments.mindtouch_file_redirect\",\n ),\n re_path(r\"^(?P<path>.*)$\", mindtouch_to_kuma_redirect),\n]\n", "path": "kuma/urls.py"}], "after_files": [{"content": "from django.urls import re_path\n\nfrom . import views\n\nlang_urlpatterns = [\n re_path(r\"^recurring/?$\", views.contribute, name=\"recurring_payment_initial\"),\n re_path(\n r\"^recurring/subscription/?$\",\n views.contribute,\n name=\"recurring_payment_subscription\",\n ),\n re_path(r\"^terms/?$\", views.payment_terms, name=\"payment_terms\"),\n re_path(\n r\"^recurring/management/?$\",\n views.recurring_payment_management,\n name=\"recurring_payment_management\",\n ),\n re_path(r\"\", views.index, name=\"payments_index\"),\n]\n", "path": "kuma/payments/urls.py"}, {"content": "import logging\n\nfrom django.conf import settings\nfrom django.shortcuts import render\nfrom django.views.decorators.cache import never_cache\nfrom stripe.error import StripeError\nfrom waffle.decorators import waffle_flag\n\nfrom kuma.core.decorators import ensure_wiki_domain, login_required\n\nfrom .utils import (\n cancel_stripe_customer_subscription,\n get_stripe_customer_data,\n)\n\nlog = logging.getLogger(\"kuma.payments.views\")\n\n\n@never_cache\ndef index(request):\n return render(request, \"payments/index.html\")\n\n\n@waffle_flag(\"subscription\")\n@ensure_wiki_domain\n@never_cache\ndef contribute(request):\n return render(request, \"payments/payments.html\")\n\n\n@waffle_flag(\"subscription\")\n@ensure_wiki_domain\n@never_cache\ndef payment_terms(request):\n return render(request, \"payments/terms.html\")\n\n\n@waffle_flag(\"subscription\")\n@ensure_wiki_domain\n@login_required\n@never_cache\ndef recurring_payment_management(request):\n context = {\n \"support_mail_link\": \"mailto:\"\n + settings.CONTRIBUTION_SUPPORT_EMAIL\n + \"?Subject=Recurring%20payment%20support\",\n \"support_mail\": settings.CONTRIBUTION_SUPPORT_EMAIL,\n \"cancel_request\": False,\n \"cancel_success\": False,\n }\n\n if request.user.stripe_customer_id and \"stripe_cancel_subscription\" in request.POST:\n context[\"cancel_request\"] = True\n cancel_success = False\n try:\n cancel_stripe_customer_subscription(request.user.stripe_customer_id)\n except StripeError:\n log.exception(\n \"Stripe subscription cancellation: Stripe error for %s [%s]\",\n request.user.username,\n request.user.email,\n )\n else:\n cancel_success = True\n context[\"cancel_success\"] = cancel_success\n\n if request.user.stripe_customer_id:\n data = {\"active_subscriptions\": False}\n try:\n data = get_stripe_customer_data(request.user.stripe_customer_id)\n except StripeError:\n log.exception(\n \"Stripe subscription data: Stripe error for %s [%s]\",\n 
request.user.username,\n request.user.email,\n )\n context.update(data)\n\n return render(request, \"payments/management.html\", context)\n", "path": "kuma/payments/views.py"}, {"content": "from decorator_include import decorator_include\nfrom django.conf import settings\nfrom django.contrib import admin\nfrom django.shortcuts import render\nfrom django.urls import include, re_path, reverse_lazy\nfrom django.views.decorators.cache import never_cache\nfrom django.views.generic import RedirectView\nfrom django.views.static import serve\n\nfrom kuma.attachments import views as attachment_views\nfrom kuma.core import views as core_views\nfrom kuma.core.decorators import ensure_wiki_domain, shared_cache_control\nfrom kuma.core.urlresolvers import i18n_patterns\nfrom kuma.dashboards.urls import lang_urlpatterns as dashboards_lang_urlpatterns\nfrom kuma.dashboards.views import index as dashboards_index\nfrom kuma.landing.urls import lang_urlpatterns as landing_lang_urlpatterns\nfrom kuma.payments.urls import lang_urlpatterns as payments_lang_urlpatterns\nfrom kuma.search.urls import (\n lang_base_urlpatterns as search_lang_base_urlpatterns,\n lang_urlpatterns as search_lang_urlpatterns,\n)\nfrom kuma.users.urls import lang_urlpatterns as users_lang_urlpatterns\nfrom kuma.views import serve_from_media_root\nfrom kuma.wiki.admin import purge_view\nfrom kuma.wiki.urls import lang_urlpatterns as wiki_lang_urlpatterns\nfrom kuma.wiki.views.document import as_json as document_as_json\nfrom kuma.wiki.views.legacy import mindtouch_to_kuma_redirect\n\n\nDAY = 60 * 60 * 24\nWEEK = DAY * 7\nMONTH = DAY * 30\n\nadmin.autodiscover()\n\nhandler403 = core_views.handler403\nhandler404 = core_views.handler404\nhandler500 = core_views.handler500\n\nurlpatterns = [re_path(\"\", include(\"kuma.health.urls\"))]\n# The non-locale-based landing URL's\nurlpatterns += [re_path(\"\", include(\"kuma.landing.urls\"))]\n# The locale-based landing URL's\nurlpatterns += i18n_patterns(re_path(\"\", include(landing_lang_urlpatterns)))\nurlpatterns += i18n_patterns(\n re_path(\n r\"^events\",\n # Here the \"shared_cache_control\" decorator is an optimization. 
It\n # informs the CDN to cache the redirect for a month, so once this URL\n # has been requested by a client, all other client requests will be\n # redirected by the CDN instead of this Django service.\n shared_cache_control(s_maxage=MONTH)(\n RedirectView.as_view(\n url=\"https://mozilla.org/contribute/events\", permanent=False\n )\n ),\n name=\"events\",\n ),\n)\n\nif settings.MAINTENANCE_MODE:\n urlpatterns.append(\n re_path(\n r\"^admin/.*\",\n never_cache(\n RedirectView.as_view(pattern_name=\"maintenance_mode\", permanent=False)\n ),\n )\n )\nelse:\n # Django admin:\n urlpatterns += [\n re_path(\n r\"^admin/wiki/document/purge/\", purge_view, name=\"wiki.admin_bulk_purge\"\n ),\n # We don't worry about decorating the views within django.contrib.admin\n # with \"never_cache\", since most have already been decorated, and the\n # remaining can be safely cached.\n re_path(r\"^admin/\", admin.site.urls),\n ]\n\nurlpatterns += i18n_patterns(re_path(r\"^search/\", include(search_lang_urlpatterns)))\nurlpatterns += i18n_patterns(re_path(r\"^search\", include(search_lang_base_urlpatterns)))\nurlpatterns += i18n_patterns(\n re_path(r\"^docs.json$\", document_as_json, name=\"wiki.json\")\n)\nurlpatterns += i18n_patterns(re_path(r\"^docs/\", include(wiki_lang_urlpatterns)))\nurlpatterns += [re_path(\"\", include(\"kuma.attachments.urls\"))]\nurlpatterns += i18n_patterns(\n re_path(r\"dashboards/?$\", dashboards_index, name=\"dashboards.index\"),\n)\nurlpatterns += i18n_patterns(\n re_path(r\"^dashboards/\", include(dashboards_lang_urlpatterns))\n)\nurlpatterns += [re_path(\"users/\", include(\"kuma.users.urls\"))]\nurlpatterns += i18n_patterns(\n re_path(\n r\"^contribute/$\",\n RedirectView.as_view(url=reverse_lazy(\"payments_index\")),\n name=\"redirect-to-payments\",\n ),\n)\nurlpatterns += i18n_patterns(\n re_path(r\"^payments/?\", include(payments_lang_urlpatterns))\n)\nurlpatterns += i18n_patterns(\n re_path(\"\", decorator_include(never_cache, users_lang_urlpatterns))\n)\n\nif settings.MAINTENANCE_MODE:\n urlpatterns += i18n_patterns(\n # Redirect if we try to use the \"tidings\" unsubscribe.\n re_path(\n r\"^unsubscribe/.*\",\n ensure_wiki_domain(\n never_cache(\n RedirectView.as_view(\n pattern_name=\"maintenance_mode\", permanent=False\n )\n )\n ),\n )\n )\nelse:\n urlpatterns += i18n_patterns(\n # The first argument to \"decorator_include\" can be an iterable\n # of view decorators, which are applied in reverse order.\n re_path(\n r\"^\", decorator_include((ensure_wiki_domain, never_cache), \"tidings.urls\")\n ),\n )\n\n\nurlpatterns += [\n # Services and sundry.\n re_path(\"^api/\", include(\"kuma.api.urls\")),\n re_path(\"\", include(\"kuma.version.urls\")),\n # Serve sitemap files.\n re_path(\n r\"^sitemap.xml$\", serve_from_media_root, {\"path\": \"sitemap.xml\"}, name=\"sitemap\"\n ),\n re_path(r\"^(?P<path>sitemaps/.+)$\", serve_from_media_root, name=\"sitemaps\"),\n re_path(r\"^humans.txt$\", core_views.humans_txt, name=\"humans_txt\"),\n re_path(\n r\"^miel$\",\n shared_cache_control(s_maxage=WEEK)(render),\n {\"template_name\": \"500.html\", \"status\": 500},\n name=\"users.honeypot\",\n ),\n # We use our own views for setting language in cookies. 
But to just align with django, set it like this.\n re_path(r\"^i18n/setlang/\", core_views.set_language, name=\"set-language-cookie\"),\n]\n\nif settings.SERVE_LEGACY and settings.LEGACY_ROOT:\n urlpatterns.append(\n re_path(\n r\"^(?P<path>(diagrams|presentations|samples)/.+)$\",\n shared_cache_control(s_maxage=MONTH)(serve),\n {\"document_root\": settings.LEGACY_ROOT},\n )\n )\n\nif getattr(settings, \"DEBUG_TOOLBAR_INSTALLED\", False):\n import debug_toolbar\n\n urlpatterns.append(\n re_path(r\"^__debug__/\", decorator_include(never_cache, debug_toolbar.urls)),\n )\n\n# Legacy MindTouch redirects. These go last so that they don't mess\n# with local instances' ability to serve media.\nurlpatterns += [\n re_path(\n r\"^@api/deki/files/(?P<file_id>\\d+)/=(?P<filename>.+)$\",\n attachment_views.mindtouch_file_redirect,\n name=\"attachments.mindtouch_file_redirect\",\n ),\n re_path(r\"^(?P<path>.*)$\", mindtouch_to_kuma_redirect),\n]\n", "path": "kuma/urls.py"}]}
| 3,559 | 537 |
gh_patches_debug_24134 | rasdani/github-patches | git_diff | meltano__meltano-6320 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`meltano select tap-gitlab tags "*"` fails on Windows
Example: https://gitlab.com/meltano/legacy-ci/meltano/-/jobs/2653845782
This occurs with Meltano built from `main`, i.e. `fb6ed89799eb341bea295350ea809e5049ee7ca6`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/meltano/cli/cli.py`
Content:
```
1 import logging # noqa: D100
2 import sys
3 import warnings # noqa: F401
4
5 import click
6
7 import meltano
8 from meltano.core.behavior.versioned import IncompatibleVersionError
9 from meltano.core.logging import LEVELS, setup_logging
10 from meltano.core.project import Project, ProjectNotFound
11 from meltano.core.project_settings_service import ProjectSettingsService
12
13 logger = logging.getLogger(__name__)
14
15
16 @click.group(invoke_without_command=True, no_args_is_help=True)
17 @click.option("--log-level", type=click.Choice(LEVELS.keys()))
18 @click.option(
19 "--log-config", type=str, help="Path to a python logging yaml config file."
20 )
21 @click.option("-v", "--verbose", count=True, help="Not used.")
22 @click.option(
23 "--environment",
24 envvar="MELTANO_ENVIRONMENT",
25 help="Meltano environment name.",
26 )
27 @click.option(
28 "--no-environment", is_flag=True, default=False, help="Don't use any environment."
29 )
30 @click.version_option(version=meltano.__version__, prog_name="meltano")
31 @click.pass_context
32 def cli( # noqa: WPS231
33 ctx,
34 log_level: str,
35 log_config: str,
36 verbose: int,
37 environment: str,
38 no_environment: bool,
39 ): # noqa: WPS231
40 """
41 ELT for the DataOps era.
42
43 \b\nRead more at https://www.meltano.com/docs/command-line-interface.html
44 """
45 if log_level:
46 ProjectSettingsService.config_override["cli.log_level"] = log_level
47
48 if log_config:
49 ProjectSettingsService.config_override["cli.log_config"] = log_config
50
51 ctx.ensure_object(dict)
52 ctx.obj["verbosity"] = verbose
53 try: # noqa: WPS229
54 project = Project.find()
55 setup_logging(project)
56
57 readonly = ProjectSettingsService(project).get("project_readonly")
58 if readonly:
59 project.readonly = True
60 if project.readonly:
61 logger.debug("Project is read-only.")
62
63 # detect active environment
64 selected_environment = None
65 if no_environment or (environment and environment.lower() == "null"):
66 logger.info("No environment is active")
67 elif environment:
68 selected_environment = environment
69 elif project.meltano.default_environment:
70 selected_environment = project.meltano.default_environment
71 # activate environment
72 if selected_environment:
73 project.activate_environment(selected_environment)
74 logger.info(
75 "Environment '%s' is active", selected_environment # noqa: WPS323
76 )
77
78 ctx.obj["project"] = project
79 except ProjectNotFound:
80 ctx.obj["project"] = None
81 except IncompatibleVersionError:
82 click.secho(
83 "This Meltano project is incompatible with this version of `meltano`.",
84 fg="yellow",
85 )
86 click.echo(
87 "For more details, visit http://meltano.com/docs/installation.html#upgrading-meltano-version"
88 )
89 sys.exit(3)
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/meltano/cli/cli.py b/src/meltano/cli/cli.py
--- a/src/meltano/cli/cli.py
+++ b/src/meltano/cli/cli.py
@@ -1,6 +1,6 @@
import logging # noqa: D100
import sys
-import warnings # noqa: F401
+from typing import NoReturn
import click
@@ -13,7 +13,28 @@
logger = logging.getLogger(__name__)
[email protected](invoke_without_command=True, no_args_is_help=True)
+class NoWindowsGlobbingGroup(click.Group):
+ """A Click group that does not perform glob expansion on Windows.
+
+ This restores the behaviour of Click's globbing to how it was before v8.
+ Click (as of version 8.1.3) ignores quotes around an asterisk, which makes
+ it behave differently than most shells that support globbing, and make some
+ typical Meltano commands fail, e.g. `meltano select tap-gitlab tags "*"`.
+ """
+
+ def main(self, *args, **kwargs) -> NoReturn:
+ """Invoke the Click CLI with Windows globbing disabled.
+
+ Parameters:
+ args: Positional arguments for the Click group.
+ kwargs: Keyword arguments for the Click group.
+ """
+ return super().main(*args, windows_expand_args=False, **kwargs)
+
+
[email protected](
+ cls=NoWindowsGlobbingGroup, invoke_without_command=True, no_args_is_help=True
+)
@click.option("--log-level", type=click.Choice(LEVELS.keys()))
@click.option(
"--log-config", type=str, help="Path to a python logging yaml config file."
|
{"golden_diff": "diff --git a/src/meltano/cli/cli.py b/src/meltano/cli/cli.py\n--- a/src/meltano/cli/cli.py\n+++ b/src/meltano/cli/cli.py\n@@ -1,6 +1,6 @@\n import logging # noqa: D100\n import sys\n-import warnings # noqa: F401\n+from typing import NoReturn\n \n import click\n \n@@ -13,7 +13,28 @@\n logger = logging.getLogger(__name__)\n \n \[email protected](invoke_without_command=True, no_args_is_help=True)\n+class NoWindowsGlobbingGroup(click.Group):\n+ \"\"\"A Click group that does not perform glob expansion on Windows.\n+\n+ This restores the behaviour of Click's globbing to how it was before v8.\n+ Click (as of version 8.1.3) ignores quotes around an asterisk, which makes\n+ it behave differently than most shells that support globbing, and make some\n+ typical Meltano commands fail, e.g. `meltano select tap-gitlab tags \"*\"`.\n+ \"\"\"\n+\n+ def main(self, *args, **kwargs) -> NoReturn:\n+ \"\"\"Invoke the Click CLI with Windows globbing disabled.\n+\n+ Parameters:\n+ args: Positional arguments for the Click group.\n+ kwargs: Keyword arguments for the Click group.\n+ \"\"\"\n+ return super().main(*args, windows_expand_args=False, **kwargs)\n+\n+\[email protected](\n+ cls=NoWindowsGlobbingGroup, invoke_without_command=True, no_args_is_help=True\n+)\n @click.option(\"--log-level\", type=click.Choice(LEVELS.keys()))\n @click.option(\n \"--log-config\", type=str, help=\"Path to a python logging yaml config file.\"\n", "issue": "`meltano select tap-gitlab tags \"*\"` fails on Windows\nExample: https://gitlab.com/meltano/legacy-ci/meltano/-/jobs/2653845782\r\n\r\nThis occurs with Meltano built from `main`, i.e. `fb6ed89799eb341bea295350ea809e5049ee7ca6`.\n", "before_files": [{"content": "import logging # noqa: D100\nimport sys\nimport warnings # noqa: F401\n\nimport click\n\nimport meltano\nfrom meltano.core.behavior.versioned import IncompatibleVersionError\nfrom meltano.core.logging import LEVELS, setup_logging\nfrom meltano.core.project import Project, ProjectNotFound\nfrom meltano.core.project_settings_service import ProjectSettingsService\n\nlogger = logging.getLogger(__name__)\n\n\[email protected](invoke_without_command=True, no_args_is_help=True)\[email protected](\"--log-level\", type=click.Choice(LEVELS.keys()))\[email protected](\n \"--log-config\", type=str, help=\"Path to a python logging yaml config file.\"\n)\[email protected](\"-v\", \"--verbose\", count=True, help=\"Not used.\")\[email protected](\n \"--environment\",\n envvar=\"MELTANO_ENVIRONMENT\",\n help=\"Meltano environment name.\",\n)\[email protected](\n \"--no-environment\", is_flag=True, default=False, help=\"Don't use any environment.\"\n)\[email protected]_option(version=meltano.__version__, prog_name=\"meltano\")\[email protected]_context\ndef cli( # noqa: WPS231\n ctx,\n log_level: str,\n log_config: str,\n verbose: int,\n environment: str,\n no_environment: bool,\n): # noqa: WPS231\n \"\"\"\n ELT for the DataOps era.\n\n \\b\\nRead more at https://www.meltano.com/docs/command-line-interface.html\n \"\"\"\n if log_level:\n ProjectSettingsService.config_override[\"cli.log_level\"] = log_level\n\n if log_config:\n ProjectSettingsService.config_override[\"cli.log_config\"] = log_config\n\n ctx.ensure_object(dict)\n ctx.obj[\"verbosity\"] = verbose\n try: # noqa: WPS229\n project = Project.find()\n setup_logging(project)\n\n readonly = ProjectSettingsService(project).get(\"project_readonly\")\n if readonly:\n project.readonly = True\n if project.readonly:\n logger.debug(\"Project is read-only.\")\n\n # detect 
active environment\n selected_environment = None\n if no_environment or (environment and environment.lower() == \"null\"):\n logger.info(\"No environment is active\")\n elif environment:\n selected_environment = environment\n elif project.meltano.default_environment:\n selected_environment = project.meltano.default_environment\n # activate environment\n if selected_environment:\n project.activate_environment(selected_environment)\n logger.info(\n \"Environment '%s' is active\", selected_environment # noqa: WPS323\n )\n\n ctx.obj[\"project\"] = project\n except ProjectNotFound:\n ctx.obj[\"project\"] = None\n except IncompatibleVersionError:\n click.secho(\n \"This Meltano project is incompatible with this version of `meltano`.\",\n fg=\"yellow\",\n )\n click.echo(\n \"For more details, visit http://meltano.com/docs/installation.html#upgrading-meltano-version\"\n )\n sys.exit(3)\n", "path": "src/meltano/cli/cli.py"}], "after_files": [{"content": "import logging # noqa: D100\nimport sys\nfrom typing import NoReturn\n\nimport click\n\nimport meltano\nfrom meltano.core.behavior.versioned import IncompatibleVersionError\nfrom meltano.core.logging import LEVELS, setup_logging\nfrom meltano.core.project import Project, ProjectNotFound\nfrom meltano.core.project_settings_service import ProjectSettingsService\n\nlogger = logging.getLogger(__name__)\n\n\nclass NoWindowsGlobbingGroup(click.Group):\n \"\"\"A Click group that does not perform glob expansion on Windows.\n\n This restores the behaviour of Click's globbing to how it was before v8.\n Click (as of version 8.1.3) ignores quotes around an asterisk, which makes\n it behave differently than most shells that support globbing, and make some\n typical Meltano commands fail, e.g. `meltano select tap-gitlab tags \"*\"`.\n \"\"\"\n\n def main(self, *args, **kwargs) -> NoReturn:\n \"\"\"Invoke the Click CLI with Windows globbing disabled.\n\n Parameters:\n args: Positional arguments for the Click group.\n kwargs: Keyword arguments for the Click group.\n \"\"\"\n return super().main(*args, windows_expand_args=False, **kwargs)\n\n\[email protected](\n cls=NoWindowsGlobbingGroup, invoke_without_command=True, no_args_is_help=True\n)\[email protected](\"--log-level\", type=click.Choice(LEVELS.keys()))\[email protected](\n \"--log-config\", type=str, help=\"Path to a python logging yaml config file.\"\n)\[email protected](\"-v\", \"--verbose\", count=True, help=\"Not used.\")\[email protected](\n \"--environment\",\n envvar=\"MELTANO_ENVIRONMENT\",\n help=\"Meltano environment name.\",\n)\[email protected](\n \"--no-environment\", is_flag=True, default=False, help=\"Don't use any environment.\"\n)\[email protected]_option(version=meltano.__version__, prog_name=\"meltano\")\[email protected]_context\ndef cli( # noqa: WPS231\n ctx,\n log_level: str,\n log_config: str,\n verbose: int,\n environment: str,\n no_environment: bool,\n): # noqa: WPS231\n \"\"\"\n ELT for the DataOps era.\n\n \\b\\nRead more at https://www.meltano.com/docs/command-line-interface.html\n \"\"\"\n if log_level:\n ProjectSettingsService.config_override[\"cli.log_level\"] = log_level\n\n if log_config:\n ProjectSettingsService.config_override[\"cli.log_config\"] = log_config\n\n ctx.ensure_object(dict)\n ctx.obj[\"verbosity\"] = verbose\n try: # noqa: WPS229\n project = Project.find()\n setup_logging(project)\n\n readonly = ProjectSettingsService(project).get(\"project_readonly\")\n if readonly:\n project.readonly = True\n if project.readonly:\n logger.debug(\"Project is read-only.\")\n\n 
# detect active environment\n selected_environment = None\n if no_environment or (environment and environment.lower() == \"null\"):\n logger.info(\"No environment is active\")\n elif environment:\n selected_environment = environment\n elif project.meltano.default_environment:\n selected_environment = project.meltano.default_environment\n # activate environment\n if selected_environment:\n project.activate_environment(selected_environment)\n logger.info(\n \"Environment '%s' is active\", selected_environment # noqa: WPS323\n )\n\n ctx.obj[\"project\"] = project\n except ProjectNotFound:\n ctx.obj[\"project\"] = None\n except IncompatibleVersionError:\n click.secho(\n \"This Meltano project is incompatible with this version of `meltano`.\",\n fg=\"yellow\",\n )\n click.echo(\n \"For more details, visit http://meltano.com/docs/installation.html#upgrading-meltano-version\"\n )\n sys.exit(3)\n", "path": "src/meltano/cli/cli.py"}]}
| 1,192 | 384 |
gh_patches_debug_30452 | rasdani/github-patches | git_diff | mozilla__kitsune-4541 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Importing ABC directly from collections has been deprecated and will be removed in Python 3.10
Use `collections` for Python 2 and `collections.abc` for Python 3.4+
https://github.com/mozilla/kitsune/blob/db42cf7841b75ff0d4b6d3b21ac2524636110429/kitsune/wiki/widgets.py#L34
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kitsune/wiki/widgets.py`
Content:
```
1 import collections
2
3 from django import forms
4 from django.template.loader import render_to_string
5
6 from kitsune.products.models import Topic
7 from kitsune.wiki.models import Document
8
9
10 class ProductTopicsAndSubtopicsWidget(forms.widgets.SelectMultiple):
11 """A widget to render topics organized by product and with subtopics."""
12
13 def render(self, name, value, attrs=None, renderer=None):
14 topics_and_subtopics = Topic.objects.all()
15 topics = [t for t in topics_and_subtopics if t.parent_id is None]
16
17 for topic in topics:
18 self.process_topic(value, topic)
19
20 topic.my_subtopics = [t for t in topics_and_subtopics if t.parent_id == topic.id]
21
22 for subtopic in topic.my_subtopics:
23 self.process_topic(value, subtopic)
24
25 return render_to_string(
26 "wiki/includes/product_topics_widget.html",
27 {
28 "topics": topics,
29 "name": name,
30 },
31 )
32
33 def process_topic(self, value, topic):
34 if isinstance(value, int) and topic.id == value:
35 topic.checked = True
36 elif (
37 not isinstance(value, str)
38 and isinstance(value, collections.Iterable)
39 and topic.id in value
40 ):
41 topic.checked = True
42 else:
43 topic.checked = False
44
45
46 class RelatedDocumentsWidget(forms.widgets.SelectMultiple):
47 """A widget to render the related documents list and search field."""
48
49 def render(self, name, value, attrs=None, renderer=None):
50 if isinstance(value, int):
51 related_documents = Document.objects.filter(id__in=[value])
52 elif not isinstance(value, str) and isinstance(value, collections.Iterable):
53 related_documents = Document.objects.filter(id__in=value)
54 else:
55 related_documents = Document.objects.none()
56
57 return render_to_string(
58 "wiki/includes/related_docs_widget.html",
59 {"related_documents": related_documents, "name": name},
60 )
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kitsune/wiki/widgets.py b/kitsune/wiki/widgets.py
--- a/kitsune/wiki/widgets.py
+++ b/kitsune/wiki/widgets.py
@@ -1,4 +1,3 @@
-import collections
from django import forms
from django.template.loader import render_to_string
@@ -6,6 +5,11 @@
from kitsune.products.models import Topic
from kitsune.wiki.models import Document
+try:
+ from collections.abc import Iterable
+except ImportError:
+ from collections import Iterable
+
class ProductTopicsAndSubtopicsWidget(forms.widgets.SelectMultiple):
"""A widget to render topics organized by product and with subtopics."""
@@ -35,7 +39,7 @@
topic.checked = True
elif (
not isinstance(value, str)
- and isinstance(value, collections.Iterable)
+ and isinstance(value, Iterable)
and topic.id in value
):
topic.checked = True
@@ -49,7 +53,7 @@
def render(self, name, value, attrs=None, renderer=None):
if isinstance(value, int):
related_documents = Document.objects.filter(id__in=[value])
- elif not isinstance(value, str) and isinstance(value, collections.Iterable):
+ elif not isinstance(value, str) and isinstance(value, Iterable):
related_documents = Document.objects.filter(id__in=value)
else:
related_documents = Document.objects.none()
|
{"golden_diff": "diff --git a/kitsune/wiki/widgets.py b/kitsune/wiki/widgets.py\n--- a/kitsune/wiki/widgets.py\n+++ b/kitsune/wiki/widgets.py\n@@ -1,4 +1,3 @@\n-import collections\n \n from django import forms\n from django.template.loader import render_to_string\n@@ -6,6 +5,11 @@\n from kitsune.products.models import Topic\n from kitsune.wiki.models import Document\n \n+try:\n+ from collections.abc import Iterable\n+except ImportError:\n+ from collections import Iterable\n+\n \n class ProductTopicsAndSubtopicsWidget(forms.widgets.SelectMultiple):\n \"\"\"A widget to render topics organized by product and with subtopics.\"\"\"\n@@ -35,7 +39,7 @@\n topic.checked = True\n elif (\n not isinstance(value, str)\n- and isinstance(value, collections.Iterable)\n+ and isinstance(value, Iterable)\n and topic.id in value\n ):\n topic.checked = True\n@@ -49,7 +53,7 @@\n def render(self, name, value, attrs=None, renderer=None):\n if isinstance(value, int):\n related_documents = Document.objects.filter(id__in=[value])\n- elif not isinstance(value, str) and isinstance(value, collections.Iterable):\n+ elif not isinstance(value, str) and isinstance(value, Iterable):\n related_documents = Document.objects.filter(id__in=value)\n else:\n related_documents = Document.objects.none()\n", "issue": "Importing ABC directly from collections has been deprecated and will be removed in Python 3.10\nUse `collections` for Python 2 and `collections.abc` for Python 3.4+\r\n\r\nhttps://github.com/mozilla/kitsune/blob/db42cf7841b75ff0d4b6d3b21ac2524636110429/kitsune/wiki/widgets.py#L34\n", "before_files": [{"content": "import collections\n\nfrom django import forms\nfrom django.template.loader import render_to_string\n\nfrom kitsune.products.models import Topic\nfrom kitsune.wiki.models import Document\n\n\nclass ProductTopicsAndSubtopicsWidget(forms.widgets.SelectMultiple):\n \"\"\"A widget to render topics organized by product and with subtopics.\"\"\"\n\n def render(self, name, value, attrs=None, renderer=None):\n topics_and_subtopics = Topic.objects.all()\n topics = [t for t in topics_and_subtopics if t.parent_id is None]\n\n for topic in topics:\n self.process_topic(value, topic)\n\n topic.my_subtopics = [t for t in topics_and_subtopics if t.parent_id == topic.id]\n\n for subtopic in topic.my_subtopics:\n self.process_topic(value, subtopic)\n\n return render_to_string(\n \"wiki/includes/product_topics_widget.html\",\n {\n \"topics\": topics,\n \"name\": name,\n },\n )\n\n def process_topic(self, value, topic):\n if isinstance(value, int) and topic.id == value:\n topic.checked = True\n elif (\n not isinstance(value, str)\n and isinstance(value, collections.Iterable)\n and topic.id in value\n ):\n topic.checked = True\n else:\n topic.checked = False\n\n\nclass RelatedDocumentsWidget(forms.widgets.SelectMultiple):\n \"\"\"A widget to render the related documents list and search field.\"\"\"\n\n def render(self, name, value, attrs=None, renderer=None):\n if isinstance(value, int):\n related_documents = Document.objects.filter(id__in=[value])\n elif not isinstance(value, str) and isinstance(value, collections.Iterable):\n related_documents = Document.objects.filter(id__in=value)\n else:\n related_documents = Document.objects.none()\n\n return render_to_string(\n \"wiki/includes/related_docs_widget.html\",\n {\"related_documents\": related_documents, \"name\": name},\n )\n", "path": "kitsune/wiki/widgets.py"}], "after_files": [{"content": "\nfrom django import forms\nfrom django.template.loader import render_to_string\n\nfrom 
kitsune.products.models import Topic\nfrom kitsune.wiki.models import Document\n\ntry:\n from collections.abc import Iterable\nexcept ImportError:\n from collections import Iterable\n\n\nclass ProductTopicsAndSubtopicsWidget(forms.widgets.SelectMultiple):\n \"\"\"A widget to render topics organized by product and with subtopics.\"\"\"\n\n def render(self, name, value, attrs=None, renderer=None):\n topics_and_subtopics = Topic.objects.all()\n topics = [t for t in topics_and_subtopics if t.parent_id is None]\n\n for topic in topics:\n self.process_topic(value, topic)\n\n topic.my_subtopics = [t for t in topics_and_subtopics if t.parent_id == topic.id]\n\n for subtopic in topic.my_subtopics:\n self.process_topic(value, subtopic)\n\n return render_to_string(\n \"wiki/includes/product_topics_widget.html\",\n {\n \"topics\": topics,\n \"name\": name,\n },\n )\n\n def process_topic(self, value, topic):\n if isinstance(value, int) and topic.id == value:\n topic.checked = True\n elif (\n not isinstance(value, str)\n and isinstance(value, Iterable)\n and topic.id in value\n ):\n topic.checked = True\n else:\n topic.checked = False\n\n\nclass RelatedDocumentsWidget(forms.widgets.SelectMultiple):\n \"\"\"A widget to render the related documents list and search field.\"\"\"\n\n def render(self, name, value, attrs=None, renderer=None):\n if isinstance(value, int):\n related_documents = Document.objects.filter(id__in=[value])\n elif not isinstance(value, str) and isinstance(value, Iterable):\n related_documents = Document.objects.filter(id__in=value)\n else:\n related_documents = Document.objects.none()\n\n return render_to_string(\n \"wiki/includes/related_docs_widget.html\",\n {\"related_documents\": related_documents, \"name\": name},\n )\n", "path": "kitsune/wiki/widgets.py"}]}
| 880 | 306 |
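For quick reference, here is a standalone sketch of the compatibility import applied in the patch above. The `normalize_ids` helper is illustrative only (not part of the kitsune codebase); the try/except import is the pattern from the golden diff.

```python
try:
    # Python 3.3+ exposes the ABCs under collections.abc; the aliases in
    # collections itself were removed in Python 3.10.
    from collections.abc import Iterable
except ImportError:  # Python 2 fallback
    from collections import Iterable


def normalize_ids(value):
    """Coerce an int, an iterable of ints, or anything else into a list of ids."""
    if isinstance(value, int):
        return [value]
    if not isinstance(value, str) and isinstance(value, Iterable):
        return list(value)
    return []


print(normalize_ids(3))          # [3]
print(normalize_ids((1, 2, 5)))  # [1, 2, 5]
print(normalize_ids("7"))        # [] - strings are deliberately excluded
```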
gh_patches_debug_41510
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-680
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Adding Dice Coefficient Metric for Image Segmentation
I wanted to implement Dice Coefficient as a metric for image segmentation problems. Any interest?

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ignite/metrics/__init__.py`
Content:
```
1 from ignite.metrics.metric import Metric
2 from ignite.metrics.accuracy import Accuracy
3 from ignite.metrics.loss import Loss
4 from ignite.metrics.mean_absolute_error import MeanAbsoluteError
5 from ignite.metrics.mean_pairwise_distance import MeanPairwiseDistance
6 from ignite.metrics.mean_squared_error import MeanSquaredError
7 from ignite.metrics.epoch_metric import EpochMetric
8 from ignite.metrics.precision import Precision
9 from ignite.metrics.recall import Recall
10 from ignite.metrics.root_mean_squared_error import RootMeanSquaredError
11 from ignite.metrics.top_k_categorical_accuracy import TopKCategoricalAccuracy
12 from ignite.metrics.running_average import RunningAverage
13 from ignite.metrics.metrics_lambda import MetricsLambda
14 from ignite.metrics.confusion_matrix import ConfusionMatrix, IoU, mIoU
15 from ignite.metrics.accumulation import VariableAccumulation, Average, GeometricAverage
16 from ignite.metrics.fbeta import Fbeta
17
```
Path: `ignite/metrics/confusion_matrix.py`
Content:
```
1 import numbers
2
3 import torch
4
5 from ignite.metrics import Metric, MetricsLambda
6 from ignite.exceptions import NotComputableError
7 from ignite.metrics.metric import sync_all_reduce, reinit__is_reduced
8
9
10 class ConfusionMatrix(Metric):
11 """Calculates confusion matrix for multi-class data.
12
13 - `update` must receive output of the form `(y_pred, y)` or `{'y_pred': y_pred, 'y': y}`.
14 - `y_pred` must contain logits and has the following shape (batch_size, num_categories, ...)
15 - `y` should have the following shape (batch_size, ...) and contains ground-truth class indices
16 with or without the background class. During the computation, argmax of `y_pred` is taken to determine
17 predicted classes.
18
19 Args:
20 num_classes (int): number of classes. See notes for more details.
21 average (str, optional): confusion matrix values averaging schema: None, "samples", "recall", "precision".
22 Default is None. If `average="samples"` then confusion matrix values are normalized by the number of seen
23 samples. If `average="recall"` then confusion matrix values are normalized such that diagonal values
24 represent class recalls. If `average="precision"` then confusion matrix values are normalized such that
25 diagonal values represent class precisions.
26 output_transform (callable, optional): a callable that is used to transform the
27 :class:`~ignite.engine.Engine`'s `process_function`'s output into the
28 form expected by the metric. This can be useful if, for example, you have a multi-output model and
29 you want to compute the metric with respect to one of the outputs.
30 device (str of torch.device, optional): device specification in case of distributed computation usage.
31 In most of the cases, it can be defined as "cuda:local_rank" or "cuda"
32 if already set `torch.cuda.set_device(local_rank)`. By default, if a distributed process group is
33 initialized and available, device is set to `cuda`.
34
35 Note:
36 In case of the targets `y` in `(batch_size, ...)` format, target indices between 0 and `num_classes` only
37 contribute to the confusion matrix and others are neglected. For example, if `num_classes=20` and target index
38 equal 255 is encountered, then it is filtered out.
39
40 """
41
42 def __init__(self, num_classes, average=None, output_transform=lambda x: x, device=None):
43 if average is not None and average not in ("samples", "recall", "precision"):
44 raise ValueError("Argument average can None or one of ['samples', 'recall', 'precision']")
45
46 self.num_classes = num_classes
47 self._num_examples = 0
48 self.average = average
49 self.confusion_matrix = None
50 super(ConfusionMatrix, self).__init__(output_transform=output_transform, device=device)
51
52 @reinit__is_reduced
53 def reset(self):
54 self.confusion_matrix = torch.zeros(self.num_classes, self.num_classes,
55 dtype=torch.int64,
56 device=self._device)
57 self._num_examples = 0
58
59 def _check_shape(self, output):
60 y_pred, y = output
61
62 if y_pred.ndimension() < 2:
63 raise ValueError("y_pred must have shape (batch_size, num_categories, ...), "
64 "but given {}".format(y_pred.shape))
65
66 if y_pred.shape[1] != self.num_classes:
67 raise ValueError("y_pred does not have correct number of categories: {} vs {}"
68 .format(y_pred.shape[1], self.num_classes))
69
70 if not (y.ndimension() + 1 == y_pred.ndimension()):
71 raise ValueError("y_pred must have shape (batch_size, num_categories, ...) and y must have "
72 "shape of (batch_size, ...), "
73 "but given {} vs {}.".format(y.shape, y_pred.shape))
74
75 y_shape = y.shape
76 y_pred_shape = y_pred.shape
77
78 if y.ndimension() + 1 == y_pred.ndimension():
79 y_pred_shape = (y_pred_shape[0],) + y_pred_shape[2:]
80
81 if y_shape != y_pred_shape:
82 raise ValueError("y and y_pred must have compatible shapes.")
83
84 @reinit__is_reduced
85 def update(self, output):
86 self._check_shape(output)
87 y_pred, y = output
88
89 self._num_examples += y_pred.shape[0]
90
91 # target is (batch_size, ...)
92 y_pred = torch.argmax(y_pred, dim=1).flatten()
93 y = y.flatten()
94
95 target_mask = (y >= 0) & (y < self.num_classes)
96 y = y[target_mask]
97 y_pred = y_pred[target_mask]
98
99 indices = self.num_classes * y + y_pred
100 m = torch.bincount(indices, minlength=self.num_classes ** 2).reshape(self.num_classes, self.num_classes)
101 self.confusion_matrix += m.to(self.confusion_matrix)
102
103 @sync_all_reduce('confusion_matrix', '_num_examples')
104 def compute(self):
105 if self._num_examples == 0:
106 raise NotComputableError('Confusion matrix must have at least one example before it can be computed.')
107 if self.average:
108 self.confusion_matrix = self.confusion_matrix.float()
109 if self.average == "samples":
110 return self.confusion_matrix / self._num_examples
111 elif self.average == "recall":
112 return self.confusion_matrix / (self.confusion_matrix.sum(dim=1) + 1e-15)
113 elif self.average == "precision":
114 return self.confusion_matrix / (self.confusion_matrix.sum(dim=0) + 1e-15)
115 return self.confusion_matrix
116
117
118 def IoU(cm, ignore_index=None):
119 """Calculates Intersection over Union
120
121 Args:
122 cm (ConfusionMatrix): instance of confusion matrix metric
123 ignore_index (int, optional): index to ignore, e.g. background index
124
125 Returns:
126 MetricsLambda
127
128 Examples:
129
130 .. code-block:: python
131
132 train_evaluator = ...
133
134 cm = ConfusionMatrix(num_classes=num_classes)
135 IoU(cm, ignore_index=0).attach(train_evaluator, 'IoU')
136
137 state = train_evaluator.run(train_dataset)
138 # state.metrics['IoU'] -> tensor of shape (num_classes - 1, )
139
140 """
141 if not isinstance(cm, ConfusionMatrix):
142 raise TypeError("Argument cm should be instance of ConfusionMatrix, but given {}".format(type(cm)))
143
144 if ignore_index is not None:
145 if not (isinstance(ignore_index, numbers.Integral) and 0 <= ignore_index < cm.num_classes):
146 raise ValueError("ignore_index should be non-negative integer, but given {}".format(ignore_index))
147
148 # Increase floating point precision and pass to CPU
149 cm = cm.type(torch.DoubleTensor)
150 iou = cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag() + 1e-15)
151 if ignore_index is not None:
152
153 def ignore_index_fn(iou_vector):
154 if ignore_index >= len(iou_vector):
155 raise ValueError("ignore_index {} is larger than the length of IoU vector {}"
156 .format(ignore_index, len(iou_vector)))
157 indices = list(range(len(iou_vector)))
158 indices.remove(ignore_index)
159 return iou_vector[indices]
160
161 return MetricsLambda(ignore_index_fn, iou)
162 else:
163 return iou
164
165
166 def mIoU(cm, ignore_index=None):
167 """Calculates mean Intersection over Union
168
169 Args:
170 cm (ConfusionMatrix): instance of confusion matrix metric
171 ignore_index (int, optional): index to ignore, e.g. background index
172
173 Returns:
174 MetricsLambda
175
176 Examples:
177
178 .. code-block:: python
179
180 train_evaluator = ...
181
182 cm = ConfusionMatrix(num_classes=num_classes)
183 mIoU(cm, ignore_index=0).attach(train_evaluator, 'mean IoU')
184
185 state = train_evaluator.run(train_dataset)
186 # state.metrics['mean IoU'] -> scalar
187
188
189 """
190 return IoU(cm=cm, ignore_index=ignore_index).mean()
191
192
193 def cmAccuracy(cm):
194 """
195 Calculates accuracy using :class:`~ignite.metrics.ConfusionMatrix` metric.
196 Args:
197 cm (ConfusionMatrix): instance of confusion matrix metric
198
199 Returns:
200 MetricsLambda
201 """
202 # Increase floating point precision and pass to CPU
203 cm = cm.type(torch.DoubleTensor)
204 return cm.diag().sum() / (cm.sum() + 1e-15)
205
206
207 def cmPrecision(cm, average=True):
208 """
209 Calculates precision using :class:`~ignite.metrics.ConfusionMatrix` metric.
210 Args:
211 cm (ConfusionMatrix): instance of confusion matrix metric
212 average (bool, optional): if True metric value is averaged over all classes
213 Returns:
214 MetricsLambda
215 """
216
217 # Increase floating point precision and pass to CPU
218 cm = cm.type(torch.DoubleTensor)
219 precision = cm.diag() / (cm.sum(dim=0) + 1e-15)
220 if average:
221 return precision.mean()
222 return precision
223
224
225 def cmRecall(cm, average=True):
226 """
227 Calculates recall using :class:`~ignite.metrics.ConfusionMatrix` metric.
228 Args:
229 cm (ConfusionMatrix): instance of confusion matrix metric
230 average (bool, optional): if True metric value is averaged over all classes
231 Returns:
232 MetricsLambda
233 """
234
235 # Increase floating point precision and pass to CPU
236 cm = cm.type(torch.DoubleTensor)
237 recall = cm.diag() / (cm.sum(dim=1) + 1e-15)
238 if average:
239 return recall.mean()
240 return recall
241
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ignite/metrics/__init__.py b/ignite/metrics/__init__.py
--- a/ignite/metrics/__init__.py
+++ b/ignite/metrics/__init__.py
@@ -11,6 +11,6 @@
from ignite.metrics.top_k_categorical_accuracy import TopKCategoricalAccuracy
from ignite.metrics.running_average import RunningAverage
from ignite.metrics.metrics_lambda import MetricsLambda
-from ignite.metrics.confusion_matrix import ConfusionMatrix, IoU, mIoU
+from ignite.metrics.confusion_matrix import ConfusionMatrix, IoU, mIoU, DiceCoefficient
from ignite.metrics.accumulation import VariableAccumulation, Average, GeometricAverage
from ignite.metrics.fbeta import Fbeta
diff --git a/ignite/metrics/confusion_matrix.py b/ignite/metrics/confusion_matrix.py
--- a/ignite/metrics/confusion_matrix.py
+++ b/ignite/metrics/confusion_matrix.py
@@ -116,7 +116,7 @@
def IoU(cm, ignore_index=None):
- """Calculates Intersection over Union
+ """Calculates Intersection over Union using :class:`~ignite.metrics.ConfusionMatrix` metric.
Args:
cm (ConfusionMatrix): instance of confusion matrix metric
@@ -164,7 +164,7 @@
def mIoU(cm, ignore_index=None):
- """Calculates mean Intersection over Union
+ """Calculates mean Intersection over Union using :class:`~ignite.metrics.ConfusionMatrix` metric.
Args:
cm (ConfusionMatrix): instance of confusion matrix metric
@@ -191,8 +191,8 @@
def cmAccuracy(cm):
- """
- Calculates accuracy using :class:`~ignite.metrics.ConfusionMatrix` metric.
+ """Calculates accuracy using :class:`~ignite.metrics.ConfusionMatrix` metric.
+
Args:
cm (ConfusionMatrix): instance of confusion matrix metric
@@ -205,8 +205,8 @@
def cmPrecision(cm, average=True):
- """
- Calculates precision using :class:`~ignite.metrics.ConfusionMatrix` metric.
+ """Calculates precision using :class:`~ignite.metrics.ConfusionMatrix` metric.
+
Args:
cm (ConfusionMatrix): instance of confusion matrix metric
average (bool, optional): if True metric value is averaged over all classes
@@ -238,3 +238,37 @@
if average:
return recall.mean()
return recall
+
+
+def DiceCoefficient(cm, ignore_index=None):
+ """Calculates Dice Coefficient for a given :class:`~ignite.metrics.ConfusionMatrix` metric.
+
+ Args:
+ cm (ConfusionMatrix): instance of confusion matrix metric
+ ignore_index (int, optional): index to ignore, e.g. background index
+ """
+
+ if not isinstance(cm, ConfusionMatrix):
+ raise TypeError("Argument cm should be instance of ConfusionMatrix, but given {}".format(type(cm)))
+
+ if ignore_index is not None:
+ if not (isinstance(ignore_index, numbers.Integral) and 0 <= ignore_index < cm.num_classes):
+ raise ValueError("ignore_index should be non-negative integer, but given {}".format(ignore_index))
+
+ # Increase floating point precision and pass to CPU
+ cm = cm.type(torch.DoubleTensor)
+ dice = 2.0 * cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) + 1e-15)
+
+ if ignore_index is not None:
+
+ def ignore_index_fn(dice_vector):
+ if ignore_index >= len(dice_vector):
+ raise ValueError("ignore_index {} is larger than the length of Dice vector {}"
+ .format(ignore_index, len(dice_vector)))
+ indices = list(range(len(dice_vector)))
+ indices.remove(ignore_index)
+ return dice_vector[indices]
+
+ return MetricsLambda(ignore_index_fn, dice)
+ else:
+ return dice
|
{"golden_diff": "diff --git a/ignite/metrics/__init__.py b/ignite/metrics/__init__.py\n--- a/ignite/metrics/__init__.py\n+++ b/ignite/metrics/__init__.py\n@@ -11,6 +11,6 @@\n from ignite.metrics.top_k_categorical_accuracy import TopKCategoricalAccuracy\n from ignite.metrics.running_average import RunningAverage\n from ignite.metrics.metrics_lambda import MetricsLambda\n-from ignite.metrics.confusion_matrix import ConfusionMatrix, IoU, mIoU\n+from ignite.metrics.confusion_matrix import ConfusionMatrix, IoU, mIoU, DiceCoefficient\n from ignite.metrics.accumulation import VariableAccumulation, Average, GeometricAverage\n from ignite.metrics.fbeta import Fbeta\ndiff --git a/ignite/metrics/confusion_matrix.py b/ignite/metrics/confusion_matrix.py\n--- a/ignite/metrics/confusion_matrix.py\n+++ b/ignite/metrics/confusion_matrix.py\n@@ -116,7 +116,7 @@\n \n \n def IoU(cm, ignore_index=None):\n- \"\"\"Calculates Intersection over Union\n+ \"\"\"Calculates Intersection over Union using :class:`~ignite.metrics.ConfusionMatrix` metric.\n \n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n@@ -164,7 +164,7 @@\n \n \n def mIoU(cm, ignore_index=None):\n- \"\"\"Calculates mean Intersection over Union\n+ \"\"\"Calculates mean Intersection over Union using :class:`~ignite.metrics.ConfusionMatrix` metric.\n \n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n@@ -191,8 +191,8 @@\n \n \n def cmAccuracy(cm):\n- \"\"\"\n- Calculates accuracy using :class:`~ignite.metrics.ConfusionMatrix` metric.\n+ \"\"\"Calculates accuracy using :class:`~ignite.metrics.ConfusionMatrix` metric.\n+\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n \n@@ -205,8 +205,8 @@\n \n \n def cmPrecision(cm, average=True):\n- \"\"\"\n- Calculates precision using :class:`~ignite.metrics.ConfusionMatrix` metric.\n+ \"\"\"Calculates precision using :class:`~ignite.metrics.ConfusionMatrix` metric.\n+\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n average (bool, optional): if True metric value is averaged over all classes\n@@ -238,3 +238,37 @@\n if average:\n return recall.mean()\n return recall\n+\n+\n+def DiceCoefficient(cm, ignore_index=None):\n+ \"\"\"Calculates Dice Coefficient for a given :class:`~ignite.metrics.ConfusionMatrix` metric.\n+\n+ Args:\n+ cm (ConfusionMatrix): instance of confusion matrix metric\n+ ignore_index (int, optional): index to ignore, e.g. 
background index\n+ \"\"\"\n+\n+ if not isinstance(cm, ConfusionMatrix):\n+ raise TypeError(\"Argument cm should be instance of ConfusionMatrix, but given {}\".format(type(cm)))\n+\n+ if ignore_index is not None:\n+ if not (isinstance(ignore_index, numbers.Integral) and 0 <= ignore_index < cm.num_classes):\n+ raise ValueError(\"ignore_index should be non-negative integer, but given {}\".format(ignore_index))\n+\n+ # Increase floating point precision and pass to CPU\n+ cm = cm.type(torch.DoubleTensor)\n+ dice = 2.0 * cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) + 1e-15)\n+\n+ if ignore_index is not None:\n+\n+ def ignore_index_fn(dice_vector):\n+ if ignore_index >= len(dice_vector):\n+ raise ValueError(\"ignore_index {} is larger than the length of Dice vector {}\"\n+ .format(ignore_index, len(dice_vector)))\n+ indices = list(range(len(dice_vector)))\n+ indices.remove(ignore_index)\n+ return dice_vector[indices]\n+\n+ return MetricsLambda(ignore_index_fn, dice)\n+ else:\n+ return dice\n", "issue": "Adding Dice Coefficient Metric for Image Segmentation\nI wanted to implement Dice Coefficient as a metric for image segmentation problems. any interest?\n", "before_files": [{"content": "from ignite.metrics.metric import Metric\nfrom ignite.metrics.accuracy import Accuracy\nfrom ignite.metrics.loss import Loss\nfrom ignite.metrics.mean_absolute_error import MeanAbsoluteError\nfrom ignite.metrics.mean_pairwise_distance import MeanPairwiseDistance\nfrom ignite.metrics.mean_squared_error import MeanSquaredError\nfrom ignite.metrics.epoch_metric import EpochMetric\nfrom ignite.metrics.precision import Precision\nfrom ignite.metrics.recall import Recall\nfrom ignite.metrics.root_mean_squared_error import RootMeanSquaredError\nfrom ignite.metrics.top_k_categorical_accuracy import TopKCategoricalAccuracy\nfrom ignite.metrics.running_average import RunningAverage\nfrom ignite.metrics.metrics_lambda import MetricsLambda\nfrom ignite.metrics.confusion_matrix import ConfusionMatrix, IoU, mIoU\nfrom ignite.metrics.accumulation import VariableAccumulation, Average, GeometricAverage\nfrom ignite.metrics.fbeta import Fbeta\n", "path": "ignite/metrics/__init__.py"}, {"content": "import numbers\n\nimport torch\n\nfrom ignite.metrics import Metric, MetricsLambda\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import sync_all_reduce, reinit__is_reduced\n\n\nclass ConfusionMatrix(Metric):\n \"\"\"Calculates confusion matrix for multi-class data.\n\n - `update` must receive output of the form `(y_pred, y)` or `{'y_pred': y_pred, 'y': y}`.\n - `y_pred` must contain logits and has the following shape (batch_size, num_categories, ...)\n - `y` should have the following shape (batch_size, ...) and contains ground-truth class indices\n with or without the background class. During the computation, argmax of `y_pred` is taken to determine\n predicted classes.\n\n Args:\n num_classes (int): number of classes. See notes for more details.\n average (str, optional): confusion matrix values averaging schema: None, \"samples\", \"recall\", \"precision\".\n Default is None. If `average=\"samples\"` then confusion matrix values are normalized by the number of seen\n samples. If `average=\"recall\"` then confusion matrix values are normalized such that diagonal values\n represent class recalls. 
If `average=\"precision\"` then confusion matrix values are normalized such that\n diagonal values represent class precisions.\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n device (str of torch.device, optional): device specification in case of distributed computation usage.\n In most of the cases, it can be defined as \"cuda:local_rank\" or \"cuda\"\n if already set `torch.cuda.set_device(local_rank)`. By default, if a distributed process group is\n initialized and available, device is set to `cuda`.\n\n Note:\n In case of the targets `y` in `(batch_size, ...)` format, target indices between 0 and `num_classes` only\n contribute to the confusion matrix and others are neglected. For example, if `num_classes=20` and target index\n equal 255 is encountered, then it is filtered out.\n\n \"\"\"\n\n def __init__(self, num_classes, average=None, output_transform=lambda x: x, device=None):\n if average is not None and average not in (\"samples\", \"recall\", \"precision\"):\n raise ValueError(\"Argument average can None or one of ['samples', 'recall', 'precision']\")\n\n self.num_classes = num_classes\n self._num_examples = 0\n self.average = average\n self.confusion_matrix = None\n super(ConfusionMatrix, self).__init__(output_transform=output_transform, device=device)\n\n @reinit__is_reduced\n def reset(self):\n self.confusion_matrix = torch.zeros(self.num_classes, self.num_classes,\n dtype=torch.int64,\n device=self._device)\n self._num_examples = 0\n\n def _check_shape(self, output):\n y_pred, y = output\n\n if y_pred.ndimension() < 2:\n raise ValueError(\"y_pred must have shape (batch_size, num_categories, ...), \"\n \"but given {}\".format(y_pred.shape))\n\n if y_pred.shape[1] != self.num_classes:\n raise ValueError(\"y_pred does not have correct number of categories: {} vs {}\"\n .format(y_pred.shape[1], self.num_classes))\n\n if not (y.ndimension() + 1 == y_pred.ndimension()):\n raise ValueError(\"y_pred must have shape (batch_size, num_categories, ...) 
and y must have \"\n \"shape of (batch_size, ...), \"\n \"but given {} vs {}.\".format(y.shape, y_pred.shape))\n\n y_shape = y.shape\n y_pred_shape = y_pred.shape\n\n if y.ndimension() + 1 == y_pred.ndimension():\n y_pred_shape = (y_pred_shape[0],) + y_pred_shape[2:]\n\n if y_shape != y_pred_shape:\n raise ValueError(\"y and y_pred must have compatible shapes.\")\n\n @reinit__is_reduced\n def update(self, output):\n self._check_shape(output)\n y_pred, y = output\n\n self._num_examples += y_pred.shape[0]\n\n # target is (batch_size, ...)\n y_pred = torch.argmax(y_pred, dim=1).flatten()\n y = y.flatten()\n\n target_mask = (y >= 0) & (y < self.num_classes)\n y = y[target_mask]\n y_pred = y_pred[target_mask]\n\n indices = self.num_classes * y + y_pred\n m = torch.bincount(indices, minlength=self.num_classes ** 2).reshape(self.num_classes, self.num_classes)\n self.confusion_matrix += m.to(self.confusion_matrix)\n\n @sync_all_reduce('confusion_matrix', '_num_examples')\n def compute(self):\n if self._num_examples == 0:\n raise NotComputableError('Confusion matrix must have at least one example before it can be computed.')\n if self.average:\n self.confusion_matrix = self.confusion_matrix.float()\n if self.average == \"samples\":\n return self.confusion_matrix / self._num_examples\n elif self.average == \"recall\":\n return self.confusion_matrix / (self.confusion_matrix.sum(dim=1) + 1e-15)\n elif self.average == \"precision\":\n return self.confusion_matrix / (self.confusion_matrix.sum(dim=0) + 1e-15)\n return self.confusion_matrix\n\n\ndef IoU(cm, ignore_index=None):\n \"\"\"Calculates Intersection over Union\n\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n ignore_index (int, optional): index to ignore, e.g. background index\n\n Returns:\n MetricsLambda\n\n Examples:\n\n .. code-block:: python\n\n train_evaluator = ...\n\n cm = ConfusionMatrix(num_classes=num_classes)\n IoU(cm, ignore_index=0).attach(train_evaluator, 'IoU')\n\n state = train_evaluator.run(train_dataset)\n # state.metrics['IoU'] -> tensor of shape (num_classes - 1, )\n\n \"\"\"\n if not isinstance(cm, ConfusionMatrix):\n raise TypeError(\"Argument cm should be instance of ConfusionMatrix, but given {}\".format(type(cm)))\n\n if ignore_index is not None:\n if not (isinstance(ignore_index, numbers.Integral) and 0 <= ignore_index < cm.num_classes):\n raise ValueError(\"ignore_index should be non-negative integer, but given {}\".format(ignore_index))\n\n # Increase floating point precision and pass to CPU\n cm = cm.type(torch.DoubleTensor)\n iou = cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag() + 1e-15)\n if ignore_index is not None:\n\n def ignore_index_fn(iou_vector):\n if ignore_index >= len(iou_vector):\n raise ValueError(\"ignore_index {} is larger than the length of IoU vector {}\"\n .format(ignore_index, len(iou_vector)))\n indices = list(range(len(iou_vector)))\n indices.remove(ignore_index)\n return iou_vector[indices]\n\n return MetricsLambda(ignore_index_fn, iou)\n else:\n return iou\n\n\ndef mIoU(cm, ignore_index=None):\n \"\"\"Calculates mean Intersection over Union\n\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n ignore_index (int, optional): index to ignore, e.g. background index\n\n Returns:\n MetricsLambda\n\n Examples:\n\n .. 
code-block:: python\n\n train_evaluator = ...\n\n cm = ConfusionMatrix(num_classes=num_classes)\n mIoU(cm, ignore_index=0).attach(train_evaluator, 'mean IoU')\n\n state = train_evaluator.run(train_dataset)\n # state.metrics['mean IoU'] -> scalar\n\n\n \"\"\"\n return IoU(cm=cm, ignore_index=ignore_index).mean()\n\n\ndef cmAccuracy(cm):\n \"\"\"\n Calculates accuracy using :class:`~ignite.metrics.ConfusionMatrix` metric.\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n\n Returns:\n MetricsLambda\n \"\"\"\n # Increase floating point precision and pass to CPU\n cm = cm.type(torch.DoubleTensor)\n return cm.diag().sum() / (cm.sum() + 1e-15)\n\n\ndef cmPrecision(cm, average=True):\n \"\"\"\n Calculates precision using :class:`~ignite.metrics.ConfusionMatrix` metric.\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n average (bool, optional): if True metric value is averaged over all classes\n Returns:\n MetricsLambda\n \"\"\"\n\n # Increase floating point precision and pass to CPU\n cm = cm.type(torch.DoubleTensor)\n precision = cm.diag() / (cm.sum(dim=0) + 1e-15)\n if average:\n return precision.mean()\n return precision\n\n\ndef cmRecall(cm, average=True):\n \"\"\"\n Calculates recall using :class:`~ignite.metrics.ConfusionMatrix` metric.\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n average (bool, optional): if True metric value is averaged over all classes\n Returns:\n MetricsLambda\n \"\"\"\n\n # Increase floating point precision and pass to CPU\n cm = cm.type(torch.DoubleTensor)\n recall = cm.diag() / (cm.sum(dim=1) + 1e-15)\n if average:\n return recall.mean()\n return recall\n", "path": "ignite/metrics/confusion_matrix.py"}], "after_files": [{"content": "from ignite.metrics.metric import Metric\nfrom ignite.metrics.accuracy import Accuracy\nfrom ignite.metrics.loss import Loss\nfrom ignite.metrics.mean_absolute_error import MeanAbsoluteError\nfrom ignite.metrics.mean_pairwise_distance import MeanPairwiseDistance\nfrom ignite.metrics.mean_squared_error import MeanSquaredError\nfrom ignite.metrics.epoch_metric import EpochMetric\nfrom ignite.metrics.precision import Precision\nfrom ignite.metrics.recall import Recall\nfrom ignite.metrics.root_mean_squared_error import RootMeanSquaredError\nfrom ignite.metrics.top_k_categorical_accuracy import TopKCategoricalAccuracy\nfrom ignite.metrics.running_average import RunningAverage\nfrom ignite.metrics.metrics_lambda import MetricsLambda\nfrom ignite.metrics.confusion_matrix import ConfusionMatrix, IoU, mIoU, DiceCoefficient\nfrom ignite.metrics.accumulation import VariableAccumulation, Average, GeometricAverage\nfrom ignite.metrics.fbeta import Fbeta\n", "path": "ignite/metrics/__init__.py"}, {"content": "import numbers\n\nimport torch\n\nfrom ignite.metrics import Metric, MetricsLambda\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import sync_all_reduce, reinit__is_reduced\n\n\nclass ConfusionMatrix(Metric):\n \"\"\"Calculates confusion matrix for multi-class data.\n\n - `update` must receive output of the form `(y_pred, y)` or `{'y_pred': y_pred, 'y': y}`.\n - `y_pred` must contain logits and has the following shape (batch_size, num_categories, ...)\n - `y` should have the following shape (batch_size, ...) and contains ground-truth class indices\n with or without the background class. During the computation, argmax of `y_pred` is taken to determine\n predicted classes.\n\n Args:\n num_classes (int): number of classes. 
See notes for more details.\n average (str, optional): confusion matrix values averaging schema: None, \"samples\", \"recall\", \"precision\".\n Default is None. If `average=\"samples\"` then confusion matrix values are normalized by the number of seen\n samples. If `average=\"recall\"` then confusion matrix values are normalized such that diagonal values\n represent class recalls. If `average=\"precision\"` then confusion matrix values are normalized such that\n diagonal values represent class precisions.\n output_transform (callable, optional): a callable that is used to transform the\n :class:`~ignite.engine.Engine`'s `process_function`'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n device (str of torch.device, optional): device specification in case of distributed computation usage.\n In most of the cases, it can be defined as \"cuda:local_rank\" or \"cuda\"\n if already set `torch.cuda.set_device(local_rank)`. By default, if a distributed process group is\n initialized and available, device is set to `cuda`.\n\n Note:\n In case of the targets `y` in `(batch_size, ...)` format, target indices between 0 and `num_classes` only\n contribute to the confusion matrix and others are neglected. For example, if `num_classes=20` and target index\n equal 255 is encountered, then it is filtered out.\n\n \"\"\"\n\n def __init__(self, num_classes, average=None, output_transform=lambda x: x, device=None):\n if average is not None and average not in (\"samples\", \"recall\", \"precision\"):\n raise ValueError(\"Argument average can None or one of ['samples', 'recall', 'precision']\")\n\n self.num_classes = num_classes\n self._num_examples = 0\n self.average = average\n self.confusion_matrix = None\n super(ConfusionMatrix, self).__init__(output_transform=output_transform, device=device)\n\n @reinit__is_reduced\n def reset(self):\n self.confusion_matrix = torch.zeros(self.num_classes, self.num_classes,\n dtype=torch.int64,\n device=self._device)\n self._num_examples = 0\n\n def _check_shape(self, output):\n y_pred, y = output\n\n if y_pred.ndimension() < 2:\n raise ValueError(\"y_pred must have shape (batch_size, num_categories, ...), \"\n \"but given {}\".format(y_pred.shape))\n\n if y_pred.shape[1] != self.num_classes:\n raise ValueError(\"y_pred does not have correct number of categories: {} vs {}\"\n .format(y_pred.shape[1], self.num_classes))\n\n if not (y.ndimension() + 1 == y_pred.ndimension()):\n raise ValueError(\"y_pred must have shape (batch_size, num_categories, ...) 
and y must have \"\n \"shape of (batch_size, ...), \"\n \"but given {} vs {}.\".format(y.shape, y_pred.shape))\n\n y_shape = y.shape\n y_pred_shape = y_pred.shape\n\n if y.ndimension() + 1 == y_pred.ndimension():\n y_pred_shape = (y_pred_shape[0],) + y_pred_shape[2:]\n\n if y_shape != y_pred_shape:\n raise ValueError(\"y and y_pred must have compatible shapes.\")\n\n @reinit__is_reduced\n def update(self, output):\n self._check_shape(output)\n y_pred, y = output\n\n self._num_examples += y_pred.shape[0]\n\n # target is (batch_size, ...)\n y_pred = torch.argmax(y_pred, dim=1).flatten()\n y = y.flatten()\n\n target_mask = (y >= 0) & (y < self.num_classes)\n y = y[target_mask]\n y_pred = y_pred[target_mask]\n\n indices = self.num_classes * y + y_pred\n m = torch.bincount(indices, minlength=self.num_classes ** 2).reshape(self.num_classes, self.num_classes)\n self.confusion_matrix += m.to(self.confusion_matrix)\n\n @sync_all_reduce('confusion_matrix', '_num_examples')\n def compute(self):\n if self._num_examples == 0:\n raise NotComputableError('Confusion matrix must have at least one example before it can be computed.')\n if self.average:\n self.confusion_matrix = self.confusion_matrix.float()\n if self.average == \"samples\":\n return self.confusion_matrix / self._num_examples\n elif self.average == \"recall\":\n return self.confusion_matrix / (self.confusion_matrix.sum(dim=1) + 1e-15)\n elif self.average == \"precision\":\n return self.confusion_matrix / (self.confusion_matrix.sum(dim=0) + 1e-15)\n return self.confusion_matrix\n\n\ndef IoU(cm, ignore_index=None):\n \"\"\"Calculates Intersection over Union using :class:`~ignite.metrics.ConfusionMatrix` metric.\n\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n ignore_index (int, optional): index to ignore, e.g. background index\n\n Returns:\n MetricsLambda\n\n Examples:\n\n .. code-block:: python\n\n train_evaluator = ...\n\n cm = ConfusionMatrix(num_classes=num_classes)\n IoU(cm, ignore_index=0).attach(train_evaluator, 'IoU')\n\n state = train_evaluator.run(train_dataset)\n # state.metrics['IoU'] -> tensor of shape (num_classes - 1, )\n\n \"\"\"\n if not isinstance(cm, ConfusionMatrix):\n raise TypeError(\"Argument cm should be instance of ConfusionMatrix, but given {}\".format(type(cm)))\n\n if ignore_index is not None:\n if not (isinstance(ignore_index, numbers.Integral) and 0 <= ignore_index < cm.num_classes):\n raise ValueError(\"ignore_index should be non-negative integer, but given {}\".format(ignore_index))\n\n # Increase floating point precision and pass to CPU\n cm = cm.type(torch.DoubleTensor)\n iou = cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag() + 1e-15)\n if ignore_index is not None:\n\n def ignore_index_fn(iou_vector):\n if ignore_index >= len(iou_vector):\n raise ValueError(\"ignore_index {} is larger than the length of IoU vector {}\"\n .format(ignore_index, len(iou_vector)))\n indices = list(range(len(iou_vector)))\n indices.remove(ignore_index)\n return iou_vector[indices]\n\n return MetricsLambda(ignore_index_fn, iou)\n else:\n return iou\n\n\ndef mIoU(cm, ignore_index=None):\n \"\"\"Calculates mean Intersection over Union using :class:`~ignite.metrics.ConfusionMatrix` metric.\n\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n ignore_index (int, optional): index to ignore, e.g. background index\n\n Returns:\n MetricsLambda\n\n Examples:\n\n .. 
code-block:: python\n\n train_evaluator = ...\n\n cm = ConfusionMatrix(num_classes=num_classes)\n mIoU(cm, ignore_index=0).attach(train_evaluator, 'mean IoU')\n\n state = train_evaluator.run(train_dataset)\n # state.metrics['mean IoU'] -> scalar\n\n\n \"\"\"\n return IoU(cm=cm, ignore_index=ignore_index).mean()\n\n\ndef cmAccuracy(cm):\n \"\"\"Calculates accuracy using :class:`~ignite.metrics.ConfusionMatrix` metric.\n\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n\n Returns:\n MetricsLambda\n \"\"\"\n # Increase floating point precision and pass to CPU\n cm = cm.type(torch.DoubleTensor)\n return cm.diag().sum() / (cm.sum() + 1e-15)\n\n\ndef cmPrecision(cm, average=True):\n \"\"\"Calculates precision using :class:`~ignite.metrics.ConfusionMatrix` metric.\n\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n average (bool, optional): if True metric value is averaged over all classes\n Returns:\n MetricsLambda\n \"\"\"\n\n # Increase floating point precision and pass to CPU\n cm = cm.type(torch.DoubleTensor)\n precision = cm.diag() / (cm.sum(dim=0) + 1e-15)\n if average:\n return precision.mean()\n return precision\n\n\ndef cmRecall(cm, average=True):\n \"\"\"\n Calculates recall using :class:`~ignite.metrics.ConfusionMatrix` metric.\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n average (bool, optional): if True metric value is averaged over all classes\n Returns:\n MetricsLambda\n \"\"\"\n\n # Increase floating point precision and pass to CPU\n cm = cm.type(torch.DoubleTensor)\n recall = cm.diag() / (cm.sum(dim=1) + 1e-15)\n if average:\n return recall.mean()\n return recall\n\n\ndef DiceCoefficient(cm, ignore_index=None):\n \"\"\"Calculates Dice Coefficient for a given :class:`~ignite.metrics.ConfusionMatrix` metric.\n\n Args:\n cm (ConfusionMatrix): instance of confusion matrix metric\n ignore_index (int, optional): index to ignore, e.g. background index\n \"\"\"\n\n if not isinstance(cm, ConfusionMatrix):\n raise TypeError(\"Argument cm should be instance of ConfusionMatrix, but given {}\".format(type(cm)))\n\n if ignore_index is not None:\n if not (isinstance(ignore_index, numbers.Integral) and 0 <= ignore_index < cm.num_classes):\n raise ValueError(\"ignore_index should be non-negative integer, but given {}\".format(ignore_index))\n\n # Increase floating point precision and pass to CPU\n cm = cm.type(torch.DoubleTensor)\n dice = 2.0 * cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) + 1e-15)\n\n if ignore_index is not None:\n\n def ignore_index_fn(dice_vector):\n if ignore_index >= len(dice_vector):\n raise ValueError(\"ignore_index {} is larger than the length of Dice vector {}\"\n .format(ignore_index, len(dice_vector)))\n indices = list(range(len(dice_vector)))\n indices.remove(ignore_index)\n return dice_vector[indices]\n\n return MetricsLambda(ignore_index_fn, dice)\n else:\n return dice\n", "path": "ignite/metrics/confusion_matrix.py"}]}
| 3,252 | 868 |
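As a usage note on the record above, the new `DiceCoefficient` helper is a `MetricsLambda` built on top of a `ConfusionMatrix`, so it can be computed after manual updates as well as attached to an engine. A hedged sketch follows, assuming an ignite build that includes the patch above; the random tensors, `num_classes=3`, and shapes are illustrative stand-ins that follow the docstring contract in the record.

```python
import torch
from ignite.metrics import ConfusionMatrix, DiceCoefficient

num_classes = 3
cm = ConfusionMatrix(num_classes=num_classes)
dice = DiceCoefficient(cm, ignore_index=0)  # MetricsLambda derived from cm

cm.reset()
y_pred = torch.rand(4, num_classes, 8, 8)      # logits: (batch, classes, H, W)
y = torch.randint(0, num_classes, (4, 8, 8))   # targets: (batch, H, W)
cm.update((y_pred, y))

print(dice.compute())  # per-class Dice scores with the background class dropped
```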
gh_patches_debug_32990
|
rasdani/github-patches
|
git_diff
|
numba__numba-3007
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Detect whether LLVM has SVML enabled and override SVML setting if not present
The bug in #2998 (and likely the one reported in the discussion of #2938) has revealed a situation where array expressions will give the wrong answer. Triggering this bug requires:
* The ICC runtime (specifically libsvml) is present in the user's environment
* The user is using an llvmlite statically linked against a version of LLVM that has not been patched with SVML support.
* [Edit: Forgot one more condition] The platform is 64-bit.
We incorrectly assumed that this combination would be extremely unlikely, and if it did happen, LLVM would raise an error. This was a mistake, as it seems:
1. There are a number of Windows users getting llvmlite from the [Chris Gohlke packages](https://www.lfd.uci.edu/~gohlke/pythonlibs/) or from conda-forge, which do not apply all of our LLVM patches. (Note that anyone who gets llvmlite from the Numba channel on Anaconda Cloud, the default channel from Anaconda, or pip installs the wheels posted to PyPI are unaffected by this problem.)
2. Many Windows users of Numba have the ICC runtime present as Intel Fortran is commonly used to compile SciPy for Windows due to challenges getting other Fortran compilers to work on Windows.
3. Unpatched LLVM does not raise an error when we enable SVML autovectorization, but rather seems to generate incorrect code instead.
To correct this, we're going to add a function to llvmlite to verify if the LLVM being used has the patch. (Note, as LLVM is statically linked to llvmlite, it does not matter what LLVM is present on the end user system, only which LLVM is used to build llvmlite.) Numba will look for this function, and if it is not present or returns false (SVML support not available), it will disable the SVML code generation on that platform, regardless of whether the ICC runtime is present.
Although it affects a small number of users, we view this as a critical bug, and will be making patch releases of Numba 0.38 and llvmlite 0.23 outside of our normal release cycle.
Wrong results when calculating power with an array
Since Numba 0.38.0, the following function leads to wrong results:
```python
import numpy as np
import numba as nb
def test(a, b):
return (1. / a) ** b
test_nb = nb.njit(test)
```
Example:
```python
>>> test_nb(0.1 * np.ones(50), np.ones(50))
array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 10., 10.])
```
Expected output (with NumPy):
```python
>>> test(0.1 * np.ones(50), np.ones(50))
array([10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,
10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.])
```
I have only encountered this bug on Windows, and it does not occur on Numba 0.37.0. Relevant output from `conda list`:
```
llvmlite 0.23.0 py36_1 conda-forge
numba 0.38.0 py36h830ac7b_0
numpy 1.14.2 py36h5c71026_0
python 3.6.0 0
```
Let me know if you need more information.
--- END ISSUE ---
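A hedged sketch of the detection logic described above, based on the patch later in this record: the `svml_usable` helper name, the broad exception handling, and the debug print are illustrative assumptions, while `load_library_permanently`, `llvmlite.binding.targets.has_svml` (present only in llvmlite >= 0.23.2), and `set_option` are the calls the fix relies on.

```python
import platform

import llvmlite.binding as llb


def svml_usable(debug=False):
    """Illustrative check: enable SVML only if libsvml loads AND LLVM was patched."""
    try:
        # Library name differs per platform; "svml_dispmd" is the Windows DLL.
        llb.load_library_permanently("svml_dispmd")
    except Exception:
        return False
    has_svml = getattr(llb.targets, "has_svml", None)
    if has_svml is None or not has_svml():
        # Either llvmlite is too old to report, or LLVM lacks the SVML patch:
        # err on the side of caution and keep SVML disabled.
        if debug and platform.machine() == "x86_64":
            print("libsvml found, but this llvmlite/LLVM build cannot use it")
        return False
    llb.set_option("SVML", "-vector-library=SVML")
    return True
```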
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numba/__init__.py`
Content:
```
1 """
2 Expose top-level symbols that are safe for import *
3 """
4 from __future__ import print_function, division, absolute_import
5
6 import platform
7 import re
8 import sys
9
10 from . import config, errors, runtests, types
11
12 # Re-export typeof
13 from .special import typeof, prange, pndindex
14
15 # Re-export error classes
16 from .errors import *
17
18 # Re-export all type names
19 from .types import *
20
21 from .smartarray import SmartArray
22
23 # Re-export decorators
24 from .decorators import autojit, cfunc, generated_jit, jit, njit, stencil
25
26 # Re-export vectorize decorators
27 from .npyufunc import vectorize, guvectorize
28
29 # Re-export Numpy helpers
30 from .numpy_support import carray, farray, from_dtype
31
32 # Re-export jitclass
33 from .jitclass import jitclass
34
35 # Keep this for backward compatibility.
36 test = runtests.main
37
38
39 __all__ = """
40 autojit
41 cfunc
42 from_dtype
43 guvectorize
44 jit
45 jitclass
46 njit
47 stencil
48 typeof
49 prange
50 stencil
51 vectorize
52 """.split() + types.__all__ + errors.__all__
53
54
55 _min_llvmlite_version = (0, 23, 0)
56 _min_llvm_version = (6, 0, 0)
57
58 def _ensure_llvm():
59 """
60 Make sure llvmlite is operational.
61 """
62 import warnings
63 import llvmlite
64
65 # Only look at the the major, minor and bugfix version numbers.
66 # Ignore other stuffs
67 regex = re.compile(r'(\d+)\.(\d+).(\d+)')
68 m = regex.match(llvmlite.__version__)
69 if m:
70 ver = tuple(map(int, m.groups()))
71 if ver < _min_llvmlite_version:
72 msg = ("Numba requires at least version %d.%d.%d of llvmlite.\n"
73 "Installed version is %s.\n"
74 "Please update llvmlite." %
75 (_min_llvmlite_version + (llvmlite.__version__,)))
76 raise ImportError(msg)
77 else:
78 # Not matching?
79 warnings.warn("llvmlite version format not recognized!")
80
81 from llvmlite.binding import llvm_version_info, check_jit_execution
82
83 if llvm_version_info < _min_llvm_version:
84 msg = ("Numba requires at least version %d.%d.%d of LLVM.\n"
85 "Installed llvmlite is built against version %d.%d.%d.\n"
86 "Please update llvmlite." %
87 (_min_llvm_version + llvm_version_info))
88 raise ImportError(msg)
89
90 check_jit_execution()
91
92 def _ensure_pynumpy():
93 """
94 Make sure Python and Numpy have supported versions.
95 """
96 import warnings
97 from . import numpy_support
98
99 pyver = sys.version_info[:2]
100 if pyver < (2, 7) or ((3,) <= pyver < (3, 4)):
101 raise ImportError("Numba needs Python 2.7 or greater, or 3.4 or greater")
102
103 np_version = numpy_support.version[:2]
104 if np_version < (1, 7):
105 raise ImportError("Numba needs Numpy 1.7 or greater")
106
107 def _try_enable_svml():
108 """
109 Tries to enable SVML if configuration permits use and the library is found.
110 """
111 if not config.DISABLE_INTEL_SVML:
112 try:
113 if sys.platform.startswith('linux'):
114 llvmlite.binding.load_library_permanently("libsvml.so")
115 elif sys.platform.startswith('darwin'):
116 llvmlite.binding.load_library_permanently("libsvml.dylib")
117 elif sys.platform.startswith('win'):
118 llvmlite.binding.load_library_permanently("svml_dispmd")
119 else:
120 return False
121 llvmlite.binding.set_option('SVML', '-vector-library=SVML')
122 return True
123 except:
124 if platform.machine == 'x86_64' and config.DEBUG:
125 warnings.warn("SVML was not found/could not be loaded.")
126 return False
127
128 _ensure_llvm()
129 _ensure_pynumpy()
130
131 # we know llvmlite is working as the above tests passed, import it now as SVML
132 # needs to mutate runtime options (sets the `-vector-library`).
133 import llvmlite
134
135 """
136 Is set to True if Intel SVML is in use.
137 """
138 config.USING_SVML = _try_enable_svml()
139
140 from ._version import get_versions
141 __version__ = get_versions()['version']
142 del get_versions
143
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/numba/__init__.py b/numba/__init__.py
--- a/numba/__init__.py
+++ b/numba/__init__.py
@@ -6,6 +6,7 @@
import platform
import re
import sys
+import warnings
from . import config, errors, runtests, types
@@ -118,10 +119,36 @@
llvmlite.binding.load_library_permanently("svml_dispmd")
else:
return False
+ # The SVML library is loaded, therefore SVML *could* be supported.
+ # Now see if LLVM has been compiled with the SVML support patch.
+ # If llvmlite has the checking function `has_svml` and it returns
+ # True, then LLVM was compiled with SVML support and the the setup
+ # for SVML can proceed. We err on the side of caution and if the
+ # checking function is missing, regardless of that being fine for
+ # most 0.23.{0,1} llvmlite instances (i.e. conda or pip installed),
+ # we assume that SVML was not compiled in. llvmlite 0.23.2 is a
+ # bugfix release with the checking function present that will always
+ # produce correct behaviour. For context see: #3006.
+ try:
+ if not getattr(llvmlite.binding.targets, "has_svml")():
+ # has detection function, but no svml compiled in, therefore
+ # disable SVML
+ return False
+ except AttributeError:
+ if platform.machine() == 'x86_64' and config.DEBUG:
+ msg = ("SVML was found but llvmlite >= 0.23.2 is "
+ "needed to support it.")
+ warnings.warn(msg)
+ # does not have detection function, cannot detect reliably,
+ # disable SVML.
+ return False
+
+ # All is well, detection function present and reports SVML is
+ # compiled in, set the vector library to SVML.
llvmlite.binding.set_option('SVML', '-vector-library=SVML')
return True
except:
- if platform.machine == 'x86_64' and config.DEBUG:
+ if platform.machine() == 'x86_64' and config.DEBUG:
warnings.warn("SVML was not found/could not be loaded.")
return False
|
{"golden_diff": "diff --git a/numba/__init__.py b/numba/__init__.py\n--- a/numba/__init__.py\n+++ b/numba/__init__.py\n@@ -6,6 +6,7 @@\n import platform\n import re\n import sys\n+import warnings\n \n from . import config, errors, runtests, types\n \n@@ -118,10 +119,36 @@\n llvmlite.binding.load_library_permanently(\"svml_dispmd\")\n else:\n return False\n+ # The SVML library is loaded, therefore SVML *could* be supported.\n+ # Now see if LLVM has been compiled with the SVML support patch.\n+ # If llvmlite has the checking function `has_svml` and it returns\n+ # True, then LLVM was compiled with SVML support and the the setup\n+ # for SVML can proceed. We err on the side of caution and if the\n+ # checking function is missing, regardless of that being fine for\n+ # most 0.23.{0,1} llvmlite instances (i.e. conda or pip installed),\n+ # we assume that SVML was not compiled in. llvmlite 0.23.2 is a\n+ # bugfix release with the checking function present that will always\n+ # produce correct behaviour. For context see: #3006.\n+ try:\n+ if not getattr(llvmlite.binding.targets, \"has_svml\")():\n+ # has detection function, but no svml compiled in, therefore\n+ # disable SVML\n+ return False\n+ except AttributeError:\n+ if platform.machine() == 'x86_64' and config.DEBUG:\n+ msg = (\"SVML was found but llvmlite >= 0.23.2 is \"\n+ \"needed to support it.\")\n+ warnings.warn(msg)\n+ # does not have detection function, cannot detect reliably,\n+ # disable SVML.\n+ return False\n+\n+ # All is well, detection function present and reports SVML is\n+ # compiled in, set the vector library to SVML.\n llvmlite.binding.set_option('SVML', '-vector-library=SVML')\n return True\n except:\n- if platform.machine == 'x86_64' and config.DEBUG:\n+ if platform.machine() == 'x86_64' and config.DEBUG:\n warnings.warn(\"SVML was not found/could not be loaded.\")\n return False\n", "issue": "Detect whether LLVM has SVML enabled and override SVML setting if not present\nThe bug in #2998 (and likely the one reported in the discussion of #2938) has revealed a situation where array expressions will give the wrong answer. Triggering this bug requires:\r\n\r\n* The ICC runtime (specifically libsvml) is present in the user's environment\r\n* The user is using an llvmlite statically linked against a version of LLVM that has not been patched with SVML support.\r\n* [Edit: Forgot one more condition] The platform is 64-bit.\r\n\r\nWe incorrectly assumed that this combination would be extremely unlikely, and if did happen, LLVM would raise an error. This was a mistake, as it seems:\r\n\r\n1. There are a number of Windows users getting llvmlite from the [Chris Gohlke packages](https://www.lfd.uci.edu/~gohlke/pythonlibs/) or from conda-forge, which do not apply all of our LLVM patches. (Note that anyone who gets llvmlite from the Numba channel on Anaconda Cloud, the default channel from Anaconda, or pip installs the wheels posted to PyPI are unaffected by this problem.)\r\n2. Many Windows users of Numba have the ICC runtime present as Intel Fortran is commonly used to compile SciPy for Windows due to challenges getting other Fortran compilers to work on Windows.\r\n3. Unpatched LLVM does not raise an error when we enable SVML autovectorization, but rather seems to generate incorrect code instead.\r\n\r\nTo correct this, we're going to add a function to llvmlite to verify if the LLVM being used has the patch. 
(Note, as LLVM is statically linked to llvmlite, it does not matter what LLVM is present on the end user system, only which LLVM is used to build llvmlite.) Numba will look for this function, and if it is not present or returns false (SVML support not available), it will disable the SVML code generation on that platform, regardless of whether the ICC runtime is present.\r\n\r\nAlthough it affects a small number of users, we view this as a critical bug, and will be making patch releases of Numba 0.38 and llvmlite 0.23 outside of our normal release cycle.\nWrong results when calculating power with an array\nSince Numba 0.38.0, the following function leads to wrong results:\r\n\r\n```python\r\nimport numpy as np\r\nimport numba as nb\r\n\r\ndef test(a, b):\r\n return (1. / a) ** b\r\n\r\ntest_nb = nb.njit(test)\r\n```\r\n\r\nExample:\r\n\r\n```python\r\n>>> test_nb(0.1 * np.ones(50), np.ones(50))\r\narray([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\r\n 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\r\n 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\r\n 1., 1., 1., 1., 1., 1., 1., 1., 1., 10., 10.])\r\n```\r\n\r\nExpected output (with NumPy):\r\n\r\n```python\r\n>>> test(0.1 * np.ones(50), np.ones(50))\r\narray([10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,\r\n 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,\r\n 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.,\r\n 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.])\r\n```\r\n\r\n\r\nI have only encountered this bug on Windows, and it does not occur on Numba 0.37.0. Relevant output from `conda list`:\r\n\r\n```\r\nllvmlite 0.23.0 py36_1 conda-forge\r\nnumba 0.38.0 py36h830ac7b_0\r\nnumpy 1.14.2 py36h5c71026_0\r\npython 3.6.0 0\r\n```\r\n\r\nLet me know if you need more information.\n", "before_files": [{"content": "\"\"\"\nExpose top-level symbols that are safe for import *\n\"\"\"\nfrom __future__ import print_function, division, absolute_import\n\nimport platform\nimport re\nimport sys\n\nfrom . 
import config, errors, runtests, types\n\n# Re-export typeof\nfrom .special import typeof, prange, pndindex\n\n# Re-export error classes\nfrom .errors import *\n\n# Re-export all type names\nfrom .types import *\n\nfrom .smartarray import SmartArray\n\n# Re-export decorators\nfrom .decorators import autojit, cfunc, generated_jit, jit, njit, stencil\n\n# Re-export vectorize decorators\nfrom .npyufunc import vectorize, guvectorize\n\n# Re-export Numpy helpers\nfrom .numpy_support import carray, farray, from_dtype\n\n# Re-export jitclass\nfrom .jitclass import jitclass\n\n# Keep this for backward compatibility.\ntest = runtests.main\n\n\n__all__ = \"\"\"\n autojit\n cfunc\n from_dtype\n guvectorize\n jit\n jitclass\n njit\n stencil\n typeof\n prange\n stencil\n vectorize\n \"\"\".split() + types.__all__ + errors.__all__\n\n\n_min_llvmlite_version = (0, 23, 0)\n_min_llvm_version = (6, 0, 0)\n\ndef _ensure_llvm():\n \"\"\"\n Make sure llvmlite is operational.\n \"\"\"\n import warnings\n import llvmlite\n\n # Only look at the the major, minor and bugfix version numbers.\n # Ignore other stuffs\n regex = re.compile(r'(\\d+)\\.(\\d+).(\\d+)')\n m = regex.match(llvmlite.__version__)\n if m:\n ver = tuple(map(int, m.groups()))\n if ver < _min_llvmlite_version:\n msg = (\"Numba requires at least version %d.%d.%d of llvmlite.\\n\"\n \"Installed version is %s.\\n\"\n \"Please update llvmlite.\" %\n (_min_llvmlite_version + (llvmlite.__version__,)))\n raise ImportError(msg)\n else:\n # Not matching?\n warnings.warn(\"llvmlite version format not recognized!\")\n\n from llvmlite.binding import llvm_version_info, check_jit_execution\n\n if llvm_version_info < _min_llvm_version:\n msg = (\"Numba requires at least version %d.%d.%d of LLVM.\\n\"\n \"Installed llvmlite is built against version %d.%d.%d.\\n\"\n \"Please update llvmlite.\" %\n (_min_llvm_version + llvm_version_info))\n raise ImportError(msg)\n\n check_jit_execution()\n\ndef _ensure_pynumpy():\n \"\"\"\n Make sure Python and Numpy have supported versions.\n \"\"\"\n import warnings\n from . 
import numpy_support\n\n pyver = sys.version_info[:2]\n if pyver < (2, 7) or ((3,) <= pyver < (3, 4)):\n raise ImportError(\"Numba needs Python 2.7 or greater, or 3.4 or greater\")\n\n np_version = numpy_support.version[:2]\n if np_version < (1, 7):\n raise ImportError(\"Numba needs Numpy 1.7 or greater\")\n\ndef _try_enable_svml():\n \"\"\"\n Tries to enable SVML if configuration permits use and the library is found.\n \"\"\"\n if not config.DISABLE_INTEL_SVML:\n try:\n if sys.platform.startswith('linux'):\n llvmlite.binding.load_library_permanently(\"libsvml.so\")\n elif sys.platform.startswith('darwin'):\n llvmlite.binding.load_library_permanently(\"libsvml.dylib\")\n elif sys.platform.startswith('win'):\n llvmlite.binding.load_library_permanently(\"svml_dispmd\")\n else:\n return False\n llvmlite.binding.set_option('SVML', '-vector-library=SVML')\n return True\n except:\n if platform.machine == 'x86_64' and config.DEBUG:\n warnings.warn(\"SVML was not found/could not be loaded.\")\n return False\n\n_ensure_llvm()\n_ensure_pynumpy()\n\n# we know llvmlite is working as the above tests passed, import it now as SVML\n# needs to mutate runtime options (sets the `-vector-library`).\nimport llvmlite\n\n\"\"\"\nIs set to True if Intel SVML is in use.\n\"\"\"\nconfig.USING_SVML = _try_enable_svml()\n\nfrom ._version import get_versions\n__version__ = get_versions()['version']\ndel get_versions\n", "path": "numba/__init__.py"}], "after_files": [{"content": "\"\"\"\nExpose top-level symbols that are safe for import *\n\"\"\"\nfrom __future__ import print_function, division, absolute_import\n\nimport platform\nimport re\nimport sys\nimport warnings\n\nfrom . import config, errors, runtests, types\n\n# Re-export typeof\nfrom .special import typeof, prange, pndindex\n\n# Re-export error classes\nfrom .errors import *\n\n# Re-export all type names\nfrom .types import *\n\nfrom .smartarray import SmartArray\n\n# Re-export decorators\nfrom .decorators import autojit, cfunc, generated_jit, jit, njit, stencil\n\n# Re-export vectorize decorators\nfrom .npyufunc import vectorize, guvectorize\n\n# Re-export Numpy helpers\nfrom .numpy_support import carray, farray, from_dtype\n\n# Re-export jitclass\nfrom .jitclass import jitclass\n\n# Keep this for backward compatibility.\ntest = runtests.main\n\n\n__all__ = \"\"\"\n autojit\n cfunc\n from_dtype\n guvectorize\n jit\n jitclass\n njit\n stencil\n typeof\n prange\n stencil\n vectorize\n \"\"\".split() + types.__all__ + errors.__all__\n\n\n_min_llvmlite_version = (0, 23, 0)\n_min_llvm_version = (6, 0, 0)\n\ndef _ensure_llvm():\n \"\"\"\n Make sure llvmlite is operational.\n \"\"\"\n import warnings\n import llvmlite\n\n # Only look at the the major, minor and bugfix version numbers.\n # Ignore other stuffs\n regex = re.compile(r'(\\d+)\\.(\\d+).(\\d+)')\n m = regex.match(llvmlite.__version__)\n if m:\n ver = tuple(map(int, m.groups()))\n if ver < _min_llvmlite_version:\n msg = (\"Numba requires at least version %d.%d.%d of llvmlite.\\n\"\n \"Installed version is %s.\\n\"\n \"Please update llvmlite.\" %\n (_min_llvmlite_version + (llvmlite.__version__,)))\n raise ImportError(msg)\n else:\n # Not matching?\n warnings.warn(\"llvmlite version format not recognized!\")\n\n from llvmlite.binding import llvm_version_info, check_jit_execution\n\n if llvm_version_info < _min_llvm_version:\n msg = (\"Numba requires at least version %d.%d.%d of LLVM.\\n\"\n \"Installed llvmlite is built against version %d.%d.%d.\\n\"\n \"Please update llvmlite.\" %\n (_min_llvm_version + 
llvm_version_info))\n raise ImportError(msg)\n\n check_jit_execution()\n\ndef _ensure_pynumpy():\n \"\"\"\n Make sure Python and Numpy have supported versions.\n \"\"\"\n import warnings\n from . import numpy_support\n\n pyver = sys.version_info[:2]\n if pyver < (2, 7) or ((3,) <= pyver < (3, 4)):\n raise ImportError(\"Numba needs Python 2.7 or greater, or 3.4 or greater\")\n\n np_version = numpy_support.version[:2]\n if np_version < (1, 7):\n raise ImportError(\"Numba needs Numpy 1.7 or greater\")\n\ndef _try_enable_svml():\n \"\"\"\n Tries to enable SVML if configuration permits use and the library is found.\n \"\"\"\n if not config.DISABLE_INTEL_SVML:\n try:\n if sys.platform.startswith('linux'):\n llvmlite.binding.load_library_permanently(\"libsvml.so\")\n elif sys.platform.startswith('darwin'):\n llvmlite.binding.load_library_permanently(\"libsvml.dylib\")\n elif sys.platform.startswith('win'):\n llvmlite.binding.load_library_permanently(\"svml_dispmd\")\n else:\n return False\n # The SVML library is loaded, therefore SVML *could* be supported.\n # Now see if LLVM has been compiled with the SVML support patch.\n # If llvmlite has the checking function `has_svml` and it returns\n # True, then LLVM was compiled with SVML support and the the setup\n # for SVML can proceed. We err on the side of caution and if the\n # checking function is missing, regardless of that being fine for\n # most 0.23.{0,1} llvmlite instances (i.e. conda or pip installed),\n # we assume that SVML was not compiled in. llvmlite 0.23.2 is a\n # bugfix release with the checking function present that will always\n # produce correct behaviour. For context see: #3006.\n try:\n if not getattr(llvmlite.binding.targets, \"has_svml\")():\n # has detection function, but no svml compiled in, therefore\n # disable SVML\n return False\n except AttributeError:\n if platform.machine() == 'x86_64' and config.DEBUG:\n msg = (\"SVML was found but llvmlite >= 0.23.2 is \"\n \"needed to support it.\")\n warnings.warn(msg)\n # does not have detection function, cannot detect reliably,\n # disable SVML.\n return False\n\n # All is well, detection function present and reports SVML is\n # compiled in, set the vector library to SVML.\n llvmlite.binding.set_option('SVML', '-vector-library=SVML')\n return True\n except:\n if platform.machine() == 'x86_64' and config.DEBUG:\n warnings.warn(\"SVML was not found/could not be loaded.\")\n return False\n\n_ensure_llvm()\n_ensure_pynumpy()\n\n# we know llvmlite is working as the above tests passed, import it now as SVML\n# needs to mutate runtime options (sets the `-vector-library`).\nimport llvmlite\n\n\"\"\"\nIs set to True if Intel SVML is in use.\n\"\"\"\nconfig.USING_SVML = _try_enable_svml()\n\nfrom ._version import get_versions\n__version__ = get_versions()['version']\ndel get_versions\n", "path": "numba/__init__.py"}]}
| 2,766 | 563 |
gh_patches_debug_59577 | rasdani/github-patches | git_diff | sotetsuk__pgx-186 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pgx v0.0.9: the images folder is not downloaded.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import find_packages, setup
2
3 setup(
4 name="pgx",
5 version="0.0.9",
6 long_description_content_type="text/markdown",
7 description="",
8 url="",
9 author="Sotetsu KOYAMADA",
10 author_email="[email protected]",
11 keywords="",
12 packages=find_packages(),
13 install_requires=["jax", "flax", "svgwrite"],
14 classifiers=[
15 "Programming Language :: Python :: 3.8",
16 ],
17 )
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,6 +10,8 @@
author_email="[email protected]",
keywords="",
packages=find_packages(),
+ package_data={'': ['LICENSE', '*.svg']},
+ include_package_data=True,
install_requires=["jax", "flax", "svgwrite"],
classifiers=[
"Programming Language :: Python :: 3.8",
|
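A quick way to confirm the patch above actually bundles the image assets is to build a distribution and list its contents; a wheel is a plain zip archive, so the standard library suffices. The wheel path below is an assumed example produced by `python -m build`:

```python
import zipfile

# A wheel is a zip archive; list its members and keep the SVG assets.
# The file name is an example; use the wheel actually found under dist/.
with zipfile.ZipFile("dist/pgx-0.0.9-py3-none-any.whl") as wheel:
    svg_files = [name for name in wheel.namelist() if name.endswith(".svg")]

print(svg_files)  # should now include the previously missing image files
```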
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,6 +10,8 @@\n author_email=\"[email protected]\",\n keywords=\"\",\n packages=find_packages(),\n+ package_data={'': ['LICENSE', '*.svg']},\n+ include_package_data=True,\n install_requires=[\"jax\", \"flax\", \"svgwrite\"],\n classifiers=[\n \"Programming Language :: Python :: 3.8\",\n", "issue": "pgx v 0.0.9 \u3067images \u30d5\u30a9\u30eb\u30c0\u304c\u30c0\u30a6\u30f3\u30ed\u30fc\u30c9\u3055\u308c\u306a\u3044.\n\n", "before_files": [{"content": "from setuptools import find_packages, setup\n\nsetup(\n name=\"pgx\",\n version=\"0.0.9\",\n long_description_content_type=\"text/markdown\",\n description=\"\",\n url=\"\",\n author=\"Sotetsu KOYAMADA\",\n author_email=\"[email protected]\",\n keywords=\"\",\n packages=find_packages(),\n install_requires=[\"jax\", \"flax\", \"svgwrite\"],\n classifiers=[\n \"Programming Language :: Python :: 3.8\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import find_packages, setup\n\nsetup(\n name=\"pgx\",\n version=\"0.0.9\",\n long_description_content_type=\"text/markdown\",\n description=\"\",\n url=\"\",\n author=\"Sotetsu KOYAMADA\",\n author_email=\"[email protected]\",\n keywords=\"\",\n packages=find_packages(),\n package_data={'': ['LICENSE', '*.svg']},\n include_package_data=True,\n install_requires=[\"jax\", \"flax\", \"svgwrite\"],\n classifiers=[\n \"Programming Language :: Python :: 3.8\",\n ],\n)\n", "path": "setup.py"}]}
| 419 | 104 |
gh_patches_debug_32803 | rasdani/github-patches | git_diff | PrefectHQ__prefect-5103 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CLI registration inconsistently bumps flow version even when metadata hasn't changed
## Opened from the [Prefect Public Slack Community](https://prefect.io/slack)
**dkmarsh**: Hi Everyone, I'm having an issue with registering flows. Whenever I register the same flow, it bumps the version, even if the metadata is unchanged. I've tried using the CLI as well as flow.register(). I've also tried using flow.register('project_1', idempotency_key=flow.serialized_hash()) and it still bumps the version. I am using a local agent, local storage and the cloud backend. Is there another configuration that needs to be set so this doesn't happen? Or should this all be done in docker?
**kevin701**: Hey <@U022J2DB0UR>, I don’t think moving to Docker will do this. Will ask the team to be sure but what is your use case that you need the version pinned? We might be able to solve this some other way.
**ranugoldan**: I once tried with cli command `prefect register flow` it doesn't bump version, but with `prefect register` it does
**kevin701**: Actually that’s right <@ULCV623AT>! I know some people on the other side have been asking why their flow is not bumping version :sweat_smile:
**amanda.wee**: <@U01QEJ9PP53> when *should* the flow version be bumped? In my ECS-based setup with a bunch of flows packaged together with a local agent, I build the Docker image each time we make a change, even if it is only to tweak a single task. When the Docker container starts up, it runs a shell script that runs the Python scripts that define and register the flows (i.e., not using the CLI). Like what <@ULCV623AT> observed, with `serialized_hash()` the flow versions kept increasing even if it was just because the ECS task was restarted, thereby starting up the Docker container again.
My suspicion was that some Docker container metadata was changing each time, e.g., something host-related, so the serialized hash computation changed. My solution was to write my own hash function that took into account the dependencies (including Prefect version) and the code content, but I wonder if that is overkill.
**kevin701**: Hey <@U01DJDK6AQ7>, correct me if I’m wrong but I think what you’re referring to is that DockerStorage was not respecting the cache and rebuilding everything. Have you seen this <https://github.com/PrefectHQ/prefect/pull/4584|recent PR> that fixed it?.
**amanda.wee**: No, I'm using S3 storage. The Docker image is for ECS, not for Prefect.
**kevin701**: Oh I see what you mean. Ok I’ll ask the team and get back to you
**dkmarsh**: <@U01QEJ9PP53> It was my understanding that the version should only be bumped when the metadata changes. My use case is to automate the registering of flows once they are added or changed by a developer. I wrote a function that looks in a directory and collects all flows and registers them. I would like to not have the versions bumped on flows that have already been registered and have had no changes.
Interestingly enough, I tried testing again this morning and discovered that in using the CLI or python api to register, the version will get bumped every other time. For example, if I run
`prefect register --project Project1 -p src/register.py -n Flow1`
it will register the flow, then if I run the same command a few seconds later, it will skip with the message: "Skipped (metadata unchanged)". However, running it a third time, it will register the flow as version 2. So it seems to be skipping the registration as desired every other time.
**znicholasbrown**: Hi <@U022J2DB0UR> - this sounds like an issue with CLI registration; I'm going to open an issue from this thread for the Core team to look into.
**znicholasbrown**: <@ULVA73B9P> open "CLI registration inconsistently bumps flow version even when metadata hasn't changed"
Original thread can be found [here](https://prefect-community.slack.com/archives/CL09KU1K7/p1622771376434600?thread_ts=1622771376.434600&cid=CL09KU1K7).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/storage/azure.py`
Content:
```
1 import os
2 from typing import TYPE_CHECKING, Any
3
4 import pendulum
5 from slugify import slugify
6
7 import prefect
8 from prefect.client import Secret
9 from prefect.engine.results import AzureResult
10 from prefect.storage import Storage
11 from prefect.utilities.storage import (
12 extract_flow_from_file,
13 flow_to_bytes_pickle,
14 flow_from_bytes_pickle,
15 )
16
17 if TYPE_CHECKING:
18 from prefect.core.flow import Flow
19
20
21 class Azure(Storage):
22 """
23 Azure Blob storage class. This class represents the Storage interface for Flows stored as
24 bytes in an Azure container.
25
26 This storage class optionally takes a `blob_name` which will be the name of the Flow object
27 when stored in Azure. If this key is not provided the Flow upload name will take the form
28 `slugified-flow-name/slugified-current-timestamp`.
29
30 Args:
31 - container (str): the name of the Azure Blob Container to store the Flow
32 - connection_string_secret (str, optional): the name of a Prefect secret
33 that contains an Azure connection string for communicating with
34 Blob storage. If not provided the value set in the environment as
35 `AZURE_STORAGE_CONNECTION_STRING` will be used
36 - blob_name (str, optional): a unique key to use for uploading this Flow to Azure. This
37 is only useful when storing a single Flow using this storage object.
38 - stored_as_script (bool, optional): boolean for specifying if the flow has been stored
39 as a `.py` file. Defaults to `False`
40 - **kwargs (Any, optional): any additional `Storage` initialization options
41 """
42
43 def __init__(
44 self,
45 container: str,
46 connection_string_secret: str = None,
47 blob_name: str = None,
48 stored_as_script: bool = False,
49 **kwargs: Any
50 ) -> None:
51 self.container = container
52 self.connection_string_secret = connection_string_secret
53 self.blob_name = blob_name
54
55 result = AzureResult(
56 connection_string_secret=self.connection_string_secret,
57 container=container,
58 )
59 super().__init__(result=result, stored_as_script=stored_as_script, **kwargs)
60
61 def get_flow(self, flow_name: str) -> "Flow":
62 """
63 Given a flow name within this Storage object, load and return the Flow.
64
65 Args:
66 - flow_name (str): the name of the flow to return.
67
68 Returns:
69 - Flow: the requested flow
70 """
71 if flow_name not in self.flows:
72 raise ValueError("Flow is not contained in this Storage")
73 flow_location = self.flows[flow_name]
74 try:
75 client = self._azure_block_blob_service.get_blob_client(
76 container=self.container, blob=flow_location
77 )
78
79 self.logger.info(
80 "Downloading {} from {}".format(flow_location, self.container)
81 )
82
83 content = client.download_blob().content_as_bytes()
84 except Exception as err:
85 self.logger.error("Error downloading Flow from Azure: {}".format(err))
86 raise
87 if self.stored_as_script:
88 return extract_flow_from_file(file_contents=content, flow_name=flow_name) # type: ignore
89
90 return flow_from_bytes_pickle(content)
91
92 def add_flow(self, flow: "Flow") -> str:
93 """
94 Method for storing a new flow as bytes in an Azure Blob container.
95
96 Args:
97 - flow (Flow): a Prefect Flow to add
98
99 Returns:
100 - str: the key of the newly added Flow in the container
101
102 Raises:
103 - ValueError: if a flow with the same name is already contained in this storage
104 """
105
106 if flow.name in self:
107 raise ValueError(
108 'Name conflict: Flow with the name "{}" is already present in this storage.'.format(
109 flow.name
110 )
111 )
112
113 # create key for Flow that uniquely identifies Flow object in Azure
114 blob_name = self.blob_name or "{}/{}".format(
115 slugify(flow.name), slugify(pendulum.now("utc").isoformat())
116 )
117
118 self.flows[flow.name] = blob_name
119 self._flows[flow.name] = flow
120 return blob_name
121
122 def build(self) -> "Storage":
123 """
124 Build the Azure storage object by uploading Flows to an Azure Blob container.
125 This will upload all of the flows found in `storage.flows`.
126
127 Returns:
128 - Storage: an Azure object that contains information about how and where
129 each flow is stored
130 """
131 self.run_basic_healthchecks()
132
133 if self.stored_as_script:
134 if not self.blob_name:
135 raise ValueError(
136 "A `blob_name` must be provided to show where flow `.py` file is stored in Azure."
137 )
138 return self
139
140 for flow_name, flow in self._flows.items():
141 data = flow_to_bytes_pickle(flow)
142
143 client = self._azure_block_blob_service.get_blob_client(
144 container=self.container, blob=self.flows[flow_name]
145 )
146
147 self.logger.info(
148 "Uploading {} to {}".format(self.flows[flow_name], self.container)
149 )
150
151 client.upload_blob(data)
152
153 return self
154
155 @property
156 def connection_string(self): # type: ignore
157 if self.connection_string_secret is not None:
158 return Secret(self.connection_string_secret).get()
159 conn_string = prefect.context.get("secrets", {}).get(
160 "AZURE_STORAGE_CONNECTION_STRING"
161 ) or os.getenv("AZURE_STORAGE_CONNECTION_STRING")
162 if conn_string is None:
163 raise Exception(
164 "Azure connection string not provided. Set `AZURE_STORAGE_CONNECTION_STRING` environment"
165 " variable or save connection string as Prefect secret."
166 )
167 return conn_string
168
169 @property
170 def _azure_block_blob_service(self): # type: ignore
171 import azure.storage.blob
172
173 return azure.storage.blob.BlobServiceClient.from_connection_string(
174 conn_str=self.connection_string
175 )
176
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/prefect/storage/azure.py b/src/prefect/storage/azure.py
--- a/src/prefect/storage/azure.py
+++ b/src/prefect/storage/azure.py
@@ -35,6 +35,8 @@
`AZURE_STORAGE_CONNECTION_STRING` will be used
- blob_name (str, optional): a unique key to use for uploading this Flow to Azure. This
is only useful when storing a single Flow using this storage object.
+ - overwrite (bool, optional): if set, an existing blob with the same name will be overwritten.
+ By default, an error will be thrown if the blob already exists.
- stored_as_script (bool, optional): boolean for specifying if the flow has been stored
as a `.py` file. Defaults to `False`
- **kwargs (Any, optional): any additional `Storage` initialization options
@@ -45,12 +47,14 @@
container: str,
connection_string_secret: str = None,
blob_name: str = None,
+ overwrite: bool = False,
stored_as_script: bool = False,
**kwargs: Any
) -> None:
self.container = container
self.connection_string_secret = connection_string_secret
self.blob_name = blob_name
+ self.overwrite = overwrite
result = AzureResult(
connection_string_secret=self.connection_string_secret,
@@ -148,7 +152,7 @@
"Uploading {} to {}".format(self.flows[flow_name], self.container)
)
- client.upload_blob(data)
+ client.upload_blob(data, overwrite=self.overwrite)
return self
|
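A short sketch of how the new flag from the patch above would be used when configuring storage; the container and blob names are placeholders:

```python
from prefect.storage import Azure

# With a fixed blob_name, re-registering the same flow uploads to the same
# key; overwrite=True lets that upload replace the existing blob instead of
# failing because the blob already exists.
storage = Azure(
    container="my-flows",
    blob_name="my-flow.pickle",
    overwrite=True,
)
```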
{"golden_diff": "diff --git a/src/prefect/storage/azure.py b/src/prefect/storage/azure.py\n--- a/src/prefect/storage/azure.py\n+++ b/src/prefect/storage/azure.py\n@@ -35,6 +35,8 @@\n `AZURE_STORAGE_CONNECTION_STRING` will be used\n - blob_name (str, optional): a unique key to use for uploading this Flow to Azure. This\n is only useful when storing a single Flow using this storage object.\n+ - overwrite (bool, optional): if set, an existing blob with the same name will be overwritten.\n+ By default, an error will be thrown if the blob already exists.\n - stored_as_script (bool, optional): boolean for specifying if the flow has been stored\n as a `.py` file. Defaults to `False`\n - **kwargs (Any, optional): any additional `Storage` initialization options\n@@ -45,12 +47,14 @@\n container: str,\n connection_string_secret: str = None,\n blob_name: str = None,\n+ overwrite: bool = False,\n stored_as_script: bool = False,\n **kwargs: Any\n ) -> None:\n self.container = container\n self.connection_string_secret = connection_string_secret\n self.blob_name = blob_name\n+ self.overwrite = overwrite\n \n result = AzureResult(\n connection_string_secret=self.connection_string_secret,\n@@ -148,7 +152,7 @@\n \"Uploading {} to {}\".format(self.flows[flow_name], self.container)\n )\n \n- client.upload_blob(data)\n+ client.upload_blob(data, overwrite=self.overwrite)\n \n return self\n", "issue": "CLI registration inconsistently bumps flow version even when metadata hasn't changed\n## Opened from the [Prefect Public Slack Community](https://prefect.io/slack)\n\n**dkmarsh**: Hi Everyone, I'm having an issue with registering flows. Whenever I register the same flow, it bumps the version, even if the metadata is unchanged. I've tried using the CLI as well as flow.register(). I've also tried using flow.register('project_1', idempotency_key=flow.serialized_hash()) and it still bumps the version. I am using a local agent, local storage and the cloud backend. Is there another configuration that needs to be set so this doesn't happen? Or should this all be done in docker?\n\n**kevin701**: Hey <@U022J2DB0UR>, I don\u2019t think moving to Docker will do this. Will ask the team to be sure but what is your use case that you need the version pinned? We might be able to solve this some other way.\n\n**ranugoldan**: I once tried with cli command `prefect register flow` it doesn't bump version, but with `prefect register` it does\n\n**kevin701**: Actually that\u2019s right <@ULCV623AT>! I know some people on the other side have been asking why their flow is not bumping version :sweat_smile:\n\n**amanda.wee**: <@U01QEJ9PP53> when *should* the flow version be bumped? In my ECS-based setup with a bunch of flows packaged together with a local agent, I build the Docker image each time we make a change, even if it is only to tweak a single task. When the Docker container starts up, it runs a shell script that runs the Python scripts that define and register the flows (i.e., not using the CLI). Like what <@ULCV623AT> observed, with `serialized_hash()` the flow versions kept increasing even if it was just because the ECS task was restarted, thereby starting up the Docker container again.\n\nMy suspicion was that some Docker container metadata was changing each time, e.g., something host-related, so the serialized hash computation changed. 
My solution was to write my own hash function that took into account the dependencies (including Prefect version) and the code content, but I wonder if that is overkill.\n\n**kevin701**: Hey <@U01DJDK6AQ7>, correct me if I\u2019m wrong but I think what you\u2019re referring to is that DockerStorage was not respecting the cache and rebuilding everything. Have you seen this <https://github.com/PrefectHQ/prefect/pull/4584|recent PR> that fixed it?.\n\n**amanda.wee**: No, I'm using S3 storage. The Docker image is for ECS, not for Prefect.\n\n**kevin701**: Oh I see what you mean. Ok I\u2019ll ask the team and get back to you\n\n**dkmarsh**: <@U01QEJ9PP53> It was my understanding that the version should only be bumped when the metadata changes. My use case is to automate the registering of flows once they are added or changed by a developer. I wrote a function that looks in a directory and collects all flows and registers them. I would like to not have the versions bumped on flows that have already been registered and have had no changes.\n\nInterestingly enough, I tried testing again this morning and discovered that in using the CLI or python api to register, the version will get bumped every other time. For example, if I run\n`prefect register --project Project1 -p src/register.py -n Flow1`\nit will register the flow, then if I run the same command a few seconds later, it will skip with the message: \"Skipped (metadata unchanged)\". However, running it a third time, it will register the flow as version 2. So it seems to be skipping the registration as desired every other time.\n\n**znicholasbrown**: Hi <@U022J2DB0UR> - this sounds like an issue with CLI registration; I'm going to open an issue from this thread for the Core team to look into.\n\n**znicholasbrown**: <@ULVA73B9P> open \"CLI registration inconsistently bumps flow version even when metadata hasn't changed\"\n\nOriginal thread can be found [here](https://prefect-community.slack.com/archives/CL09KU1K7/p1622771376434600?thread_ts=1622771376.434600&cid=CL09KU1K7).\n\n\n", "before_files": [{"content": "import os\nfrom typing import TYPE_CHECKING, Any\n\nimport pendulum\nfrom slugify import slugify\n\nimport prefect\nfrom prefect.client import Secret\nfrom prefect.engine.results import AzureResult\nfrom prefect.storage import Storage\nfrom prefect.utilities.storage import (\n extract_flow_from_file,\n flow_to_bytes_pickle,\n flow_from_bytes_pickle,\n)\n\nif TYPE_CHECKING:\n from prefect.core.flow import Flow\n\n\nclass Azure(Storage):\n \"\"\"\n Azure Blob storage class. This class represents the Storage interface for Flows stored as\n bytes in an Azure container.\n\n This storage class optionally takes a `blob_name` which will be the name of the Flow object\n when stored in Azure. If this key is not provided the Flow upload name will take the form\n `slugified-flow-name/slugified-current-timestamp`.\n\n Args:\n - container (str): the name of the Azure Blob Container to store the Flow\n - connection_string_secret (str, optional): the name of a Prefect secret\n that contains an Azure connection string for communicating with\n Blob storage. If not provided the value set in the environment as\n `AZURE_STORAGE_CONNECTION_STRING` will be used\n - blob_name (str, optional): a unique key to use for uploading this Flow to Azure. This\n is only useful when storing a single Flow using this storage object.\n - stored_as_script (bool, optional): boolean for specifying if the flow has been stored\n as a `.py` file. 
Defaults to `False`\n - **kwargs (Any, optional): any additional `Storage` initialization options\n \"\"\"\n\n def __init__(\n self,\n container: str,\n connection_string_secret: str = None,\n blob_name: str = None,\n stored_as_script: bool = False,\n **kwargs: Any\n ) -> None:\n self.container = container\n self.connection_string_secret = connection_string_secret\n self.blob_name = blob_name\n\n result = AzureResult(\n connection_string_secret=self.connection_string_secret,\n container=container,\n )\n super().__init__(result=result, stored_as_script=stored_as_script, **kwargs)\n\n def get_flow(self, flow_name: str) -> \"Flow\":\n \"\"\"\n Given a flow name within this Storage object, load and return the Flow.\n\n Args:\n - flow_name (str): the name of the flow to return.\n\n Returns:\n - Flow: the requested flow\n \"\"\"\n if flow_name not in self.flows:\n raise ValueError(\"Flow is not contained in this Storage\")\n flow_location = self.flows[flow_name]\n try:\n client = self._azure_block_blob_service.get_blob_client(\n container=self.container, blob=flow_location\n )\n\n self.logger.info(\n \"Downloading {} from {}\".format(flow_location, self.container)\n )\n\n content = client.download_blob().content_as_bytes()\n except Exception as err:\n self.logger.error(\"Error downloading Flow from Azure: {}\".format(err))\n raise\n if self.stored_as_script:\n return extract_flow_from_file(file_contents=content, flow_name=flow_name) # type: ignore\n\n return flow_from_bytes_pickle(content)\n\n def add_flow(self, flow: \"Flow\") -> str:\n \"\"\"\n Method for storing a new flow as bytes in an Azure Blob container.\n\n Args:\n - flow (Flow): a Prefect Flow to add\n\n Returns:\n - str: the key of the newly added Flow in the container\n\n Raises:\n - ValueError: if a flow with the same name is already contained in this storage\n \"\"\"\n\n if flow.name in self:\n raise ValueError(\n 'Name conflict: Flow with the name \"{}\" is already present in this storage.'.format(\n flow.name\n )\n )\n\n # create key for Flow that uniquely identifies Flow object in Azure\n blob_name = self.blob_name or \"{}/{}\".format(\n slugify(flow.name), slugify(pendulum.now(\"utc\").isoformat())\n )\n\n self.flows[flow.name] = blob_name\n self._flows[flow.name] = flow\n return blob_name\n\n def build(self) -> \"Storage\":\n \"\"\"\n Build the Azure storage object by uploading Flows to an Azure Blob container.\n This will upload all of the flows found in `storage.flows`.\n\n Returns:\n - Storage: an Azure object that contains information about how and where\n each flow is stored\n \"\"\"\n self.run_basic_healthchecks()\n\n if self.stored_as_script:\n if not self.blob_name:\n raise ValueError(\n \"A `blob_name` must be provided to show where flow `.py` file is stored in Azure.\"\n )\n return self\n\n for flow_name, flow in self._flows.items():\n data = flow_to_bytes_pickle(flow)\n\n client = self._azure_block_blob_service.get_blob_client(\n container=self.container, blob=self.flows[flow_name]\n )\n\n self.logger.info(\n \"Uploading {} to {}\".format(self.flows[flow_name], self.container)\n )\n\n client.upload_blob(data)\n\n return self\n\n @property\n def connection_string(self): # type: ignore\n if self.connection_string_secret is not None:\n return Secret(self.connection_string_secret).get()\n conn_string = prefect.context.get(\"secrets\", {}).get(\n \"AZURE_STORAGE_CONNECTION_STRING\"\n ) or os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n if conn_string is None:\n raise Exception(\n \"Azure connection string not provided. 
Set `AZURE_STORAGE_CONNECTION_STRING` environment\"\n \" variable or save connection string as Prefect secret.\"\n )\n return conn_string\n\n @property\n def _azure_block_blob_service(self): # type: ignore\n import azure.storage.blob\n\n return azure.storage.blob.BlobServiceClient.from_connection_string(\n conn_str=self.connection_string\n )\n", "path": "src/prefect/storage/azure.py"}], "after_files": [{"content": "import os\nfrom typing import TYPE_CHECKING, Any\n\nimport pendulum\nfrom slugify import slugify\n\nimport prefect\nfrom prefect.client import Secret\nfrom prefect.engine.results import AzureResult\nfrom prefect.storage import Storage\nfrom prefect.utilities.storage import (\n extract_flow_from_file,\n flow_to_bytes_pickle,\n flow_from_bytes_pickle,\n)\n\nif TYPE_CHECKING:\n from prefect.core.flow import Flow\n\n\nclass Azure(Storage):\n \"\"\"\n Azure Blob storage class. This class represents the Storage interface for Flows stored as\n bytes in an Azure container.\n\n This storage class optionally takes a `blob_name` which will be the name of the Flow object\n when stored in Azure. If this key is not provided the Flow upload name will take the form\n `slugified-flow-name/slugified-current-timestamp`.\n\n Args:\n - container (str): the name of the Azure Blob Container to store the Flow\n - connection_string_secret (str, optional): the name of a Prefect secret\n that contains an Azure connection string for communicating with\n Blob storage. If not provided the value set in the environment as\n `AZURE_STORAGE_CONNECTION_STRING` will be used\n - blob_name (str, optional): a unique key to use for uploading this Flow to Azure. This\n is only useful when storing a single Flow using this storage object.\n - overwrite (bool, optional): if set, an existing blob with the same name will be overwritten.\n By default, an error will be thrown if the blob already exists.\n - stored_as_script (bool, optional): boolean for specifying if the flow has been stored\n as a `.py` file. 
Defaults to `False`\n - **kwargs (Any, optional): any additional `Storage` initialization options\n \"\"\"\n\n def __init__(\n self,\n container: str,\n connection_string_secret: str = None,\n blob_name: str = None,\n overwrite: bool = False,\n stored_as_script: bool = False,\n **kwargs: Any\n ) -> None:\n self.container = container\n self.connection_string_secret = connection_string_secret\n self.blob_name = blob_name\n self.overwrite = overwrite\n\n result = AzureResult(\n connection_string_secret=self.connection_string_secret,\n container=container,\n )\n super().__init__(result=result, stored_as_script=stored_as_script, **kwargs)\n\n def get_flow(self, flow_name: str) -> \"Flow\":\n \"\"\"\n Given a flow name within this Storage object, load and return the Flow.\n\n Args:\n - flow_name (str): the name of the flow to return.\n\n Returns:\n - Flow: the requested flow\n \"\"\"\n if flow_name not in self.flows:\n raise ValueError(\"Flow is not contained in this Storage\")\n flow_location = self.flows[flow_name]\n try:\n client = self._azure_block_blob_service.get_blob_client(\n container=self.container, blob=flow_location\n )\n\n self.logger.info(\n \"Downloading {} from {}\".format(flow_location, self.container)\n )\n\n content = client.download_blob().content_as_bytes()\n except Exception as err:\n self.logger.error(\"Error downloading Flow from Azure: {}\".format(err))\n raise\n if self.stored_as_script:\n return extract_flow_from_file(file_contents=content, flow_name=flow_name) # type: ignore\n\n return flow_from_bytes_pickle(content)\n\n def add_flow(self, flow: \"Flow\") -> str:\n \"\"\"\n Method for storing a new flow as bytes in an Azure Blob container.\n\n Args:\n - flow (Flow): a Prefect Flow to add\n\n Returns:\n - str: the key of the newly added Flow in the container\n\n Raises:\n - ValueError: if a flow with the same name is already contained in this storage\n \"\"\"\n\n if flow.name in self:\n raise ValueError(\n 'Name conflict: Flow with the name \"{}\" is already present in this storage.'.format(\n flow.name\n )\n )\n\n # create key for Flow that uniquely identifies Flow object in Azure\n blob_name = self.blob_name or \"{}/{}\".format(\n slugify(flow.name), slugify(pendulum.now(\"utc\").isoformat())\n )\n\n self.flows[flow.name] = blob_name\n self._flows[flow.name] = flow\n return blob_name\n\n def build(self) -> \"Storage\":\n \"\"\"\n Build the Azure storage object by uploading Flows to an Azure Blob container.\n This will upload all of the flows found in `storage.flows`.\n\n Returns:\n - Storage: an Azure object that contains information about how and where\n each flow is stored\n \"\"\"\n self.run_basic_healthchecks()\n\n if self.stored_as_script:\n if not self.blob_name:\n raise ValueError(\n \"A `blob_name` must be provided to show where flow `.py` file is stored in Azure.\"\n )\n return self\n\n for flow_name, flow in self._flows.items():\n data = flow_to_bytes_pickle(flow)\n\n client = self._azure_block_blob_service.get_blob_client(\n container=self.container, blob=self.flows[flow_name]\n )\n\n self.logger.info(\n \"Uploading {} to {}\".format(self.flows[flow_name], self.container)\n )\n\n client.upload_blob(data, overwrite=self.overwrite)\n\n return self\n\n @property\n def connection_string(self): # type: ignore\n if self.connection_string_secret is not None:\n return Secret(self.connection_string_secret).get()\n conn_string = prefect.context.get(\"secrets\", {}).get(\n \"AZURE_STORAGE_CONNECTION_STRING\"\n ) or os.getenv(\"AZURE_STORAGE_CONNECTION_STRING\")\n if 
conn_string is None:\n raise Exception(\n \"Azure connection string not provided. Set `AZURE_STORAGE_CONNECTION_STRING` environment\"\n \" variable or save connection string as Prefect secret.\"\n )\n return conn_string\n\n @property\n def _azure_block_blob_service(self): # type: ignore\n import azure.storage.blob\n\n return azure.storage.blob.BlobServiceClient.from_connection_string(\n conn_str=self.connection_string\n )\n", "path": "src/prefect/storage/azure.py"}]}
| 2,956 | 364 |
gh_patches_debug_1857 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-4676 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve Intro and Getting Started documentation
I think the [introduction](https://docs.readthedocs.io/en/latest/) and [getting started guide](https://docs.readthedocs.io/en/latest/getting_started.html) could use a few improvements to make it easier for brand new users who may not already know about Sphinx/MkDocs/Markdown/RestructuredText and are just looking for a guide on how to write some docs.
I also think our introduction could stand some improvements to point users in the right direction. We have a lot of docs, but a few layout and explanation improvements will help users find the right section for them.
Here are some specific goals and improvements:
- Make it easier to start a brand new docs project
* Have a getting started guide for Sphinx
* Have a getting started guide for Sphinx with commonmark
* Have a getting started guide for MkDocs
* Explain the why between the above technologies
- Improve the intro paragraphs ("Read the Docs hosts documentation for...") on the index page to explain RTD's value proposition and why somebody should choose Read the Docs.
- Full sentence/paragraph descriptions on different sections (eg. User documentation) rather than just a big toctree.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 from __future__ import division, print_function, unicode_literals
4
5 import os
6 import sys
7
8 import sphinx_rtd_theme
9 from recommonmark.parser import CommonMarkParser
10
11 sys.path.insert(0, os.path.abspath('..'))
12 sys.path.append(os.path.dirname(__file__))
13 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "readthedocs.settings.dev")
14
15 from django.conf import settings
16 from django.utils import timezone
17
18 import django
19 django.setup()
20
21
22 sys.path.append(os.path.abspath('_ext'))
23 extensions = [
24 'sphinx.ext.autosectionlabel',
25 'sphinx.ext.autodoc',
26 'sphinx.ext.intersphinx',
27 'sphinxcontrib.httpdomain',
28 'djangodocs',
29 'doc_extensions',
30 ]
31 templates_path = ['_templates']
32
33 source_suffix = ['.rst', '.md']
34 source_parsers = {
35 '.md': CommonMarkParser,
36 }
37
38 master_doc = 'index'
39 project = u'Read the Docs'
40 copyright = '2010-{}, Read the Docs, Inc & contributors'.format(
41 timezone.now().year
42 )
43 version = '1.0'
44 release = '1.0'
45 exclude_patterns = ['_build']
46 default_role = 'obj'
47 intersphinx_mapping = {
48 'python': ('http://python.readthedocs.io/en/latest/', None),
49 'django': ('http://django.readthedocs.io/en/1.9.x/', None),
50 'sphinx': ('http://sphinx.readthedocs.io/en/latest/', None),
51 }
52 htmlhelp_basename = 'ReadTheDocsdoc'
53 latex_documents = [
54 ('index', 'ReadTheDocs.tex', u'Read the Docs Documentation',
55 u'Eric Holscher, Charlie Leifer, Bobby Grace', 'manual'),
56 ]
57 man_pages = [
58 ('index', 'read-the-docs', u'Read the Docs Documentation',
59 [u'Eric Holscher, Charlie Leifer, Bobby Grace'], 1)
60 ]
61
62 exclude_patterns = [
63 # 'api' # needed for ``make gettext`` to not die.
64 ]
65
66 language = 'en'
67
68 locale_dirs = [
69 'locale/',
70 ]
71 gettext_compact = False
72
73 html_theme = 'sphinx_rtd_theme'
74 # html_static_path = ['_static']
75 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
76 html_logo = 'img/logo.svg'
77 html_theme_options = {
78 'logo_only': True,
79 'display_version': False,
80 }
81
82 # Activate autosectionlabel plugin
83 autosectionlabel_prefix_document = True
84
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -71,7 +71,7 @@
gettext_compact = False
html_theme = 'sphinx_rtd_theme'
-# html_static_path = ['_static']
+html_static_path = ['_static']
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
html_logo = 'img/logo.svg'
html_theme_options = {
|
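For context, a minimal sketch of what the re-enabled setting does in a Sphinx `conf.py`: files under `docs/_static` are copied into the built site and can then be referenced, for example through `html_css_files`. The CSS file name is an assumed example:

```python
# conf.py (excerpt)
html_static_path = ['_static']    # copy docs/_static/* into the build output
html_css_files = ['custom.css']   # e.g. docs/_static/custom.css, if present
```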
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -71,7 +71,7 @@\n gettext_compact = False\n \n html_theme = 'sphinx_rtd_theme'\n-# html_static_path = ['_static']\n+html_static_path = ['_static']\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n html_logo = 'img/logo.svg'\n html_theme_options = {\n", "issue": "Improve Intro and Getting Started documentation\nI think the [introduction](https://docs.readthedocs.io/en/latest/) and [getting started guide](https://docs.readthedocs.io/en/latest/getting_started.html) could use a few improvements to make it easier for brand new users who may not already know about Sphinx/MkDocs/Markdown/RestructuredText and are just looking for a guide on how to write some docs.\r\n\r\nI also think our introduction could stand some improvements to point users in the right direction. We have a lot of docs, but a few layout and explanation improvements will help users find the right section for them.\r\n\r\nHere are some specific goals and improvements:\r\n\r\n- Make it easier to start a brand new docs project\r\n * Have a getting started guide for Sphinx\r\n * Have a getting started guide for Sphinx with commonmark\r\n * Have a getting started guide for MkDocs\r\n * Explain the why between the above technologies\r\n- Improve the intro paragraphs (\"Read the Docs hosts documentation for...\") on the index page to explain RTD's value proposition and why somebody should choose Read the Docs.\r\n- Full sentence/paragraph descriptions on different sections (eg. User documentation) rather than just a big toctree.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom __future__ import division, print_function, unicode_literals\n\nimport os\nimport sys\n\nimport sphinx_rtd_theme\nfrom recommonmark.parser import CommonMarkParser\n\nsys.path.insert(0, os.path.abspath('..'))\nsys.path.append(os.path.dirname(__file__))\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"readthedocs.settings.dev\")\n\nfrom django.conf import settings\nfrom django.utils import timezone\n\nimport django\ndjango.setup()\n\n\nsys.path.append(os.path.abspath('_ext'))\nextensions = [\n 'sphinx.ext.autosectionlabel',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinxcontrib.httpdomain',\n 'djangodocs',\n 'doc_extensions',\n]\ntemplates_path = ['_templates']\n\nsource_suffix = ['.rst', '.md']\nsource_parsers = {\n '.md': CommonMarkParser,\n}\n\nmaster_doc = 'index'\nproject = u'Read the Docs'\ncopyright = '2010-{}, Read the Docs, Inc & contributors'.format(\n timezone.now().year\n)\nversion = '1.0'\nrelease = '1.0'\nexclude_patterns = ['_build']\ndefault_role = 'obj'\nintersphinx_mapping = {\n 'python': ('http://python.readthedocs.io/en/latest/', None),\n 'django': ('http://django.readthedocs.io/en/1.9.x/', None),\n 'sphinx': ('http://sphinx.readthedocs.io/en/latest/', None),\n}\nhtmlhelp_basename = 'ReadTheDocsdoc'\nlatex_documents = [\n ('index', 'ReadTheDocs.tex', u'Read the Docs Documentation',\n u'Eric Holscher, Charlie Leifer, Bobby Grace', 'manual'),\n]\nman_pages = [\n ('index', 'read-the-docs', u'Read the Docs Documentation',\n [u'Eric Holscher, Charlie Leifer, Bobby Grace'], 1)\n]\n\nexclude_patterns = [\n # 'api' # needed for ``make gettext`` to not die.\n]\n\nlanguage = 'en'\n\nlocale_dirs = [\n 'locale/',\n]\ngettext_compact = False\n\nhtml_theme = 'sphinx_rtd_theme'\n# html_static_path = ['_static']\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\nhtml_logo = 
'img/logo.svg'\nhtml_theme_options = {\n 'logo_only': True,\n 'display_version': False,\n}\n\n# Activate autosectionlabel plugin\nautosectionlabel_prefix_document = True\n", "path": "docs/conf.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom __future__ import division, print_function, unicode_literals\n\nimport os\nimport sys\n\nimport sphinx_rtd_theme\nfrom recommonmark.parser import CommonMarkParser\n\nsys.path.insert(0, os.path.abspath('..'))\nsys.path.append(os.path.dirname(__file__))\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"readthedocs.settings.dev\")\n\nfrom django.conf import settings\nfrom django.utils import timezone\n\nimport django\ndjango.setup()\n\n\nsys.path.append(os.path.abspath('_ext'))\nextensions = [\n 'sphinx.ext.autosectionlabel',\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinxcontrib.httpdomain',\n 'djangodocs',\n 'doc_extensions',\n]\ntemplates_path = ['_templates']\n\nsource_suffix = ['.rst', '.md']\nsource_parsers = {\n '.md': CommonMarkParser,\n}\n\nmaster_doc = 'index'\nproject = u'Read the Docs'\ncopyright = '2010-{}, Read the Docs, Inc & contributors'.format(\n timezone.now().year\n)\nversion = '1.0'\nrelease = '1.0'\nexclude_patterns = ['_build']\ndefault_role = 'obj'\nintersphinx_mapping = {\n 'python': ('http://python.readthedocs.io/en/latest/', None),\n 'django': ('http://django.readthedocs.io/en/1.9.x/', None),\n 'sphinx': ('http://sphinx.readthedocs.io/en/latest/', None),\n}\nhtmlhelp_basename = 'ReadTheDocsdoc'\nlatex_documents = [\n ('index', 'ReadTheDocs.tex', u'Read the Docs Documentation',\n u'Eric Holscher, Charlie Leifer, Bobby Grace', 'manual'),\n]\nman_pages = [\n ('index', 'read-the-docs', u'Read the Docs Documentation',\n [u'Eric Holscher, Charlie Leifer, Bobby Grace'], 1)\n]\n\nexclude_patterns = [\n # 'api' # needed for ``make gettext`` to not die.\n]\n\nlanguage = 'en'\n\nlocale_dirs = [\n 'locale/',\n]\ngettext_compact = False\n\nhtml_theme = 'sphinx_rtd_theme'\nhtml_static_path = ['_static']\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\nhtml_logo = 'img/logo.svg'\nhtml_theme_options = {\n 'logo_only': True,\n 'display_version': False,\n}\n\n# Activate autosectionlabel plugin\nautosectionlabel_prefix_document = True\n", "path": "docs/conf.py"}]}
| 1,214 | 99 |
gh_patches_debug_7180 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2661 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
False positive E0001 for CodeUri in AWS::Serverless::Function
### CloudFormation Lint Version
cfn-lint 0.76.1
### What operating system are you using?
Mac
### Describe the bug
cfn-lint gives a false-positive error when CodeUri is an S3 URI.
```E0001 Error transforming template: Resource with id [rCustomAuthorizerFunction] is invalid. 'CodeUri' requires Bucket and Key properties to be specified.```
Not only does this template deploy via CloudFormation, the warning from cfn-lint contradicts the [public documentation for this resource type](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html#Examples)
### Expected behavior
cfn-lint should not give a false positive that contradicts public documentation
### Reproduction template
```yaml
AWSTemplateFormatVersion: "2010-09-09"
Transform:
- 'AWS::LanguageExtensions'
- 'AWS::Serverless-2016-10-31'
Description: cfn-lint test case
Mappings:
RegionMap:
us-east-1:
InsightsArn: arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:21
us-west-2:
InsightsArn: arn:aws:lambda:us-west-2:580247275435:layer:LambdaInsightsExtension:21
Resources:
rCustomAuthorizerFunction:
Type: AWS::Serverless::Function
Properties:
FunctionName: !Sub "my-custom-authorizer-${AWS::Region}"
Description: Custom authorizer for APIGW
Runtime: nodejs16.x
Handler: index.handler
CodeUri: !Sub "s3://custom-authz-${AWS::AccountId}-${AWS::Region}/function-v1.0.zip"
Policies:
- arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy
- arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Layers:
- !FindInMap [RegionMap, !Ref AWS::Region, "InsightsArn"]
```
--- END ISSUE ---
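Reading the reproduction template together with the transform code below, the shape of the `CodeUri` value appears to be what matters: written with `!Sub`, the property reaches the SAM translator as a mapping rather than a plain `s3://...` string, and a mapping is expected to carry explicit `Bucket` and `Key` entries. A hypothetical sketch of the two shapes as they look after the template is parsed (not the project's actual fix):

```python
# Plain string form: parsed as an S3 URI and accepted.
string_form = {"CodeUri": "s3://my-bucket/function-v1.0.zip"}

# Fn::Sub form: at transform time this is still a dict, so the translator
# looks for Bucket/Key keys instead of parsing an S3 URI and reports the
# "requires Bucket and Key properties" error quoted above.
sub_form = {
    "CodeUri": {
        "Fn::Sub": "s3://custom-authz-${AWS::AccountId}-${AWS::Region}/function-v1.0.zip"
    }
}
```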
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cfnlint/transform.py`
Content:
```
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import logging
6 import os
7
8 import samtranslator
9 from samtranslator.parser import parser
10 from samtranslator.public.exceptions import InvalidDocumentException
11 from samtranslator.sdk import resource
12 from samtranslator.translator.translator import Translator
13
14 from cfnlint.data import Serverless
15 from cfnlint.helpers import convert_dict, format_json_string, load_resource
16 from cfnlint.rules import Match, TransformError
17
18 LOGGER = logging.getLogger("cfnlint")
19
20 samtranslator_logger = logging.getLogger("samtranslator")
21 samtranslator_logger.setLevel(logging.CRITICAL)
22
23
24 # Override SAM validation as cfn-lint does those
25 # checks already
26 # pylint: disable=unused-argument
27 def valid_override(self):
28 return resource.SamResourceType.has_value(self.type)
29
30
31 # pylint: disable=redefined-outer-name
32 resource.SamResource.valid = valid_override
33
34
35 class Transform:
36 """
37 Application Serverless Module tranform Wrapper.
38 Based on code from AWS SAM CLI:
39 https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/commands/validate/lib/sam_template_validator.py
40 """
41
42 def __init__(self, filename, template, region):
43 """
44 Initialize Transform class
45 """
46 self._filename = filename
47 self._template = template
48 self._region = region
49 self._parameters = {}
50
51 self._managed_policy_map = self.load_managed_policies()
52 self._sam_parser = parser.Parser()
53
54 def template(self):
55 """Get the template"""
56 return self._template
57
58 def load_managed_policies(self):
59 """
60 Load the ManagedPolicies locally, based on the AWS-CLI:
61 https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/lib/samlib/default_managed_policies.json
62 """
63 return load_resource(Serverless, "ManagedPolicies.json")
64
65 def _replace_local_codeuri(self):
66 """
67 Replaces the CodeUri in AWS::Serverless::Function and DefinitionUri in
68 AWS::Serverless::Api to a fake S3 Uri. This is to support running the
69 SAM Translator with valid values for these fields. If this is not done,
70 the template is invalid in the eyes of SAM Translator (the translator
71 does not support local paths)
72 """
73
74 all_resources = self._template.get("Resources", {})
75
76 template_globals = self._template.get("Globals", {})
77 auto_publish_alias = template_globals.get("Function", {}).get(
78 "AutoPublishAlias"
79 )
80 if isinstance(auto_publish_alias, dict):
81 if len(auto_publish_alias) == 1:
82 for k, v in auto_publish_alias.items():
83 if k == "Ref":
84 if v in self._template.get("Parameters"):
85 self._parameters[v] = "Alias"
86
87 for _, resource in all_resources.items():
88 resource_type = resource.get("Type")
89 resource_dict = resource.get("Properties")
90
91 if resource_type == "AWS::Serverless::Function":
92 if resource_dict.get("PackageType") == "Image":
93 Transform._update_to_s3_uri("ImageUri", resource_dict)
94 else:
95 Transform._update_to_s3_uri("CodeUri", resource_dict)
96 auto_publish_alias = resource_dict.get("AutoPublishAlias")
97 if isinstance(auto_publish_alias, dict):
98 if len(auto_publish_alias) == 1:
99 for k, v in auto_publish_alias.items():
100 if k == "Ref":
101 if v in self._template.get("Parameters"):
102 self._parameters[v] = "Alias"
103 if resource_type in ["AWS::Serverless::LayerVersion"]:
104 if resource_dict.get("ContentUri"):
105 Transform._update_to_s3_uri("ContentUri", resource_dict)
106 if resource_type == "AWS::Serverless::Application":
107 if resource_dict.get("Location"):
108 if isinstance(resource_dict.get("Location"), dict):
109 resource_dict["Location"] = ""
110 Transform._update_to_s3_uri("Location", resource_dict)
111 if resource_type == "AWS::Serverless::Api":
112 if (
113 "DefinitionBody" not in resource_dict
114 and "Auth" not in resource_dict
115 and "Cors" not in resource_dict
116 and "DisableExecuteApiEndpoint" not in resource_dict
117 ):
118 Transform._update_to_s3_uri("DefinitionUri", resource_dict)
119 else:
120 resource_dict["DefinitionBody"] = ""
121 if resource_type == "AWS::Serverless::StateMachine" and resource_dict.get(
122 "DefinitionUri"
123 ):
124 Transform._update_to_s3_uri("DefinitionUri", resource_dict)
125
126 def transform_template(self):
127 """
128 Transform the Template using the Serverless Application Model.
129 """
130 matches = []
131
132 try:
133 # Output the SAM Translator version in debug mode
134 LOGGER.info("SAM Translator: %s", samtranslator.__version__)
135
136 sam_translator = Translator(
137 managed_policy_map=self._managed_policy_map, sam_parser=self._sam_parser
138 )
139
140 self._replace_local_codeuri()
141
142 # Tell SAM to use the region we're linting in, this has to be
143 # controlled using the default AWS mechanisms, see also:
144 # https://github.com/awslabs/serverless-application-model/blob/master/samtranslator/translator/arn_generator.py
145 LOGGER.info("Setting AWS_DEFAULT_REGION to %s", self._region)
146 os.environ["AWS_DEFAULT_REGION"] = self._region
147
148 self._template = convert_dict(
149 sam_translator.translate(
150 sam_template=self._template, parameter_values=self._parameters
151 )
152 )
153
154 LOGGER.info(
155 "Transformed template: \n%s", format_json_string(self._template)
156 )
157 except InvalidDocumentException as e:
158 message = "Error transforming template: {0}"
159 for cause in e.causes:
160 matches.append(
161 Match(
162 1,
163 1,
164 1,
165 1,
166 self._filename,
167 TransformError(),
168 message.format(cause.message),
169 )
170 )
171 except Exception as e: # pylint: disable=W0703
172 LOGGER.debug("Error transforming template: %s", str(e))
173 LOGGER.debug("Stack trace: %s", e, exc_info=True)
174 message = "Error transforming template: {0}"
175 matches.append(
176 Match(
177 1, 1, 1, 1, self._filename, TransformError(), message.format(str(e))
178 )
179 )
180
181 return matches
182
183 @staticmethod
184 def is_s3_uri(uri):
185 """
186 Checks the uri and determines if it is a valid S3 Uri
187 Parameters
188 ----------
189 uri str, required
190 Uri to check
191 Returns
192 -------
193 bool
194 Returns True if the uri given is an S3 uri, otherwise False
195 """
196 return isinstance(uri, str) and uri.startswith("s3://")
197
198 @staticmethod
199 def _update_to_s3_uri(
200 property_key, resource_property_dict, s3_uri_value="s3://bucket/value"
201 ):
202 """
203 Updates the 'property_key' in the 'resource_property_dict' to the
204 value of 's3_uri_value'
205 Note: The function will mutate the resource_property_dict that is pass
206 in Parameters
207 ----------
208 property_key str, required
209 Key in the resource_property_dict
210 resource_property_dict dict, required
211 Property dictionary of a Resource in the template to replace
212 s3_uri_value str, optional
213 Value to update the value of the property_key to
214 """
215 uri_property = resource_property_dict.get(property_key, ".")
216
217 # ignore if dict or already an S3 Uri
218 if isinstance(uri_property, dict):
219 if len(uri_property) == 1:
220 for k in uri_property.keys():
221 if k == "Ref":
222 resource_property_dict[property_key] = s3_uri_value
223 return
224 if Transform.is_s3_uri(uri_property):
225 return
226
227 resource_property_dict[property_key] = s3_uri_value
228
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cfnlint/transform.py b/src/cfnlint/transform.py
--- a/src/cfnlint/transform.py
+++ b/src/cfnlint/transform.py
@@ -218,7 +218,7 @@
if isinstance(uri_property, dict):
if len(uri_property) == 1:
for k in uri_property.keys():
- if k == "Ref":
+ if k in ["Ref", "Fn::Sub"]:
resource_property_dict[property_key] = s3_uri_value
return
if Transform.is_s3_uri(uri_property):
|
{"golden_diff": "diff --git a/src/cfnlint/transform.py b/src/cfnlint/transform.py\n--- a/src/cfnlint/transform.py\n+++ b/src/cfnlint/transform.py\n@@ -218,7 +218,7 @@\n if isinstance(uri_property, dict):\n if len(uri_property) == 1:\n for k in uri_property.keys():\n- if k == \"Ref\":\n+ if k in [\"Ref\", \"Fn::Sub\"]:\n resource_property_dict[property_key] = s3_uri_value\n return\n if Transform.is_s3_uri(uri_property):\n", "issue": "False positive E0001 for CodeUri in AWS::Serverless::Function\n### CloudFormation Lint Version\n\ncfn-lint 0.76.1\n\n### What operating system are you using?\n\nMac\n\n### Describe the bug\n\ncfn-lint gives a false-positive error when CodeUri is an S3 URI.\r\n\r\n```E0001 Error transforming template: Resource with id [rCustomAuthorizerFunction] is invalid. 'CodeUri' requires Bucket and Key properties to be specified.```\r\n\r\nNot only does this template deploy via CloudFormation, the warning from cfn-lint contradicts the [public documentation for this resource type](https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-resource-function.html#Examples)\n\n### Expected behavior\n\ncfn-lint should not give a false positive that contradicts public documentation\n\n### Reproduction template\n\n```yaml\r\nAWSTemplateFormatVersion: \"2010-09-09\"\r\nTransform:\r\n - 'AWS::LanguageExtensions'\r\n - 'AWS::Serverless-2016-10-31'\r\n\r\nDescription: cfn-lint test case\r\n\r\nMappings:\r\n RegionMap:\r\n us-east-1:\r\n InsightsArn: arn:aws:lambda:us-east-1:580247275435:layer:LambdaInsightsExtension:21\r\n us-west-2:\r\n InsightsArn: arn:aws:lambda:us-west-2:580247275435:layer:LambdaInsightsExtension:21\r\n\r\nResources:\r\n rCustomAuthorizerFunction:\r\n Type: AWS::Serverless::Function\r\n Properties:\r\n FunctionName: !Sub \"my-custom-authorizer-${AWS::Region}\"\r\n Description: Custom authorizer for APIGW\r\n Runtime: nodejs16.x\r\n Handler: index.handler\r\n CodeUri: !Sub \"s3://custom-authz-${AWS::AccountId}-${AWS::Region}/function-v1.0.zip\"\r\n Policies:\r\n - arn:aws:iam::aws:policy/CloudWatchLambdaInsightsExecutionRolePolicy\r\n - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole\r\n Layers:\r\n - !FindInMap [RegionMap, !Ref AWS::Region, \"InsightsArn\"]\r\n\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport logging\nimport os\n\nimport samtranslator\nfrom samtranslator.parser import parser\nfrom samtranslator.public.exceptions import InvalidDocumentException\nfrom samtranslator.sdk import resource\nfrom samtranslator.translator.translator import Translator\n\nfrom cfnlint.data import Serverless\nfrom cfnlint.helpers import convert_dict, format_json_string, load_resource\nfrom cfnlint.rules import Match, TransformError\n\nLOGGER = logging.getLogger(\"cfnlint\")\n\nsamtranslator_logger = logging.getLogger(\"samtranslator\")\nsamtranslator_logger.setLevel(logging.CRITICAL)\n\n\n# Override SAM validation as cfn-lint does thoese\n# checks already\n# pylint: disable=unused-argument\ndef valid_override(self):\n return resource.SamResourceType.has_value(self.type)\n\n\n# pylint: disable=redefined-outer-name\nresource.SamResource.valid = valid_override\n\n\nclass Transform:\n \"\"\"\n Application Serverless Module tranform Wrapper.\n Based on code from AWS SAM CLI:\n https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/commands/validate/lib/sam_template_validator.py\n \"\"\"\n\n def __init__(self, filename, template, region):\n \"\"\"\n Initialize Transform class\n \"\"\"\n self._filename = filename\n self._template = template\n self._region = region\n self._parameters = {}\n\n self._managed_policy_map = self.load_managed_policies()\n self._sam_parser = parser.Parser()\n\n def template(self):\n \"\"\"Get the template\"\"\"\n return self._template\n\n def load_managed_policies(self):\n \"\"\"\n Load the ManagedPolicies locally, based on the AWS-CLI:\n https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/lib/samlib/default_managed_policies.json\n \"\"\"\n return load_resource(Serverless, \"ManagedPolicies.json\")\n\n def _replace_local_codeuri(self):\n \"\"\"\n Replaces the CodeUri in AWS::Serverless::Function and DefinitionUri in\n AWS::Serverless::Api to a fake S3 Uri. This is to support running the\n SAM Translator with valid values for these fields. 
If this is not done,\n the template is invalid in the eyes of SAM Translator (the translator\n does not support local paths)\n \"\"\"\n\n all_resources = self._template.get(\"Resources\", {})\n\n template_globals = self._template.get(\"Globals\", {})\n auto_publish_alias = template_globals.get(\"Function\", {}).get(\n \"AutoPublishAlias\"\n )\n if isinstance(auto_publish_alias, dict):\n if len(auto_publish_alias) == 1:\n for k, v in auto_publish_alias.items():\n if k == \"Ref\":\n if v in self._template.get(\"Parameters\"):\n self._parameters[v] = \"Alias\"\n\n for _, resource in all_resources.items():\n resource_type = resource.get(\"Type\")\n resource_dict = resource.get(\"Properties\")\n\n if resource_type == \"AWS::Serverless::Function\":\n if resource_dict.get(\"PackageType\") == \"Image\":\n Transform._update_to_s3_uri(\"ImageUri\", resource_dict)\n else:\n Transform._update_to_s3_uri(\"CodeUri\", resource_dict)\n auto_publish_alias = resource_dict.get(\"AutoPublishAlias\")\n if isinstance(auto_publish_alias, dict):\n if len(auto_publish_alias) == 1:\n for k, v in auto_publish_alias.items():\n if k == \"Ref\":\n if v in self._template.get(\"Parameters\"):\n self._parameters[v] = \"Alias\"\n if resource_type in [\"AWS::Serverless::LayerVersion\"]:\n if resource_dict.get(\"ContentUri\"):\n Transform._update_to_s3_uri(\"ContentUri\", resource_dict)\n if resource_type == \"AWS::Serverless::Application\":\n if resource_dict.get(\"Location\"):\n if isinstance(resource_dict.get(\"Location\"), dict):\n resource_dict[\"Location\"] = \"\"\n Transform._update_to_s3_uri(\"Location\", resource_dict)\n if resource_type == \"AWS::Serverless::Api\":\n if (\n \"DefinitionBody\" not in resource_dict\n and \"Auth\" not in resource_dict\n and \"Cors\" not in resource_dict\n and \"DisableExecuteApiEndpoint\" not in resource_dict\n ):\n Transform._update_to_s3_uri(\"DefinitionUri\", resource_dict)\n else:\n resource_dict[\"DefinitionBody\"] = \"\"\n if resource_type == \"AWS::Serverless::StateMachine\" and resource_dict.get(\n \"DefinitionUri\"\n ):\n Transform._update_to_s3_uri(\"DefinitionUri\", resource_dict)\n\n def transform_template(self):\n \"\"\"\n Transform the Template using the Serverless Application Model.\n \"\"\"\n matches = []\n\n try:\n # Output the SAM Translator version in debug mode\n LOGGER.info(\"SAM Translator: %s\", samtranslator.__version__)\n\n sam_translator = Translator(\n managed_policy_map=self._managed_policy_map, sam_parser=self._sam_parser\n )\n\n self._replace_local_codeuri()\n\n # Tell SAM to use the region we're linting in, this has to be\n # controlled using the default AWS mechanisms, see also:\n # https://github.com/awslabs/serverless-application-model/blob/master/samtranslator/translator/arn_generator.py\n LOGGER.info(\"Setting AWS_DEFAULT_REGION to %s\", self._region)\n os.environ[\"AWS_DEFAULT_REGION\"] = self._region\n\n self._template = convert_dict(\n sam_translator.translate(\n sam_template=self._template, parameter_values=self._parameters\n )\n )\n\n LOGGER.info(\n \"Transformed template: \\n%s\", format_json_string(self._template)\n )\n except InvalidDocumentException as e:\n message = \"Error transforming template: {0}\"\n for cause in e.causes:\n matches.append(\n Match(\n 1,\n 1,\n 1,\n 1,\n self._filename,\n TransformError(),\n message.format(cause.message),\n )\n )\n except Exception as e: # pylint: disable=W0703\n LOGGER.debug(\"Error transforming template: %s\", str(e))\n LOGGER.debug(\"Stack trace: %s\", e, exc_info=True)\n message = \"Error 
transforming template: {0}\"\n matches.append(\n Match(\n 1, 1, 1, 1, self._filename, TransformError(), message.format(str(e))\n )\n )\n\n return matches\n\n @staticmethod\n def is_s3_uri(uri):\n \"\"\"\n Checks the uri and determines if it is a valid S3 Uri\n Parameters\n ----------\n uri str, required\n Uri to check\n Returns\n -------\n bool\n Returns True if the uri given is an S3 uri, otherwise False\n \"\"\"\n return isinstance(uri, str) and uri.startswith(\"s3://\")\n\n @staticmethod\n def _update_to_s3_uri(\n property_key, resource_property_dict, s3_uri_value=\"s3://bucket/value\"\n ):\n \"\"\"\n Updates the 'property_key' in the 'resource_property_dict' to the\n value of 's3_uri_value'\n Note: The function will mutate the resource_property_dict that is pass\n in Parameters\n ----------\n property_key str, required\n Key in the resource_property_dict\n resource_property_dict dict, required\n Property dictionary of a Resource in the template to replace\n s3_uri_value str, optional\n Value to update the value of the property_key to\n \"\"\"\n uri_property = resource_property_dict.get(property_key, \".\")\n\n # ignore if dict or already an S3 Uri\n if isinstance(uri_property, dict):\n if len(uri_property) == 1:\n for k in uri_property.keys():\n if k == \"Ref\":\n resource_property_dict[property_key] = s3_uri_value\n return\n if Transform.is_s3_uri(uri_property):\n return\n\n resource_property_dict[property_key] = s3_uri_value\n", "path": "src/cfnlint/transform.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport logging\nimport os\n\nimport samtranslator\nfrom samtranslator.parser import parser\nfrom samtranslator.public.exceptions import InvalidDocumentException\nfrom samtranslator.sdk import resource\nfrom samtranslator.translator.translator import Translator\n\nfrom cfnlint.data import Serverless\nfrom cfnlint.helpers import convert_dict, format_json_string, load_resource\nfrom cfnlint.rules import Match, TransformError\n\nLOGGER = logging.getLogger(\"cfnlint\")\n\nsamtranslator_logger = logging.getLogger(\"samtranslator\")\nsamtranslator_logger.setLevel(logging.CRITICAL)\n\n\n# Override SAM validation as cfn-lint does thoese\n# checks already\n# pylint: disable=unused-argument\ndef valid_override(self):\n return resource.SamResourceType.has_value(self.type)\n\n\n# pylint: disable=redefined-outer-name\nresource.SamResource.valid = valid_override\n\n\nclass Transform:\n \"\"\"\n Application Serverless Module tranform Wrapper.\n Based on code from AWS SAM CLI:\n https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/commands/validate/lib/sam_template_validator.py\n \"\"\"\n\n def __init__(self, filename, template, region):\n \"\"\"\n Initialize Transform class\n \"\"\"\n self._filename = filename\n self._template = template\n self._region = region\n self._parameters = {}\n\n self._managed_policy_map = self.load_managed_policies()\n self._sam_parser = parser.Parser()\n\n def template(self):\n \"\"\"Get the template\"\"\"\n return self._template\n\n def load_managed_policies(self):\n \"\"\"\n Load the ManagedPolicies locally, based on the AWS-CLI:\n https://github.com/awslabs/aws-sam-cli/blob/develop/samcli/lib/samlib/default_managed_policies.json\n \"\"\"\n return load_resource(Serverless, \"ManagedPolicies.json\")\n\n def _replace_local_codeuri(self):\n \"\"\"\n Replaces the CodeUri in AWS::Serverless::Function and DefinitionUri in\n AWS::Serverless::Api to a fake S3 Uri. 
This is to support running the\n SAM Translator with valid values for these fields. If this is not done,\n the template is invalid in the eyes of SAM Translator (the translator\n does not support local paths)\n \"\"\"\n\n all_resources = self._template.get(\"Resources\", {})\n\n template_globals = self._template.get(\"Globals\", {})\n auto_publish_alias = template_globals.get(\"Function\", {}).get(\n \"AutoPublishAlias\"\n )\n if isinstance(auto_publish_alias, dict):\n if len(auto_publish_alias) == 1:\n for k, v in auto_publish_alias.items():\n if k == \"Ref\":\n if v in self._template.get(\"Parameters\"):\n self._parameters[v] = \"Alias\"\n\n for _, resource in all_resources.items():\n resource_type = resource.get(\"Type\")\n resource_dict = resource.get(\"Properties\")\n\n if resource_type == \"AWS::Serverless::Function\":\n if resource_dict.get(\"PackageType\") == \"Image\":\n Transform._update_to_s3_uri(\"ImageUri\", resource_dict)\n else:\n Transform._update_to_s3_uri(\"CodeUri\", resource_dict)\n auto_publish_alias = resource_dict.get(\"AutoPublishAlias\")\n if isinstance(auto_publish_alias, dict):\n if len(auto_publish_alias) == 1:\n for k, v in auto_publish_alias.items():\n if k == \"Ref\":\n if v in self._template.get(\"Parameters\"):\n self._parameters[v] = \"Alias\"\n if resource_type in [\"AWS::Serverless::LayerVersion\"]:\n if resource_dict.get(\"ContentUri\"):\n Transform._update_to_s3_uri(\"ContentUri\", resource_dict)\n if resource_type == \"AWS::Serverless::Application\":\n if resource_dict.get(\"Location\"):\n if isinstance(resource_dict.get(\"Location\"), dict):\n resource_dict[\"Location\"] = \"\"\n Transform._update_to_s3_uri(\"Location\", resource_dict)\n if resource_type == \"AWS::Serverless::Api\":\n if (\n \"DefinitionBody\" not in resource_dict\n and \"Auth\" not in resource_dict\n and \"Cors\" not in resource_dict\n and \"DisableExecuteApiEndpoint\" not in resource_dict\n ):\n Transform._update_to_s3_uri(\"DefinitionUri\", resource_dict)\n else:\n resource_dict[\"DefinitionBody\"] = \"\"\n if resource_type == \"AWS::Serverless::StateMachine\" and resource_dict.get(\n \"DefinitionUri\"\n ):\n Transform._update_to_s3_uri(\"DefinitionUri\", resource_dict)\n\n def transform_template(self):\n \"\"\"\n Transform the Template using the Serverless Application Model.\n \"\"\"\n matches = []\n\n try:\n # Output the SAM Translator version in debug mode\n LOGGER.info(\"SAM Translator: %s\", samtranslator.__version__)\n\n sam_translator = Translator(\n managed_policy_map=self._managed_policy_map, sam_parser=self._sam_parser\n )\n\n self._replace_local_codeuri()\n\n # Tell SAM to use the region we're linting in, this has to be\n # controlled using the default AWS mechanisms, see also:\n # https://github.com/awslabs/serverless-application-model/blob/master/samtranslator/translator/arn_generator.py\n LOGGER.info(\"Setting AWS_DEFAULT_REGION to %s\", self._region)\n os.environ[\"AWS_DEFAULT_REGION\"] = self._region\n\n self._template = convert_dict(\n sam_translator.translate(\n sam_template=self._template, parameter_values=self._parameters\n )\n )\n\n LOGGER.info(\n \"Transformed template: \\n%s\", format_json_string(self._template)\n )\n except InvalidDocumentException as e:\n message = \"Error transforming template: {0}\"\n for cause in e.causes:\n matches.append(\n Match(\n 1,\n 1,\n 1,\n 1,\n self._filename,\n TransformError(),\n message.format(cause.message),\n )\n )\n except Exception as e: # pylint: disable=W0703\n LOGGER.debug(\"Error transforming template: %s\", 
str(e))\n LOGGER.debug(\"Stack trace: %s\", e, exc_info=True)\n message = \"Error transforming template: {0}\"\n matches.append(\n Match(\n 1, 1, 1, 1, self._filename, TransformError(), message.format(str(e))\n )\n )\n\n return matches\n\n @staticmethod\n def is_s3_uri(uri):\n \"\"\"\n Checks the uri and determines if it is a valid S3 Uri\n Parameters\n ----------\n uri str, required\n Uri to check\n Returns\n -------\n bool\n Returns True if the uri given is an S3 uri, otherwise False\n \"\"\"\n return isinstance(uri, str) and uri.startswith(\"s3://\")\n\n @staticmethod\n def _update_to_s3_uri(\n property_key, resource_property_dict, s3_uri_value=\"s3://bucket/value\"\n ):\n \"\"\"\n Updates the 'property_key' in the 'resource_property_dict' to the\n value of 's3_uri_value'\n Note: The function will mutate the resource_property_dict that is pass\n in Parameters\n ----------\n property_key str, required\n Key in the resource_property_dict\n resource_property_dict dict, required\n Property dictionary of a Resource in the template to replace\n s3_uri_value str, optional\n Value to update the value of the property_key to\n \"\"\"\n uri_property = resource_property_dict.get(property_key, \".\")\n\n # ignore if dict or already an S3 Uri\n if isinstance(uri_property, dict):\n if len(uri_property) == 1:\n for k in uri_property.keys():\n if k in [\"Ref\", \"Fn::Sub\"]:\n resource_property_dict[property_key] = s3_uri_value\n return\n if Transform.is_s3_uri(uri_property):\n return\n\n resource_property_dict[property_key] = s3_uri_value\n", "path": "src/cfnlint/transform.py"}]}
| 3,074 | 127 |
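
A note on the cfn-lint record above: the behavioral change in `_update_to_s3_uri` can be exercised in isolation. The sketch below restates the patched helper as a plain function (names simplified; the real code is a `@staticmethod` on `Transform`) and shows that an `Fn::Sub` CodeUri is now swapped for a placeholder S3 URI before the SAM translator runs.

```python
def is_s3_uri(uri):
    # True only for plain "s3://..." strings; intrinsic-function dicts fall through.
    return isinstance(uri, str) and uri.startswith("s3://")


def update_to_s3_uri(property_key, resource_properties, s3_uri_value="s3://bucket/value"):
    uri_property = resource_properties.get(property_key, ".")

    if isinstance(uri_property, dict):
        if len(uri_property) == 1:
            for key in uri_property.keys():
                # The patch widens this check from ["Ref"] to ["Ref", "Fn::Sub"].
                if key in ["Ref", "Fn::Sub"]:
                    resource_properties[property_key] = s3_uri_value
        return
    if is_s3_uri(uri_property):
        return

    resource_properties[property_key] = s3_uri_value


props = {"CodeUri": {"Fn::Sub": "s3://custom-authz-${AWS::AccountId}-${AWS::Region}/function-v1.0.zip"}}
update_to_s3_uri("CodeUri", props)
print(props["CodeUri"])  # -> s3://bucket/value, which the SAM translator accepts
```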
gh_patches_debug_21685
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-1683
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing 'org_results' data in production but has some in staging
In file 'web/frontend/help_platform_base.py', method 'group_project_list', the org_results appears to be blank in our production environment.
This manifests when we try to load the 'setup & installation' page.
<img width="734" alt="prod" src="https://cloud.githubusercontent.com/assets/4432296/9042832/2c773b94-39c6-11e5-9cc8-fa49f0a6f3ef.png">
However, the same setup is in staging and appears to work, for certain random projects:
<img width="737" alt="staging" src="https://cloud.githubusercontent.com/assets/4432296/9042839/37822d32-39c6-11e5-8c65-ddf6d700de75.png">
In staging I ran 'update sentry_organizationmember_teams set is_active = 1;' and the projects all showed up in the dropdown.
We _did_ add a custom SSO implementation, so perhaps we missed a step on account creation?
Since there is a "client keys" page, this isn't a show stopper but I'm worried that whatever is wrong could manifest in other ways that we haven't found yet.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/sentry/web/frontend/help_platform_base.py`
Content:
```
1 from __future__ import absolute_import
2
3 from django.db.models import Q
4 from itertools import groupby
5
6 from sentry.models import Project, ProjectKey
7 from sentry.web.frontend.base import BaseView
8
9
10 class HelpPlatformBaseView(BaseView):
11 auth_required = False
12
13 def get_project_list(self, user):
14 return list(Project.objects.filter(
15 Q(organization__member_set__has_global_access=True, organization__member_set__user=user)
16 | Q(team__organizationmember__user=user)
17 ).exclude(
18 team__organizationmemberteam__is_active=False,
19 ).select_related('team', 'organization').order_by('organization', 'team').distinct())
20
21 def group_project_list(self, project_list):
22 results = []
23 for org, org_project_list in groupby(project_list, key=lambda x: x.organization):
24 org_results = []
25 for team, team_project_list in groupby(list(org_project_list), key=lambda x: x.team):
26 org_results.append((team, list(team_project_list)))
27 results.append((org, org_results))
28 return results
29
30 def get_key(self, project):
31 try:
32 key = ProjectKey.objects.filter(project=project)[0]
33 except IndexError:
34 key = None
35 return key
36
37 def convert_args(self, request, *args, **kwargs):
38 try:
39 pid = int(request.GET.get('pid', request.session.get('pid', 0)))
40 except (TypeError, ValueError):
41 pid = None
42
43 if request.user.is_authenticated():
44 project_list = self.get_project_list(request.user)
45 else:
46 project_list = []
47
48 if pid:
49 for project in project_list:
50 if pid == project.id:
51 selected_project = project
52 break
53 else:
54 selected_project = None
55 else:
56 selected_project = None
57
58 if selected_project:
59 request.session['pid'] = selected_project.id
60
61 kwargs['project_list'] = project_list
62 kwargs['selected_project'] = selected_project
63
64 return (args, kwargs)
65
66 def get_context_data(self, request, project_list, selected_project, **kwargs):
67 context = super(HelpPlatformBaseView, self).get_context_data(request, **kwargs)
68
69 if selected_project:
70 key = self.get_key(selected_project)
71 else:
72 key = None
73
74 if key:
75 dsn_private = key.dsn_private
76 dsn_public = key.dsn_public
77 else:
78 dsn_private = None
79 dsn_public = None
80
81 context = {
82 'selected_project': selected_project,
83 'org_results': self.group_project_list(project_list),
84 'dsn': dsn_private,
85 'dsn_public': dsn_public,
86
87 }
88 return context
89
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/sentry/web/frontend/help_platform_base.py b/src/sentry/web/frontend/help_platform_base.py
--- a/src/sentry/web/frontend/help_platform_base.py
+++ b/src/sentry/web/frontend/help_platform_base.py
@@ -3,7 +3,7 @@
from django.db.models import Q
from itertools import groupby
-from sentry.models import Project, ProjectKey
+from sentry.models import OrganizationMemberTeam, Project, ProjectKey
from sentry.web.frontend.base import BaseView
@@ -15,8 +15,11 @@
Q(organization__member_set__has_global_access=True, organization__member_set__user=user)
| Q(team__organizationmember__user=user)
).exclude(
- team__organizationmemberteam__is_active=False,
- ).select_related('team', 'organization').order_by('organization', 'team').distinct())
+ team__id__in=OrganizationMemberTeam.objects.filter(
+ organizationmember__user=user,
+ is_active=False
+ ).values('team_id')
+ ).select_related('team', 'organization').order_by('organization__name', 'team__name', 'name').distinct())
def group_project_list(self, project_list):
results = []
|
{"golden_diff": "diff --git a/src/sentry/web/frontend/help_platform_base.py b/src/sentry/web/frontend/help_platform_base.py\n--- a/src/sentry/web/frontend/help_platform_base.py\n+++ b/src/sentry/web/frontend/help_platform_base.py\n@@ -3,7 +3,7 @@\n from django.db.models import Q\n from itertools import groupby\n \n-from sentry.models import Project, ProjectKey\n+from sentry.models import OrganizationMemberTeam, Project, ProjectKey\n from sentry.web.frontend.base import BaseView\n \n \n@@ -15,8 +15,11 @@\n Q(organization__member_set__has_global_access=True, organization__member_set__user=user)\n | Q(team__organizationmember__user=user)\n ).exclude(\n- team__organizationmemberteam__is_active=False,\n- ).select_related('team', 'organization').order_by('organization', 'team').distinct())\n+ team__id__in=OrganizationMemberTeam.objects.filter(\n+ organizationmember__user=user,\n+ is_active=False\n+ ).values('team_id')\n+ ).select_related('team', 'organization').order_by('organization__name', 'team__name', 'name').distinct())\n \n def group_project_list(self, project_list):\n results = []\n", "issue": "Missing 'org_results' data in production but has some in staging\nIn file 'web/frontend/help_platform_base.py', method 'group_project_list' the org_results appears to be blank in our production environment.\n\nThis manifests when we try to load the 'setup & installation' page\n<img width=\"734\" alt=\"prod\" src=\"https://cloud.githubusercontent.com/assets/4432296/9042832/2c773b94-39c6-11e5-9cc8-fa49f0a6f3ef.png\">\n\nHowever, the same setup is in staging and appears to work, for certain random projects:\n<img width=\"737\" alt=\"staging\" src=\"https://cloud.githubusercontent.com/assets/4432296/9042839/37822d32-39c6-11e5-8c65-ddf6d700de75.png\">\n\nIn staging I ran 'update sentry_organizationmember_teams set is_active = 1;' and the projects all showed up in the dropdown. 
\n\nWe _did_ add a custom SSO implementation, so perhaps we missed a step on account creation?\n\nSince there is a \"client keys\" page, this isn't a show stopper but I'm worried that whatever is wrong could manifest in other ways that we haven't found yet.\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom django.db.models import Q\nfrom itertools import groupby\n\nfrom sentry.models import Project, ProjectKey\nfrom sentry.web.frontend.base import BaseView\n\n\nclass HelpPlatformBaseView(BaseView):\n auth_required = False\n\n def get_project_list(self, user):\n return list(Project.objects.filter(\n Q(organization__member_set__has_global_access=True, organization__member_set__user=user)\n | Q(team__organizationmember__user=user)\n ).exclude(\n team__organizationmemberteam__is_active=False,\n ).select_related('team', 'organization').order_by('organization', 'team').distinct())\n\n def group_project_list(self, project_list):\n results = []\n for org, org_project_list in groupby(project_list, key=lambda x: x.organization):\n org_results = []\n for team, team_project_list in groupby(list(org_project_list), key=lambda x: x.team):\n org_results.append((team, list(team_project_list)))\n results.append((org, org_results))\n return results\n\n def get_key(self, project):\n try:\n key = ProjectKey.objects.filter(project=project)[0]\n except IndexError:\n key = None\n return key\n\n def convert_args(self, request, *args, **kwargs):\n try:\n pid = int(request.GET.get('pid', request.session.get('pid', 0)))\n except (TypeError, ValueError):\n pid = None\n\n if request.user.is_authenticated():\n project_list = self.get_project_list(request.user)\n else:\n project_list = []\n\n if pid:\n for project in project_list:\n if pid == project.id:\n selected_project = project\n break\n else:\n selected_project = None\n else:\n selected_project = None\n\n if selected_project:\n request.session['pid'] = selected_project.id\n\n kwargs['project_list'] = project_list\n kwargs['selected_project'] = selected_project\n\n return (args, kwargs)\n\n def get_context_data(self, request, project_list, selected_project, **kwargs):\n context = super(HelpPlatformBaseView, self).get_context_data(request, **kwargs)\n\n if selected_project:\n key = self.get_key(selected_project)\n else:\n key = None\n\n if key:\n dsn_private = key.dsn_private\n dsn_public = key.dsn_public\n else:\n dsn_private = None\n dsn_public = None\n\n context = {\n 'selected_project': selected_project,\n 'org_results': self.group_project_list(project_list),\n 'dsn': dsn_private,\n 'dsn_public': dsn_public,\n\n }\n return context\n", "path": "src/sentry/web/frontend/help_platform_base.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nfrom django.db.models import Q\nfrom itertools import groupby\n\nfrom sentry.models import OrganizationMemberTeam, Project, ProjectKey\nfrom sentry.web.frontend.base import BaseView\n\n\nclass HelpPlatformBaseView(BaseView):\n auth_required = False\n\n def get_project_list(self, user):\n return list(Project.objects.filter(\n Q(organization__member_set__has_global_access=True, organization__member_set__user=user)\n | Q(team__organizationmember__user=user)\n ).exclude(\n team__id__in=OrganizationMemberTeam.objects.filter(\n organizationmember__user=user,\n is_active=False\n ).values('team_id')\n ).select_related('team', 'organization').order_by('organization__name', 'team__name', 'name').distinct())\n\n def group_project_list(self, project_list):\n results = []\n for org, 
org_project_list in groupby(project_list, key=lambda x: x.organization):\n org_results = []\n for team, team_project_list in groupby(list(org_project_list), key=lambda x: x.team):\n org_results.append((team, list(team_project_list)))\n results.append((org, org_results))\n return results\n\n def get_key(self, project):\n try:\n key = ProjectKey.objects.filter(project=project)[0]\n except IndexError:\n key = None\n return key\n\n def convert_args(self, request, *args, **kwargs):\n try:\n pid = int(request.GET.get('pid', request.session.get('pid', 0)))\n except (TypeError, ValueError):\n pid = None\n\n if request.user.is_authenticated():\n project_list = self.get_project_list(request.user)\n else:\n project_list = []\n\n if pid:\n for project in project_list:\n if pid == project.id:\n selected_project = project\n break\n else:\n selected_project = None\n else:\n selected_project = None\n\n if selected_project:\n request.session['pid'] = selected_project.id\n\n kwargs['project_list'] = project_list\n kwargs['selected_project'] = selected_project\n\n return (args, kwargs)\n\n def get_context_data(self, request, project_list, selected_project, **kwargs):\n context = super(HelpPlatformBaseView, self).get_context_data(request, **kwargs)\n\n if selected_project:\n key = self.get_key(selected_project)\n else:\n key = None\n\n if key:\n dsn_private = key.dsn_private\n dsn_public = key.dsn_public\n else:\n dsn_private = None\n dsn_public = None\n\n context = {\n 'selected_project': selected_project,\n 'org_results': self.group_project_list(project_list),\n 'dsn': dsn_private,\n 'dsn_public': dsn_public,\n\n }\n return context\n", "path": "src/sentry/web/frontend/help_platform_base.py"}]}
| 1,329 | 271 |
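
A note on the Sentry record above: the gist of the queryset change is easiest to see outside the ORM. The toy model below (plain Python with hypothetical data, not Django) assumes the old `exclude(team__organizationmemberteam__is_active=False)` dropped a project whenever any member's team row was inactive, whereas the patched subquery only looks at the requesting user's inactive rows — consistent with the reporter seeing projects reappear after setting `is_active = 1` everywhere.

```python
# (user, team, is_active) rows, loosely mirroring sentry_organizationmember_teams
member_team_rows = [
    ("alice", "backend", True),
    ("bob", "backend", False),  # someone else's inactive membership
]
projects = [("backend", "sentry-api")]  # (team, project)


def visible_old(user):
    # user is ignored: the old exclude matched inactive rows from any member
    inactive_for_anyone = {team for _, team, active in member_team_rows if not active}
    return [proj for team, proj in projects if team not in inactive_for_anyone]


def visible_new(user):
    inactive_for_user = {
        team for u, team, active in member_team_rows if u == user and not active
    }
    return [proj for team, proj in projects if team not in inactive_for_user]


print(visible_old("alice"))  # [] -- the project vanishes, as in the screenshots
print(visible_new("alice"))  # ['sentry-api']
```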
gh_patches_debug_34602
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-5958
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove pkg_resources usage
Importing pkg_resources raises a deprecation warning now, so it makes sense to remove it. We can even use importlib without checking if it's available, as we no longer support 3.7. I see only `pkg_resources.parse_version` and `pkg_resources.iter_entry_points` used in Scrapy; I think they can be replaced by things from `packaging` and `importlib.metadata` respectively.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from pathlib import Path
2
3 from pkg_resources import parse_version
4 from setuptools import __version__ as setuptools_version
5 from setuptools import find_packages, setup
6
7 version = (Path(__file__).parent / "scrapy/VERSION").read_text("ascii").strip()
8
9
10 def has_environment_marker_platform_impl_support():
11 """Code extracted from 'pytest/setup.py'
12 https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31
13
14 The first known release to support environment marker with range operators
15 it is 18.5, see:
16 https://setuptools.readthedocs.io/en/latest/history.html#id235
17 """
18 return parse_version(setuptools_version) >= parse_version("18.5")
19
20
21 install_requires = [
22 "Twisted>=18.9.0",
23 "cryptography>=36.0.0",
24 "cssselect>=0.9.1",
25 "itemloaders>=1.0.1",
26 "parsel>=1.5.0",
27 "pyOpenSSL>=21.0.0",
28 "queuelib>=1.4.2",
29 "service_identity>=18.1.0",
30 "w3lib>=1.17.0",
31 "zope.interface>=5.1.0",
32 "protego>=0.1.15",
33 "itemadapter>=0.1.0",
34 "setuptools",
35 "packaging",
36 "tldextract",
37 "lxml>=4.4.1",
38 ]
39 extras_require = {}
40 cpython_dependencies = [
41 "PyDispatcher>=2.0.5",
42 ]
43 if has_environment_marker_platform_impl_support():
44 extras_require[
45 ':platform_python_implementation == "CPython"'
46 ] = cpython_dependencies
47 extras_require[':platform_python_implementation == "PyPy"'] = [
48 "PyPyDispatcher>=2.1.0",
49 ]
50 else:
51 install_requires.extend(cpython_dependencies)
52
53
54 setup(
55 name="Scrapy",
56 version=version,
57 url="https://scrapy.org",
58 project_urls={
59 "Documentation": "https://docs.scrapy.org/",
60 "Source": "https://github.com/scrapy/scrapy",
61 "Tracker": "https://github.com/scrapy/scrapy/issues",
62 },
63 description="A high-level Web Crawling and Web Scraping framework",
64 long_description=open("README.rst", encoding="utf-8").read(),
65 author="Scrapy developers",
66 author_email="[email protected]",
67 maintainer="Pablo Hoffman",
68 maintainer_email="[email protected]",
69 license="BSD",
70 packages=find_packages(exclude=("tests", "tests.*")),
71 include_package_data=True,
72 zip_safe=False,
73 entry_points={"console_scripts": ["scrapy = scrapy.cmdline:execute"]},
74 classifiers=[
75 "Framework :: Scrapy",
76 "Development Status :: 5 - Production/Stable",
77 "Environment :: Console",
78 "Intended Audience :: Developers",
79 "License :: OSI Approved :: BSD License",
80 "Operating System :: OS Independent",
81 "Programming Language :: Python",
82 "Programming Language :: Python :: 3",
83 "Programming Language :: Python :: 3.8",
84 "Programming Language :: Python :: 3.9",
85 "Programming Language :: Python :: 3.10",
86 "Programming Language :: Python :: 3.11",
87 "Programming Language :: Python :: Implementation :: CPython",
88 "Programming Language :: Python :: Implementation :: PyPy",
89 "Topic :: Internet :: WWW/HTTP",
90 "Topic :: Software Development :: Libraries :: Application Frameworks",
91 "Topic :: Software Development :: Libraries :: Python Modules",
92 ],
93 python_requires=">=3.8",
94 install_requires=install_requires,
95 extras_require=extras_require,
96 )
97
```
Path: `scrapy/cmdline.py`
Content:
```
1 import argparse
2 import cProfile
3 import inspect
4 import os
5 import sys
6
7 import pkg_resources
8
9 import scrapy
10 from scrapy.commands import BaseRunSpiderCommand, ScrapyCommand, ScrapyHelpFormatter
11 from scrapy.crawler import CrawlerProcess
12 from scrapy.exceptions import UsageError
13 from scrapy.utils.misc import walk_modules
14 from scrapy.utils.project import get_project_settings, inside_project
15 from scrapy.utils.python import garbage_collect
16
17
18 class ScrapyArgumentParser(argparse.ArgumentParser):
19 def _parse_optional(self, arg_string):
20 # if starts with -: it means that is a parameter not a argument
21 if arg_string[:2] == "-:":
22 return None
23
24 return super()._parse_optional(arg_string)
25
26
27 def _iter_command_classes(module_name):
28 # TODO: add `name` attribute to commands and merge this function with
29 # scrapy.utils.spider.iter_spider_classes
30 for module in walk_modules(module_name):
31 for obj in vars(module).values():
32 if (
33 inspect.isclass(obj)
34 and issubclass(obj, ScrapyCommand)
35 and obj.__module__ == module.__name__
36 and obj not in (ScrapyCommand, BaseRunSpiderCommand)
37 ):
38 yield obj
39
40
41 def _get_commands_from_module(module, inproject):
42 d = {}
43 for cmd in _iter_command_classes(module):
44 if inproject or not cmd.requires_project:
45 cmdname = cmd.__module__.split(".")[-1]
46 d[cmdname] = cmd()
47 return d
48
49
50 def _get_commands_from_entry_points(inproject, group="scrapy.commands"):
51 cmds = {}
52 for entry_point in pkg_resources.iter_entry_points(group):
53 obj = entry_point.load()
54 if inspect.isclass(obj):
55 cmds[entry_point.name] = obj()
56 else:
57 raise Exception(f"Invalid entry point {entry_point.name}")
58 return cmds
59
60
61 def _get_commands_dict(settings, inproject):
62 cmds = _get_commands_from_module("scrapy.commands", inproject)
63 cmds.update(_get_commands_from_entry_points(inproject))
64 cmds_module = settings["COMMANDS_MODULE"]
65 if cmds_module:
66 cmds.update(_get_commands_from_module(cmds_module, inproject))
67 return cmds
68
69
70 def _pop_command_name(argv):
71 i = 0
72 for arg in argv[1:]:
73 if not arg.startswith("-"):
74 del argv[i]
75 return arg
76 i += 1
77
78
79 def _print_header(settings, inproject):
80 version = scrapy.__version__
81 if inproject:
82 print(f"Scrapy {version} - active project: {settings['BOT_NAME']}\n")
83
84 else:
85 print(f"Scrapy {version} - no active project\n")
86
87
88 def _print_commands(settings, inproject):
89 _print_header(settings, inproject)
90 print("Usage:")
91 print(" scrapy <command> [options] [args]\n")
92 print("Available commands:")
93 cmds = _get_commands_dict(settings, inproject)
94 for cmdname, cmdclass in sorted(cmds.items()):
95 print(f" {cmdname:<13} {cmdclass.short_desc()}")
96 if not inproject:
97 print()
98 print(" [ more ] More commands available when run from project directory")
99 print()
100 print('Use "scrapy <command> -h" to see more info about a command')
101
102
103 def _print_unknown_command(settings, cmdname, inproject):
104 _print_header(settings, inproject)
105 print(f"Unknown command: {cmdname}\n")
106 print('Use "scrapy" to see available commands')
107
108
109 def _run_print_help(parser, func, *a, **kw):
110 try:
111 func(*a, **kw)
112 except UsageError as e:
113 if str(e):
114 parser.error(str(e))
115 if e.print_help:
116 parser.print_help()
117 sys.exit(2)
118
119
120 def execute(argv=None, settings=None):
121 if argv is None:
122 argv = sys.argv
123
124 if settings is None:
125 settings = get_project_settings()
126 # set EDITOR from environment if available
127 try:
128 editor = os.environ["EDITOR"]
129 except KeyError:
130 pass
131 else:
132 settings["EDITOR"] = editor
133
134 inproject = inside_project()
135 cmds = _get_commands_dict(settings, inproject)
136 cmdname = _pop_command_name(argv)
137 if not cmdname:
138 _print_commands(settings, inproject)
139 sys.exit(0)
140 elif cmdname not in cmds:
141 _print_unknown_command(settings, cmdname, inproject)
142 sys.exit(2)
143
144 cmd = cmds[cmdname]
145 parser = ScrapyArgumentParser(
146 formatter_class=ScrapyHelpFormatter,
147 usage=f"scrapy {cmdname} {cmd.syntax()}",
148 conflict_handler="resolve",
149 description=cmd.long_desc(),
150 )
151 settings.setdict(cmd.default_settings, priority="command")
152 cmd.settings = settings
153 cmd.add_options(parser)
154 opts, args = parser.parse_known_args(args=argv[1:])
155 _run_print_help(parser, cmd.process_options, args, opts)
156
157 cmd.crawler_process = CrawlerProcess(settings)
158 _run_print_help(parser, _run_command, cmd, args, opts)
159 sys.exit(cmd.exitcode)
160
161
162 def _run_command(cmd, args, opts):
163 if opts.profile:
164 _run_command_profiled(cmd, args, opts)
165 else:
166 cmd.run(args, opts)
167
168
169 def _run_command_profiled(cmd, args, opts):
170 if opts.profile:
171 sys.stderr.write(f"scrapy: writing cProfile stats to {opts.profile!r}\n")
172 loc = locals()
173 p = cProfile.Profile()
174 p.runctx("cmd.run(args, opts)", globals(), loc)
175 if opts.profile:
176 p.dump_stats(opts.profile)
177
178
179 if __name__ == "__main__":
180 try:
181 execute()
182 finally:
183 # Twisted prints errors in DebugInfo.__del__, but PyPy does not run gc.collect() on exit:
184 # http://doc.pypy.org/en/latest/cpython_differences.html
185 # ?highlight=gc.collect#differences-related-to-garbage-collection-strategies
186 garbage_collect()
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/cmdline.py b/scrapy/cmdline.py
--- a/scrapy/cmdline.py
+++ b/scrapy/cmdline.py
@@ -3,8 +3,7 @@
import inspect
import os
import sys
-
-import pkg_resources
+from importlib.metadata import entry_points
import scrapy
from scrapy.commands import BaseRunSpiderCommand, ScrapyCommand, ScrapyHelpFormatter
@@ -49,7 +48,7 @@
def _get_commands_from_entry_points(inproject, group="scrapy.commands"):
cmds = {}
- for entry_point in pkg_resources.iter_entry_points(group):
+ for entry_point in entry_points().get(group, {}):
obj = entry_point.load()
if inspect.isclass(obj):
cmds[entry_point.name] = obj()
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,23 +1,10 @@
from pathlib import Path
-from pkg_resources import parse_version
-from setuptools import __version__ as setuptools_version
from setuptools import find_packages, setup
version = (Path(__file__).parent / "scrapy/VERSION").read_text("ascii").strip()
-def has_environment_marker_platform_impl_support():
- """Code extracted from 'pytest/setup.py'
- https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31
-
- The first known release to support environment marker with range operators
- it is 18.5, see:
- https://setuptools.readthedocs.io/en/latest/history.html#id235
- """
- return parse_version(setuptools_version) >= parse_version("18.5")
-
-
install_requires = [
"Twisted>=18.9.0",
"cryptography>=36.0.0",
@@ -36,19 +23,10 @@
"tldextract",
"lxml>=4.4.1",
]
-extras_require = {}
-cpython_dependencies = [
- "PyDispatcher>=2.0.5",
-]
-if has_environment_marker_platform_impl_support():
- extras_require[
- ':platform_python_implementation == "CPython"'
- ] = cpython_dependencies
- extras_require[':platform_python_implementation == "PyPy"'] = [
- "PyPyDispatcher>=2.1.0",
- ]
-else:
- install_requires.extend(cpython_dependencies)
+extras_require = {
+ ':platform_python_implementation == "CPython"': ["PyDispatcher>=2.0.5"],
+ ':platform_python_implementation == "PyPy"': ["PyPyDispatcher>=2.1.0"],
+}
setup(
|
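
A note on the Scrapy diff above: a minimal, self-contained sketch of loading plugin commands via `importlib.metadata` instead of `pkg_resources`. The group name comes from the diff; the `select()`/`get()` fallback is an assumption added here for portability, since the shape returned by `entry_points()` changed across Python 3.8–3.12 (the golden diff itself only uses `.get(group, {})`).

```python
import inspect
from importlib.metadata import entry_points


def load_command_classes(group="scrapy.commands"):
    eps = entry_points()
    # Python 3.10+ returns an object with .select(); 3.8/3.9 return a dict of lists.
    selected = eps.select(group=group) if hasattr(eps, "select") else eps.get(group, [])
    commands = {}
    for ep in selected:
        obj = ep.load()
        if inspect.isclass(obj):
            commands[ep.name] = obj()
        else:
            raise TypeError(f"Invalid entry point {ep.name}")
    return commands


if __name__ == "__main__":
    # With no scrapy.commands plugins installed this just prints an empty list.
    print(sorted(load_command_classes()))
```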
{"golden_diff": "diff --git a/scrapy/cmdline.py b/scrapy/cmdline.py\n--- a/scrapy/cmdline.py\n+++ b/scrapy/cmdline.py\n@@ -3,8 +3,7 @@\n import inspect\n import os\n import sys\n-\n-import pkg_resources\n+from importlib.metadata import entry_points\n \n import scrapy\n from scrapy.commands import BaseRunSpiderCommand, ScrapyCommand, ScrapyHelpFormatter\n@@ -49,7 +48,7 @@\n \n def _get_commands_from_entry_points(inproject, group=\"scrapy.commands\"):\n cmds = {}\n- for entry_point in pkg_resources.iter_entry_points(group):\n+ for entry_point in entry_points().get(group, {}):\n obj = entry_point.load()\n if inspect.isclass(obj):\n cmds[entry_point.name] = obj()\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,23 +1,10 @@\n from pathlib import Path\n \n-from pkg_resources import parse_version\n-from setuptools import __version__ as setuptools_version\n from setuptools import find_packages, setup\n \n version = (Path(__file__).parent / \"scrapy/VERSION\").read_text(\"ascii\").strip()\n \n \n-def has_environment_marker_platform_impl_support():\n- \"\"\"Code extracted from 'pytest/setup.py'\n- https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31\n-\n- The first known release to support environment marker with range operators\n- it is 18.5, see:\n- https://setuptools.readthedocs.io/en/latest/history.html#id235\n- \"\"\"\n- return parse_version(setuptools_version) >= parse_version(\"18.5\")\n-\n-\n install_requires = [\n \"Twisted>=18.9.0\",\n \"cryptography>=36.0.0\",\n@@ -36,19 +23,10 @@\n \"tldextract\",\n \"lxml>=4.4.1\",\n ]\n-extras_require = {}\n-cpython_dependencies = [\n- \"PyDispatcher>=2.0.5\",\n-]\n-if has_environment_marker_platform_impl_support():\n- extras_require[\n- ':platform_python_implementation == \"CPython\"'\n- ] = cpython_dependencies\n- extras_require[':platform_python_implementation == \"PyPy\"'] = [\n- \"PyPyDispatcher>=2.1.0\",\n- ]\n-else:\n- install_requires.extend(cpython_dependencies)\n+extras_require = {\n+ ':platform_python_implementation == \"CPython\"': [\"PyDispatcher>=2.0.5\"],\n+ ':platform_python_implementation == \"PyPy\"': [\"PyPyDispatcher>=2.1.0\"],\n+}\n \n \n setup(\n", "issue": "Remove pkg_resources usage\nImporting pkg_resources raises a deprecation warning now so it makes sense to remove it. We can even use importlib without checking if it's available as we no longer support 3.7. 
I see only `pkg_resources.parse_version` and `pkg_resources.iter_entry_points` used in Scrapy, I think they can be replaced by things from `packaging` and `importlib.metadata` respectively.\n", "before_files": [{"content": "from pathlib import Path\n\nfrom pkg_resources import parse_version\nfrom setuptools import __version__ as setuptools_version\nfrom setuptools import find_packages, setup\n\nversion = (Path(__file__).parent / \"scrapy/VERSION\").read_text(\"ascii\").strip()\n\n\ndef has_environment_marker_platform_impl_support():\n \"\"\"Code extracted from 'pytest/setup.py'\n https://github.com/pytest-dev/pytest/blob/7538680c/setup.py#L31\n\n The first known release to support environment marker with range operators\n it is 18.5, see:\n https://setuptools.readthedocs.io/en/latest/history.html#id235\n \"\"\"\n return parse_version(setuptools_version) >= parse_version(\"18.5\")\n\n\ninstall_requires = [\n \"Twisted>=18.9.0\",\n \"cryptography>=36.0.0\",\n \"cssselect>=0.9.1\",\n \"itemloaders>=1.0.1\",\n \"parsel>=1.5.0\",\n \"pyOpenSSL>=21.0.0\",\n \"queuelib>=1.4.2\",\n \"service_identity>=18.1.0\",\n \"w3lib>=1.17.0\",\n \"zope.interface>=5.1.0\",\n \"protego>=0.1.15\",\n \"itemadapter>=0.1.0\",\n \"setuptools\",\n \"packaging\",\n \"tldextract\",\n \"lxml>=4.4.1\",\n]\nextras_require = {}\ncpython_dependencies = [\n \"PyDispatcher>=2.0.5\",\n]\nif has_environment_marker_platform_impl_support():\n extras_require[\n ':platform_python_implementation == \"CPython\"'\n ] = cpython_dependencies\n extras_require[':platform_python_implementation == \"PyPy\"'] = [\n \"PyPyDispatcher>=2.1.0\",\n ]\nelse:\n install_requires.extend(cpython_dependencies)\n\n\nsetup(\n name=\"Scrapy\",\n version=version,\n url=\"https://scrapy.org\",\n project_urls={\n \"Documentation\": \"https://docs.scrapy.org/\",\n \"Source\": \"https://github.com/scrapy/scrapy\",\n \"Tracker\": \"https://github.com/scrapy/scrapy/issues\",\n },\n description=\"A high-level Web Crawling and Web Scraping framework\",\n long_description=open(\"README.rst\", encoding=\"utf-8\").read(),\n author=\"Scrapy developers\",\n author_email=\"[email protected]\",\n maintainer=\"Pablo Hoffman\",\n maintainer_email=\"[email protected]\",\n license=\"BSD\",\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n include_package_data=True,\n zip_safe=False,\n entry_points={\"console_scripts\": [\"scrapy = scrapy.cmdline:execute\"]},\n classifiers=[\n \"Framework :: Scrapy\",\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Software Development :: Libraries :: Application Frameworks\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n python_requires=\">=3.8\",\n install_requires=install_requires,\n extras_require=extras_require,\n)\n", "path": "setup.py"}, {"content": "import argparse\nimport cProfile\nimport inspect\nimport os\nimport sys\n\nimport pkg_resources\n\nimport scrapy\nfrom scrapy.commands import 
BaseRunSpiderCommand, ScrapyCommand, ScrapyHelpFormatter\nfrom scrapy.crawler import CrawlerProcess\nfrom scrapy.exceptions import UsageError\nfrom scrapy.utils.misc import walk_modules\nfrom scrapy.utils.project import get_project_settings, inside_project\nfrom scrapy.utils.python import garbage_collect\n\n\nclass ScrapyArgumentParser(argparse.ArgumentParser):\n def _parse_optional(self, arg_string):\n # if starts with -: it means that is a parameter not a argument\n if arg_string[:2] == \"-:\":\n return None\n\n return super()._parse_optional(arg_string)\n\n\ndef _iter_command_classes(module_name):\n # TODO: add `name` attribute to commands and merge this function with\n # scrapy.utils.spider.iter_spider_classes\n for module in walk_modules(module_name):\n for obj in vars(module).values():\n if (\n inspect.isclass(obj)\n and issubclass(obj, ScrapyCommand)\n and obj.__module__ == module.__name__\n and obj not in (ScrapyCommand, BaseRunSpiderCommand)\n ):\n yield obj\n\n\ndef _get_commands_from_module(module, inproject):\n d = {}\n for cmd in _iter_command_classes(module):\n if inproject or not cmd.requires_project:\n cmdname = cmd.__module__.split(\".\")[-1]\n d[cmdname] = cmd()\n return d\n\n\ndef _get_commands_from_entry_points(inproject, group=\"scrapy.commands\"):\n cmds = {}\n for entry_point in pkg_resources.iter_entry_points(group):\n obj = entry_point.load()\n if inspect.isclass(obj):\n cmds[entry_point.name] = obj()\n else:\n raise Exception(f\"Invalid entry point {entry_point.name}\")\n return cmds\n\n\ndef _get_commands_dict(settings, inproject):\n cmds = _get_commands_from_module(\"scrapy.commands\", inproject)\n cmds.update(_get_commands_from_entry_points(inproject))\n cmds_module = settings[\"COMMANDS_MODULE\"]\n if cmds_module:\n cmds.update(_get_commands_from_module(cmds_module, inproject))\n return cmds\n\n\ndef _pop_command_name(argv):\n i = 0\n for arg in argv[1:]:\n if not arg.startswith(\"-\"):\n del argv[i]\n return arg\n i += 1\n\n\ndef _print_header(settings, inproject):\n version = scrapy.__version__\n if inproject:\n print(f\"Scrapy {version} - active project: {settings['BOT_NAME']}\\n\")\n\n else:\n print(f\"Scrapy {version} - no active project\\n\")\n\n\ndef _print_commands(settings, inproject):\n _print_header(settings, inproject)\n print(\"Usage:\")\n print(\" scrapy <command> [options] [args]\\n\")\n print(\"Available commands:\")\n cmds = _get_commands_dict(settings, inproject)\n for cmdname, cmdclass in sorted(cmds.items()):\n print(f\" {cmdname:<13} {cmdclass.short_desc()}\")\n if not inproject:\n print()\n print(\" [ more ] More commands available when run from project directory\")\n print()\n print('Use \"scrapy <command> -h\" to see more info about a command')\n\n\ndef _print_unknown_command(settings, cmdname, inproject):\n _print_header(settings, inproject)\n print(f\"Unknown command: {cmdname}\\n\")\n print('Use \"scrapy\" to see available commands')\n\n\ndef _run_print_help(parser, func, *a, **kw):\n try:\n func(*a, **kw)\n except UsageError as e:\n if str(e):\n parser.error(str(e))\n if e.print_help:\n parser.print_help()\n sys.exit(2)\n\n\ndef execute(argv=None, settings=None):\n if argv is None:\n argv = sys.argv\n\n if settings is None:\n settings = get_project_settings()\n # set EDITOR from environment if available\n try:\n editor = os.environ[\"EDITOR\"]\n except KeyError:\n pass\n else:\n settings[\"EDITOR\"] = editor\n\n inproject = inside_project()\n cmds = _get_commands_dict(settings, inproject)\n cmdname = _pop_command_name(argv)\n if not 
cmdname:\n _print_commands(settings, inproject)\n sys.exit(0)\n elif cmdname not in cmds:\n _print_unknown_command(settings, cmdname, inproject)\n sys.exit(2)\n\n cmd = cmds[cmdname]\n parser = ScrapyArgumentParser(\n formatter_class=ScrapyHelpFormatter,\n usage=f\"scrapy {cmdname} {cmd.syntax()}\",\n conflict_handler=\"resolve\",\n description=cmd.long_desc(),\n )\n settings.setdict(cmd.default_settings, priority=\"command\")\n cmd.settings = settings\n cmd.add_options(parser)\n opts, args = parser.parse_known_args(args=argv[1:])\n _run_print_help(parser, cmd.process_options, args, opts)\n\n cmd.crawler_process = CrawlerProcess(settings)\n _run_print_help(parser, _run_command, cmd, args, opts)\n sys.exit(cmd.exitcode)\n\n\ndef _run_command(cmd, args, opts):\n if opts.profile:\n _run_command_profiled(cmd, args, opts)\n else:\n cmd.run(args, opts)\n\n\ndef _run_command_profiled(cmd, args, opts):\n if opts.profile:\n sys.stderr.write(f\"scrapy: writing cProfile stats to {opts.profile!r}\\n\")\n loc = locals()\n p = cProfile.Profile()\n p.runctx(\"cmd.run(args, opts)\", globals(), loc)\n if opts.profile:\n p.dump_stats(opts.profile)\n\n\nif __name__ == \"__main__\":\n try:\n execute()\n finally:\n # Twisted prints errors in DebugInfo.__del__, but PyPy does not run gc.collect() on exit:\n # http://doc.pypy.org/en/latest/cpython_differences.html\n # ?highlight=gc.collect#differences-related-to-garbage-collection-strategies\n garbage_collect()\n", "path": "scrapy/cmdline.py"}], "after_files": [{"content": "from pathlib import Path\n\nfrom setuptools import find_packages, setup\n\nversion = (Path(__file__).parent / \"scrapy/VERSION\").read_text(\"ascii\").strip()\n\n\ninstall_requires = [\n \"Twisted>=18.9.0\",\n \"cryptography>=36.0.0\",\n \"cssselect>=0.9.1\",\n \"itemloaders>=1.0.1\",\n \"parsel>=1.5.0\",\n \"pyOpenSSL>=21.0.0\",\n \"queuelib>=1.4.2\",\n \"service_identity>=18.1.0\",\n \"w3lib>=1.17.0\",\n \"zope.interface>=5.1.0\",\n \"protego>=0.1.15\",\n \"itemadapter>=0.1.0\",\n \"setuptools\",\n \"packaging\",\n \"tldextract\",\n \"lxml>=4.4.1\",\n]\nextras_require = {\n ':platform_python_implementation == \"CPython\"': [\"PyDispatcher>=2.0.5\"],\n ':platform_python_implementation == \"PyPy\"': [\"PyPyDispatcher>=2.1.0\"],\n}\n\n\nsetup(\n name=\"Scrapy\",\n version=version,\n url=\"https://scrapy.org\",\n project_urls={\n \"Documentation\": \"https://docs.scrapy.org/\",\n \"Source\": \"https://github.com/scrapy/scrapy\",\n \"Tracker\": \"https://github.com/scrapy/scrapy/issues\",\n },\n description=\"A high-level Web Crawling and Web Scraping framework\",\n long_description=open(\"README.rst\", encoding=\"utf-8\").read(),\n author=\"Scrapy developers\",\n author_email=\"[email protected]\",\n maintainer=\"Pablo Hoffman\",\n maintainer_email=\"[email protected]\",\n license=\"BSD\",\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n include_package_data=True,\n zip_safe=False,\n entry_points={\"console_scripts\": [\"scrapy = scrapy.cmdline:execute\"]},\n classifiers=[\n \"Framework :: Scrapy\",\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming 
Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Software Development :: Libraries :: Application Frameworks\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n python_requires=\">=3.8\",\n install_requires=install_requires,\n extras_require=extras_require,\n)\n", "path": "setup.py"}, {"content": "import argparse\nimport cProfile\nimport inspect\nimport os\nimport sys\nfrom importlib.metadata import entry_points\n\nimport scrapy\nfrom scrapy.commands import BaseRunSpiderCommand, ScrapyCommand, ScrapyHelpFormatter\nfrom scrapy.crawler import CrawlerProcess\nfrom scrapy.exceptions import UsageError\nfrom scrapy.utils.misc import walk_modules\nfrom scrapy.utils.project import get_project_settings, inside_project\nfrom scrapy.utils.python import garbage_collect\n\n\nclass ScrapyArgumentParser(argparse.ArgumentParser):\n def _parse_optional(self, arg_string):\n # if starts with -: it means that is a parameter not a argument\n if arg_string[:2] == \"-:\":\n return None\n\n return super()._parse_optional(arg_string)\n\n\ndef _iter_command_classes(module_name):\n # TODO: add `name` attribute to commands and merge this function with\n # scrapy.utils.spider.iter_spider_classes\n for module in walk_modules(module_name):\n for obj in vars(module).values():\n if (\n inspect.isclass(obj)\n and issubclass(obj, ScrapyCommand)\n and obj.__module__ == module.__name__\n and obj not in (ScrapyCommand, BaseRunSpiderCommand)\n ):\n yield obj\n\n\ndef _get_commands_from_module(module, inproject):\n d = {}\n for cmd in _iter_command_classes(module):\n if inproject or not cmd.requires_project:\n cmdname = cmd.__module__.split(\".\")[-1]\n d[cmdname] = cmd()\n return d\n\n\ndef _get_commands_from_entry_points(inproject, group=\"scrapy.commands\"):\n cmds = {}\n for entry_point in entry_points().get(group, {}):\n obj = entry_point.load()\n if inspect.isclass(obj):\n cmds[entry_point.name] = obj()\n else:\n raise Exception(f\"Invalid entry point {entry_point.name}\")\n return cmds\n\n\ndef _get_commands_dict(settings, inproject):\n cmds = _get_commands_from_module(\"scrapy.commands\", inproject)\n cmds.update(_get_commands_from_entry_points(inproject))\n cmds_module = settings[\"COMMANDS_MODULE\"]\n if cmds_module:\n cmds.update(_get_commands_from_module(cmds_module, inproject))\n return cmds\n\n\ndef _pop_command_name(argv):\n i = 0\n for arg in argv[1:]:\n if not arg.startswith(\"-\"):\n del argv[i]\n return arg\n i += 1\n\n\ndef _print_header(settings, inproject):\n version = scrapy.__version__\n if inproject:\n print(f\"Scrapy {version} - active project: {settings['BOT_NAME']}\\n\")\n\n else:\n print(f\"Scrapy {version} - no active project\\n\")\n\n\ndef _print_commands(settings, inproject):\n _print_header(settings, inproject)\n print(\"Usage:\")\n print(\" scrapy <command> [options] [args]\\n\")\n print(\"Available commands:\")\n cmds = _get_commands_dict(settings, inproject)\n for cmdname, cmdclass in sorted(cmds.items()):\n print(f\" {cmdname:<13} {cmdclass.short_desc()}\")\n if not inproject:\n print()\n print(\" [ more ] More commands available when run from project directory\")\n print()\n print('Use \"scrapy <command> -h\" to see more info about a command')\n\n\ndef _print_unknown_command(settings, cmdname, inproject):\n _print_header(settings, inproject)\n print(f\"Unknown command: {cmdname}\\n\")\n print('Use \"scrapy\" to see available commands')\n\n\ndef 
_run_print_help(parser, func, *a, **kw):\n try:\n func(*a, **kw)\n except UsageError as e:\n if str(e):\n parser.error(str(e))\n if e.print_help:\n parser.print_help()\n sys.exit(2)\n\n\ndef execute(argv=None, settings=None):\n if argv is None:\n argv = sys.argv\n\n if settings is None:\n settings = get_project_settings()\n # set EDITOR from environment if available\n try:\n editor = os.environ[\"EDITOR\"]\n except KeyError:\n pass\n else:\n settings[\"EDITOR\"] = editor\n\n inproject = inside_project()\n cmds = _get_commands_dict(settings, inproject)\n cmdname = _pop_command_name(argv)\n if not cmdname:\n _print_commands(settings, inproject)\n sys.exit(0)\n elif cmdname not in cmds:\n _print_unknown_command(settings, cmdname, inproject)\n sys.exit(2)\n\n cmd = cmds[cmdname]\n parser = ScrapyArgumentParser(\n formatter_class=ScrapyHelpFormatter,\n usage=f\"scrapy {cmdname} {cmd.syntax()}\",\n conflict_handler=\"resolve\",\n description=cmd.long_desc(),\n )\n settings.setdict(cmd.default_settings, priority=\"command\")\n cmd.settings = settings\n cmd.add_options(parser)\n opts, args = parser.parse_known_args(args=argv[1:])\n _run_print_help(parser, cmd.process_options, args, opts)\n\n cmd.crawler_process = CrawlerProcess(settings)\n _run_print_help(parser, _run_command, cmd, args, opts)\n sys.exit(cmd.exitcode)\n\n\ndef _run_command(cmd, args, opts):\n if opts.profile:\n _run_command_profiled(cmd, args, opts)\n else:\n cmd.run(args, opts)\n\n\ndef _run_command_profiled(cmd, args, opts):\n if opts.profile:\n sys.stderr.write(f\"scrapy: writing cProfile stats to {opts.profile!r}\\n\")\n loc = locals()\n p = cProfile.Profile()\n p.runctx(\"cmd.run(args, opts)\", globals(), loc)\n if opts.profile:\n p.dump_stats(opts.profile)\n\n\nif __name__ == \"__main__\":\n try:\n execute()\n finally:\n # Twisted prints errors in DebugInfo.__del__, but PyPy does not run gc.collect() on exit:\n # http://doc.pypy.org/en/latest/cpython_differences.html\n # ?highlight=gc.collect#differences-related-to-garbage-collection-strategies\n garbage_collect()\n", "path": "scrapy/cmdline.py"}]}
| 3,164 | 592 |
gh_patches_debug_4364
|
rasdani/github-patches
|
git_diff
|
jupyterhub__jupyterhub-245
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
500 : Internal Server Error KeyError: 'user'
Hi,
I'm trying to install jupyterhub locally.
but running and connecting to the server I get a 500, I can get to the administrator pages (when sudo jupyterhub 'ing) but not add users etc.
however when I go to the root page (or users dashboard) I get a 500 with :
Uncaught exception GET /user/user/tree (127.0.0.1)
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/tornado/web.py", line 1346, in _execute
result = method(_self.path_args, *_self.path_kwargs)
File "/usr/local/lib/python3.4/dist-packages/tornado/web.py", line 2613, in wrapper
if not self.current_user:
File "/usr/local/lib/python3.4/dist-packages/tornado/web.py", line 1028, in current_user
self._current_user = self.get_current_user()
File "/usr/local/lib/python3.4/dist-packages/IPython/html/base/handlers.py", line 72, in get_current_user
return self.login_handler.get_user(self)
File "/src/jupyterhub/jupyterhub/singleuser.py", line 95, in get_user
user = auth_data['user']
KeyError: 'user'
any ideas ?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/singleuser.py`
Content:
```
1 #!/usr/bin/env python3
2 """Extend regular notebook server to be aware of multiuser things."""
3
4 # Copyright (c) Jupyter Development Team.
5 # Distributed under the terms of the Modified BSD License.
6
7 import os
8 try:
9 from urllib.parse import quote
10 except ImportError:
11 # PY2 Compat
12 from urllib import quote
13
14 import requests
15 from jinja2 import ChoiceLoader, FunctionLoader
16
17 from tornado import ioloop
18 from tornado.web import HTTPError
19
20 from IPython.utils.traitlets import (
21 Integer,
22 Unicode,
23 CUnicode,
24 )
25
26 from IPython.html.notebookapp import NotebookApp
27 from IPython.html.auth.login import LoginHandler
28 from IPython.html.auth.logout import LogoutHandler
29
30 from IPython.html.utils import url_path_join
31
32
33 from distutils.version import LooseVersion as V
34
35 import IPython
36 if V(IPython.__version__) < V('3.0'):
37 raise ImportError("JupyterHub Requires IPython >= 3.0, found %s" % IPython.__version__)
38
39 # Define two methods to attach to AuthenticatedHandler,
40 # which authenticate via the central auth server.
41
42 class JupyterHubLoginHandler(LoginHandler):
43 @staticmethod
44 def login_available(settings):
45 return True
46
47 @staticmethod
48 def verify_token(self, cookie_name, encrypted_cookie):
49 """method for token verification"""
50 cookie_cache = self.settings['cookie_cache']
51 if encrypted_cookie in cookie_cache:
52 # we've seen this token before, don't ask upstream again
53 return cookie_cache[encrypted_cookie]
54
55 hub_api_url = self.settings['hub_api_url']
56 hub_api_key = self.settings['hub_api_key']
57 r = requests.get(url_path_join(
58 hub_api_url, "authorizations/cookie", cookie_name, quote(encrypted_cookie, safe=''),
59 ),
60 headers = {'Authorization' : 'token %s' % hub_api_key},
61 )
62 if r.status_code == 404:
63 data = None
64 elif r.status_code == 403:
65 self.log.error("I don't have permission to verify cookies, my auth token may have expired: [%i] %s", r.status_code, r.reason)
66 raise HTTPError(500, "Permission failure checking authorization, I may need to be restarted")
67 elif r.status_code >= 500:
68 self.log.error("Upstream failure verifying auth token: [%i] %s", r.status_code, r.reason)
69 raise HTTPError(502, "Failed to check authorization (upstream problem)")
70 elif r.status_code >= 400:
71 self.log.warn("Failed to check authorization: [%i] %s", r.status_code, r.reason)
72 raise HTTPError(500, "Failed to check authorization")
73 else:
74 data = r.json()
75 cookie_cache[encrypted_cookie] = data
76 return data
77
78 @staticmethod
79 def get_user(self):
80 """alternative get_current_user to query the central server"""
81 # only allow this to be called once per handler
82 # avoids issues if an error is raised,
83 # since this may be called again when trying to render the error page
84 if hasattr(self, '_cached_user'):
85 return self._cached_user
86
87 self._cached_user = None
88 my_user = self.settings['user']
89 encrypted_cookie = self.get_cookie(self.cookie_name)
90 if encrypted_cookie:
91 auth_data = JupyterHubLoginHandler.verify_token(self, self.cookie_name, encrypted_cookie)
92 if not auth_data:
93 # treat invalid token the same as no token
94 return None
95 user = auth_data['user']
96 if user == my_user:
97 self._cached_user = user
98 return user
99 else:
100 return None
101 else:
102 self.log.debug("No token cookie")
103 return None
104
105
106 class JupyterHubLogoutHandler(LogoutHandler):
107 def get(self):
108 self.redirect(url_path_join(self.settings['hub_prefix'], 'logout'))
109
110
111 # register new hub related command-line aliases
112 aliases = NotebookApp.aliases.get_default_value()
113 aliases.update({
114 'user' : 'SingleUserNotebookApp.user',
115 'cookie-name': 'SingleUserNotebookApp.cookie_name',
116 'hub-prefix': 'SingleUserNotebookApp.hub_prefix',
117 'hub-api-url': 'SingleUserNotebookApp.hub_api_url',
118 'base-url': 'SingleUserNotebookApp.base_url',
119 })
120
121 page_template = """
122 {% extends "templates/page.html" %}
123
124 {% block header_buttons %}
125 {{super()}}
126
127 <a href='{{hub_control_panel_url}}'
128 class='btn btn-default btn-sm navbar-btn pull-right'
129 style='margin-right: 4px; margin-left: 2px;'
130 >
131 Control Panel</a>
132 {% endblock %}
133 """
134
135 class SingleUserNotebookApp(NotebookApp):
136 """A Subclass of the regular NotebookApp that is aware of the parent multiuser context."""
137 user = CUnicode(config=True)
138 def _user_changed(self, name, old, new):
139 self.log.name = new
140 cookie_name = Unicode(config=True)
141 hub_prefix = Unicode(config=True)
142 hub_api_url = Unicode(config=True)
143 aliases = aliases
144 open_browser = False
145 trust_xheaders = True
146 login_handler_class = JupyterHubLoginHandler
147 logout_handler_class = JupyterHubLogoutHandler
148
149 cookie_cache_lifetime = Integer(
150 config=True,
151 default_value=300,
152 allow_none=True,
153 help="""
154 Time, in seconds, that we cache a validated cookie before requiring
155 revalidation with the hub.
156 """,
157 )
158
159 def _log_datefmt_default(self):
160 """Exclude date from default date format"""
161 return "%Y-%m-%d %H:%M:%S"
162
163 def _log_format_default(self):
164 """override default log format to include time"""
165 return "%(color)s[%(levelname)1.1s %(asctime)s.%(msecs).03d %(name)s %(module)s:%(lineno)d]%(end_color)s %(message)s"
166
167 def _confirm_exit(self):
168 # disable the exit confirmation for background notebook processes
169 ioloop.IOLoop.instance().stop()
170
171 def _clear_cookie_cache(self):
172 self.log.debug("Clearing cookie cache")
173 self.tornado_settings['cookie_cache'].clear()
174
175 def start(self):
176 # Start a PeriodicCallback to clear cached cookies. This forces us to
177 # revalidate our user with the Hub at least every
178 # `cookie_cache_lifetime` seconds.
179 if self.cookie_cache_lifetime:
180 ioloop.PeriodicCallback(
181 self._clear_cookie_cache,
182 self.cookie_cache_lifetime * 1e3,
183 ).start()
184 super(SingleUserNotebookApp, self).start()
185
186 def init_webapp(self):
187 # load the hub related settings into the tornado settings dict
188 env = os.environ
189 s = self.tornado_settings
190 s['cookie_cache'] = {}
191 s['user'] = self.user
192 s['hub_api_key'] = env.pop('JPY_API_TOKEN')
193 s['hub_prefix'] = self.hub_prefix
194 s['cookie_name'] = self.cookie_name
195 s['login_url'] = self.hub_prefix
196 s['hub_api_url'] = self.hub_api_url
197
198 super(SingleUserNotebookApp, self).init_webapp()
199 self.patch_templates()
200
201 def patch_templates(self):
202 """Patch page templates to add Hub-related buttons"""
203 env = self.web_app.settings['jinja2_env']
204
205 env.globals['hub_control_panel_url'] = \
206 url_path_join(self.hub_prefix, 'home')
207
208 # patch jinja env loading to modify page template
209 def get_page(name):
210 if name == 'page.html':
211 return page_template
212
213 orig_loader = env.loader
214 env.loader = ChoiceLoader([
215 FunctionLoader(get_page),
216 orig_loader,
217 ])
218
219
220 def main():
221 return SingleUserNotebookApp.launch_instance()
222
223
224 if __name__ == "__main__":
225 main()
226
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/jupyterhub/singleuser.py b/jupyterhub/singleuser.py
--- a/jupyterhub/singleuser.py
+++ b/jupyterhub/singleuser.py
@@ -92,7 +92,7 @@
if not auth_data:
# treat invalid token the same as no token
return None
- user = auth_data['user']
+ user = auth_data['name']
if user == my_user:
self._cached_user = user
return user
|
{"golden_diff": "diff --git a/jupyterhub/singleuser.py b/jupyterhub/singleuser.py\n--- a/jupyterhub/singleuser.py\n+++ b/jupyterhub/singleuser.py\n@@ -92,7 +92,7 @@\n if not auth_data:\n # treat invalid token the same as no token\n return None\n- user = auth_data['user']\n+ user = auth_data['name']\n if user == my_user:\n self._cached_user = user\n return user\n", "issue": "500 : Internal Server Error KeyError: 'user'\nHi,\n\nI'm trying to install jupyterhub locally.\nbut running and connecting to the server I get a 500, I can get to the administrator pages (when sudo jupyterhub 'ing) but not add users etc.\n\nhowever when I go to the root page (or users dashboard) I get a 500 with :\nUncaught exception GET /user/user/tree (127.0.0.1)\n Traceback (most recent call last):\n File \"/usr/local/lib/python3.4/dist-packages/tornado/web.py\", line 1346, in _execute\n result = method(_self.path_args, *_self.path_kwargs)\n File \"/usr/local/lib/python3.4/dist-packages/tornado/web.py\", line 2613, in wrapper\n if not self.current_user:\n File \"/usr/local/lib/python3.4/dist-packages/tornado/web.py\", line 1028, in current_user\n self._current_user = self.get_current_user()\n File \"/usr/local/lib/python3.4/dist-packages/IPython/html/base/handlers.py\", line 72, in get_current_user\n return self.login_handler.get_user(self)\n File \"/src/jupyterhub/jupyterhub/singleuser.py\", line 95, in get_user\n user = auth_data['user']\n KeyError: 'user'\n\nany ideas ?\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\"\"\"Extend regular notebook server to be aware of multiuser things.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport os\ntry:\n from urllib.parse import quote\nexcept ImportError:\n # PY2 Compat\n from urllib import quote\n\nimport requests\nfrom jinja2 import ChoiceLoader, FunctionLoader\n\nfrom tornado import ioloop\nfrom tornado.web import HTTPError\n\nfrom IPython.utils.traitlets import (\n Integer,\n Unicode,\n CUnicode,\n)\n\nfrom IPython.html.notebookapp import NotebookApp\nfrom IPython.html.auth.login import LoginHandler\nfrom IPython.html.auth.logout import LogoutHandler\n\nfrom IPython.html.utils import url_path_join\n\n\nfrom distutils.version import LooseVersion as V\n\nimport IPython\nif V(IPython.__version__) < V('3.0'):\n raise ImportError(\"JupyterHub Requires IPython >= 3.0, found %s\" % IPython.__version__)\n\n# Define two methods to attach to AuthenticatedHandler,\n# which authenticate via the central auth server.\n\nclass JupyterHubLoginHandler(LoginHandler):\n @staticmethod\n def login_available(settings):\n return True\n \n @staticmethod\n def verify_token(self, cookie_name, encrypted_cookie):\n \"\"\"method for token verification\"\"\"\n cookie_cache = self.settings['cookie_cache']\n if encrypted_cookie in cookie_cache:\n # we've seen this token before, don't ask upstream again\n return cookie_cache[encrypted_cookie]\n \n hub_api_url = self.settings['hub_api_url']\n hub_api_key = self.settings['hub_api_key']\n r = requests.get(url_path_join(\n hub_api_url, \"authorizations/cookie\", cookie_name, quote(encrypted_cookie, safe=''),\n ),\n headers = {'Authorization' : 'token %s' % hub_api_key},\n )\n if r.status_code == 404:\n data = None\n elif r.status_code == 403:\n self.log.error(\"I don't have permission to verify cookies, my auth token may have expired: [%i] %s\", r.status_code, r.reason)\n raise HTTPError(500, \"Permission failure checking authorization, I may need to be restarted\")\n elif 
r.status_code >= 500:\n self.log.error(\"Upstream failure verifying auth token: [%i] %s\", r.status_code, r.reason)\n raise HTTPError(502, \"Failed to check authorization (upstream problem)\")\n elif r.status_code >= 400:\n self.log.warn(\"Failed to check authorization: [%i] %s\", r.status_code, r.reason)\n raise HTTPError(500, \"Failed to check authorization\")\n else:\n data = r.json()\n cookie_cache[encrypted_cookie] = data\n return data\n \n @staticmethod\n def get_user(self):\n \"\"\"alternative get_current_user to query the central server\"\"\"\n # only allow this to be called once per handler\n # avoids issues if an error is raised,\n # since this may be called again when trying to render the error page\n if hasattr(self, '_cached_user'):\n return self._cached_user\n \n self._cached_user = None\n my_user = self.settings['user']\n encrypted_cookie = self.get_cookie(self.cookie_name)\n if encrypted_cookie:\n auth_data = JupyterHubLoginHandler.verify_token(self, self.cookie_name, encrypted_cookie)\n if not auth_data:\n # treat invalid token the same as no token\n return None\n user = auth_data['user']\n if user == my_user:\n self._cached_user = user\n return user\n else:\n return None\n else:\n self.log.debug(\"No token cookie\")\n return None\n\n\nclass JupyterHubLogoutHandler(LogoutHandler):\n def get(self):\n self.redirect(url_path_join(self.settings['hub_prefix'], 'logout'))\n\n\n# register new hub related command-line aliases\naliases = NotebookApp.aliases.get_default_value()\naliases.update({\n 'user' : 'SingleUserNotebookApp.user',\n 'cookie-name': 'SingleUserNotebookApp.cookie_name',\n 'hub-prefix': 'SingleUserNotebookApp.hub_prefix',\n 'hub-api-url': 'SingleUserNotebookApp.hub_api_url',\n 'base-url': 'SingleUserNotebookApp.base_url',\n})\n\npage_template = \"\"\"\n{% extends \"templates/page.html\" %}\n\n{% block header_buttons %}\n{{super()}}\n\n<a href='{{hub_control_panel_url}}'\n class='btn btn-default btn-sm navbar-btn pull-right'\n style='margin-right: 4px; margin-left: 2px;'\n>\nControl Panel</a>\n{% endblock %}\n\"\"\"\n\nclass SingleUserNotebookApp(NotebookApp):\n \"\"\"A Subclass of the regular NotebookApp that is aware of the parent multiuser context.\"\"\"\n user = CUnicode(config=True)\n def _user_changed(self, name, old, new):\n self.log.name = new\n cookie_name = Unicode(config=True)\n hub_prefix = Unicode(config=True)\n hub_api_url = Unicode(config=True)\n aliases = aliases\n open_browser = False\n trust_xheaders = True\n login_handler_class = JupyterHubLoginHandler\n logout_handler_class = JupyterHubLogoutHandler\n\n cookie_cache_lifetime = Integer(\n config=True,\n default_value=300,\n allow_none=True,\n help=\"\"\"\n Time, in seconds, that we cache a validated cookie before requiring\n revalidation with the hub.\n \"\"\",\n )\n\n def _log_datefmt_default(self):\n \"\"\"Exclude date from default date format\"\"\"\n return \"%Y-%m-%d %H:%M:%S\"\n\n def _log_format_default(self):\n \"\"\"override default log format to include time\"\"\"\n return \"%(color)s[%(levelname)1.1s %(asctime)s.%(msecs).03d %(name)s %(module)s:%(lineno)d]%(end_color)s %(message)s\"\n\n def _confirm_exit(self):\n # disable the exit confirmation for background notebook processes\n ioloop.IOLoop.instance().stop()\n\n def _clear_cookie_cache(self):\n self.log.debug(\"Clearing cookie cache\")\n self.tornado_settings['cookie_cache'].clear()\n \n def start(self):\n # Start a PeriodicCallback to clear cached cookies. 
This forces us to\n # revalidate our user with the Hub at least every\n # `cookie_cache_lifetime` seconds.\n if self.cookie_cache_lifetime:\n ioloop.PeriodicCallback(\n self._clear_cookie_cache,\n self.cookie_cache_lifetime * 1e3,\n ).start()\n super(SingleUserNotebookApp, self).start()\n \n def init_webapp(self):\n # load the hub related settings into the tornado settings dict\n env = os.environ\n s = self.tornado_settings\n s['cookie_cache'] = {}\n s['user'] = self.user\n s['hub_api_key'] = env.pop('JPY_API_TOKEN')\n s['hub_prefix'] = self.hub_prefix\n s['cookie_name'] = self.cookie_name\n s['login_url'] = self.hub_prefix\n s['hub_api_url'] = self.hub_api_url\n \n super(SingleUserNotebookApp, self).init_webapp()\n self.patch_templates()\n \n def patch_templates(self):\n \"\"\"Patch page templates to add Hub-related buttons\"\"\"\n env = self.web_app.settings['jinja2_env']\n \n env.globals['hub_control_panel_url'] = \\\n url_path_join(self.hub_prefix, 'home')\n \n # patch jinja env loading to modify page template\n def get_page(name):\n if name == 'page.html':\n return page_template\n \n orig_loader = env.loader\n env.loader = ChoiceLoader([\n FunctionLoader(get_page),\n orig_loader,\n ])\n\n\ndef main():\n return SingleUserNotebookApp.launch_instance()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "jupyterhub/singleuser.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\"\"\"Extend regular notebook server to be aware of multiuser things.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport os\ntry:\n from urllib.parse import quote\nexcept ImportError:\n # PY2 Compat\n from urllib import quote\n\nimport requests\nfrom jinja2 import ChoiceLoader, FunctionLoader\n\nfrom tornado import ioloop\nfrom tornado.web import HTTPError\n\nfrom IPython.utils.traitlets import (\n Integer,\n Unicode,\n CUnicode,\n)\n\nfrom IPython.html.notebookapp import NotebookApp\nfrom IPython.html.auth.login import LoginHandler\nfrom IPython.html.auth.logout import LogoutHandler\n\nfrom IPython.html.utils import url_path_join\n\n\nfrom distutils.version import LooseVersion as V\n\nimport IPython\nif V(IPython.__version__) < V('3.0'):\n raise ImportError(\"JupyterHub Requires IPython >= 3.0, found %s\" % IPython.__version__)\n\n# Define two methods to attach to AuthenticatedHandler,\n# which authenticate via the central auth server.\n\nclass JupyterHubLoginHandler(LoginHandler):\n @staticmethod\n def login_available(settings):\n return True\n \n @staticmethod\n def verify_token(self, cookie_name, encrypted_cookie):\n \"\"\"method for token verification\"\"\"\n cookie_cache = self.settings['cookie_cache']\n if encrypted_cookie in cookie_cache:\n # we've seen this token before, don't ask upstream again\n return cookie_cache[encrypted_cookie]\n \n hub_api_url = self.settings['hub_api_url']\n hub_api_key = self.settings['hub_api_key']\n r = requests.get(url_path_join(\n hub_api_url, \"authorizations/cookie\", cookie_name, quote(encrypted_cookie, safe=''),\n ),\n headers = {'Authorization' : 'token %s' % hub_api_key},\n )\n if r.status_code == 404:\n data = None\n elif r.status_code == 403:\n self.log.error(\"I don't have permission to verify cookies, my auth token may have expired: [%i] %s\", r.status_code, r.reason)\n raise HTTPError(500, \"Permission failure checking authorization, I may need to be restarted\")\n elif r.status_code >= 500:\n self.log.error(\"Upstream failure verifying auth token: [%i] %s\", r.status_code, r.reason)\n 
raise HTTPError(502, \"Failed to check authorization (upstream problem)\")\n elif r.status_code >= 400:\n self.log.warn(\"Failed to check authorization: [%i] %s\", r.status_code, r.reason)\n raise HTTPError(500, \"Failed to check authorization\")\n else:\n data = r.json()\n cookie_cache[encrypted_cookie] = data\n return data\n \n @staticmethod\n def get_user(self):\n \"\"\"alternative get_current_user to query the central server\"\"\"\n # only allow this to be called once per handler\n # avoids issues if an error is raised,\n # since this may be called again when trying to render the error page\n if hasattr(self, '_cached_user'):\n return self._cached_user\n \n self._cached_user = None\n my_user = self.settings['user']\n encrypted_cookie = self.get_cookie(self.cookie_name)\n if encrypted_cookie:\n auth_data = JupyterHubLoginHandler.verify_token(self, self.cookie_name, encrypted_cookie)\n if not auth_data:\n # treat invalid token the same as no token\n return None\n user = auth_data['name']\n if user == my_user:\n self._cached_user = user\n return user\n else:\n return None\n else:\n self.log.debug(\"No token cookie\")\n return None\n\n\nclass JupyterHubLogoutHandler(LogoutHandler):\n def get(self):\n self.redirect(url_path_join(self.settings['hub_prefix'], 'logout'))\n\n\n# register new hub related command-line aliases\naliases = NotebookApp.aliases.get_default_value()\naliases.update({\n 'user' : 'SingleUserNotebookApp.user',\n 'cookie-name': 'SingleUserNotebookApp.cookie_name',\n 'hub-prefix': 'SingleUserNotebookApp.hub_prefix',\n 'hub-api-url': 'SingleUserNotebookApp.hub_api_url',\n 'base-url': 'SingleUserNotebookApp.base_url',\n})\n\npage_template = \"\"\"\n{% extends \"templates/page.html\" %}\n\n{% block header_buttons %}\n{{super()}}\n\n<a href='{{hub_control_panel_url}}'\n class='btn btn-default btn-sm navbar-btn pull-right'\n style='margin-right: 4px; margin-left: 2px;'\n>\nControl Panel</a>\n{% endblock %}\n\"\"\"\n\nclass SingleUserNotebookApp(NotebookApp):\n \"\"\"A Subclass of the regular NotebookApp that is aware of the parent multiuser context.\"\"\"\n user = CUnicode(config=True)\n def _user_changed(self, name, old, new):\n self.log.name = new\n cookie_name = Unicode(config=True)\n hub_prefix = Unicode(config=True)\n hub_api_url = Unicode(config=True)\n aliases = aliases\n open_browser = False\n trust_xheaders = True\n login_handler_class = JupyterHubLoginHandler\n logout_handler_class = JupyterHubLogoutHandler\n\n cookie_cache_lifetime = Integer(\n config=True,\n default_value=300,\n allow_none=True,\n help=\"\"\"\n Time, in seconds, that we cache a validated cookie before requiring\n revalidation with the hub.\n \"\"\",\n )\n\n def _log_datefmt_default(self):\n \"\"\"Exclude date from default date format\"\"\"\n return \"%Y-%m-%d %H:%M:%S\"\n\n def _log_format_default(self):\n \"\"\"override default log format to include time\"\"\"\n return \"%(color)s[%(levelname)1.1s %(asctime)s.%(msecs).03d %(name)s %(module)s:%(lineno)d]%(end_color)s %(message)s\"\n\n def _confirm_exit(self):\n # disable the exit confirmation for background notebook processes\n ioloop.IOLoop.instance().stop()\n\n def _clear_cookie_cache(self):\n self.log.debug(\"Clearing cookie cache\")\n self.tornado_settings['cookie_cache'].clear()\n \n def start(self):\n # Start a PeriodicCallback to clear cached cookies. 
This forces us to\n # revalidate our user with the Hub at least every\n # `cookie_cache_lifetime` seconds.\n if self.cookie_cache_lifetime:\n ioloop.PeriodicCallback(\n self._clear_cookie_cache,\n self.cookie_cache_lifetime * 1e3,\n ).start()\n super(SingleUserNotebookApp, self).start()\n \n def init_webapp(self):\n # load the hub related settings into the tornado settings dict\n env = os.environ\n s = self.tornado_settings\n s['cookie_cache'] = {}\n s['user'] = self.user\n s['hub_api_key'] = env.pop('JPY_API_TOKEN')\n s['hub_prefix'] = self.hub_prefix\n s['cookie_name'] = self.cookie_name\n s['login_url'] = self.hub_prefix\n s['hub_api_url'] = self.hub_api_url\n \n super(SingleUserNotebookApp, self).init_webapp()\n self.patch_templates()\n \n def patch_templates(self):\n \"\"\"Patch page templates to add Hub-related buttons\"\"\"\n env = self.web_app.settings['jinja2_env']\n \n env.globals['hub_control_panel_url'] = \\\n url_path_join(self.hub_prefix, 'home')\n \n # patch jinja env loading to modify page template\n def get_page(name):\n if name == 'page.html':\n return page_template\n \n orig_loader = env.loader\n env.loader = ChoiceLoader([\n FunctionLoader(get_page),\n orig_loader,\n ])\n\n\ndef main():\n return SingleUserNotebookApp.launch_instance()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "jupyterhub/singleuser.py"}]}
| 2,880 | 106 |
gh_patches_debug_30138
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-1988
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Most Beloved Book on Instance has zero local reviews
**Describe the bug**
On the About this Instance page there's a cool section that shows various books, like Most Beloved, Most Contentious, Most Wanted to Read. The Most Beloved category, however, appears to be driven by federated statistics and not local statistics. A book with no local ratings is listed as the most Beloved Book because *other* instances have rated it highly.
**To Reproduce**
Steps to reproduce the behavior:
1. Don't have a lot of ratings on your instance.
2. Import a book that is highly rated from another (in this case 3 reviews, all 5 stars)
3. Go to instancedomain/about and view the Most Beloved book
I'm really only guessing on these steps to reproduce, it's just something I stumbled across.
**Expected behavior**
I expect the most beloved book to be beloved by my instance, not other instances. Federated statistics would also be cool, but it doesn't seem like this section is meant to represent the network.
**Instance**
https://tankie.ml/
---
**Smartphone (please complete the following information):**
- Device: Samsung Galaxy A52 5G
- OS: Android 12
- Browser: Firefox
- Version: 97.2.0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bookwyrm/templatetags/landing_page_tags.py`
Content:
```
1 """ template filters """
2 from django import template
3 from django.db.models import Avg, StdDev, Count, F, Q
4
5 from bookwyrm import models
6
7 register = template.Library()
8
9
10 @register.simple_tag(takes_context=False)
11 def get_book_superlatives():
12 """get book stats for the about page"""
13 total_ratings = models.Review.objects.filter(local=True, deleted=False).count()
14 data = {}
15 data["top_rated"] = (
16 models.Work.objects.annotate(
17 rating=Avg(
18 "editions__review__rating",
19 filter=Q(editions__review__local=True, editions__review__deleted=False),
20 ),
21 rating_count=Count(
22 "editions__review",
23 filter=Q(editions__review__local=True, editions__review__deleted=False),
24 ),
25 )
26 .annotate(weighted=F("rating") * F("rating_count") / total_ratings)
27 .filter(rating__gt=4, weighted__gt=0)
28 .order_by("-weighted")
29 .first()
30 )
31
32 data["controversial"] = (
33 models.Work.objects.annotate(
34 deviation=StdDev(
35 "editions__review__rating",
36 filter=Q(editions__review__local=True, editions__review__deleted=False),
37 ),
38 rating_count=Count(
39 "editions__review",
40 filter=Q(editions__review__local=True, editions__review__deleted=False),
41 ),
42 )
43 .annotate(weighted=F("deviation") * F("rating_count") / total_ratings)
44 .filter(weighted__gt=0)
45 .order_by("-weighted")
46 .first()
47 )
48
49 data["wanted"] = (
50 models.Work.objects.annotate(
51 shelf_count=Count(
52 "editions__shelves", filter=Q(editions__shelves__identifier="to-read")
53 )
54 )
55 .order_by("-shelf_count")
56 .first()
57 )
58 return data
59
60
61 @register.simple_tag(takes_context=False)
62 def get_landing_books():
63 """list of books for the landing page"""
64 return list(
65 set(
66 models.Edition.objects.filter(
67 review__published_date__isnull=False,
68 review__deleted=False,
69 review__user__local=True,
70 review__privacy__in=["public", "unlisted"],
71 )
72 .exclude(cover__exact="")
73 .distinct()
74 .order_by("-review__published_date")[:6]
75 )
76 )
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bookwyrm/templatetags/landing_page_tags.py b/bookwyrm/templatetags/landing_page_tags.py
--- a/bookwyrm/templatetags/landing_page_tags.py
+++ b/bookwyrm/templatetags/landing_page_tags.py
@@ -16,11 +16,15 @@
models.Work.objects.annotate(
rating=Avg(
"editions__review__rating",
- filter=Q(editions__review__local=True, editions__review__deleted=False),
+ filter=Q(
+ editions__review__user__local=True, editions__review__deleted=False
+ ),
),
rating_count=Count(
"editions__review",
- filter=Q(editions__review__local=True, editions__review__deleted=False),
+ filter=Q(
+ editions__review__user__local=True, editions__review__deleted=False
+ ),
),
)
.annotate(weighted=F("rating") * F("rating_count") / total_ratings)
@@ -33,11 +37,15 @@
models.Work.objects.annotate(
deviation=StdDev(
"editions__review__rating",
- filter=Q(editions__review__local=True, editions__review__deleted=False),
+ filter=Q(
+ editions__review__user__local=True, editions__review__deleted=False
+ ),
),
rating_count=Count(
"editions__review",
- filter=Q(editions__review__local=True, editions__review__deleted=False),
+ filter=Q(
+ editions__review__user__local=True, editions__review__deleted=False
+ ),
),
)
.annotate(weighted=F("deviation") * F("rating_count") / total_ratings)
|
{"golden_diff": "diff --git a/bookwyrm/templatetags/landing_page_tags.py b/bookwyrm/templatetags/landing_page_tags.py\n--- a/bookwyrm/templatetags/landing_page_tags.py\n+++ b/bookwyrm/templatetags/landing_page_tags.py\n@@ -16,11 +16,15 @@\n models.Work.objects.annotate(\n rating=Avg(\n \"editions__review__rating\",\n- filter=Q(editions__review__local=True, editions__review__deleted=False),\n+ filter=Q(\n+ editions__review__user__local=True, editions__review__deleted=False\n+ ),\n ),\n rating_count=Count(\n \"editions__review\",\n- filter=Q(editions__review__local=True, editions__review__deleted=False),\n+ filter=Q(\n+ editions__review__user__local=True, editions__review__deleted=False\n+ ),\n ),\n )\n .annotate(weighted=F(\"rating\") * F(\"rating_count\") / total_ratings)\n@@ -33,11 +37,15 @@\n models.Work.objects.annotate(\n deviation=StdDev(\n \"editions__review__rating\",\n- filter=Q(editions__review__local=True, editions__review__deleted=False),\n+ filter=Q(\n+ editions__review__user__local=True, editions__review__deleted=False\n+ ),\n ),\n rating_count=Count(\n \"editions__review\",\n- filter=Q(editions__review__local=True, editions__review__deleted=False),\n+ filter=Q(\n+ editions__review__user__local=True, editions__review__deleted=False\n+ ),\n ),\n )\n .annotate(weighted=F(\"deviation\") * F(\"rating_count\") / total_ratings)\n", "issue": "Most Beloved Book on Instance has zero local reviews\n**Describe the bug**\nOn the About this Instance page there's a cool section that shows various books, like Most Beloved, Most Contentious, Most Wanted to Read. The Most Beloved category, however, appears to be driven by federated statistics and not local statistics. A book with no local ratings is listed as the most Beloved Book because *other* instances have rated it highly. \n\n**To Reproduce**\nSteps to reproduce the behavior:\n1. Don't have a lot of ratings on your instance.\n2. Import a book that is highly rated from another (in this case 3 reviews, all 5 stars) \n3. Go to instancedomain/about and view the Most Beloved book\n\nI'm really only guessing on these steps to reproduce, it's just something I stumbled across.\n\n**Expected behavior**\nI expect the most beloved book to be beloved by my instance, not other instances. Federated statistics would also be cool, but it doesn't seem like this section is meant to represent the network. 
\n\n**Instance**\nhttps://tankie.ml/\n\n---\n\n**Smartphone (please complete the following information):**\n - Device: Samsung Galaxy A52 5G\n - OS: Android 12\n - Browser: Firefox\n - Version: 97.2.0\n \n", "before_files": [{"content": "\"\"\" template filters \"\"\"\nfrom django import template\nfrom django.db.models import Avg, StdDev, Count, F, Q\n\nfrom bookwyrm import models\n\nregister = template.Library()\n\n\[email protected]_tag(takes_context=False)\ndef get_book_superlatives():\n \"\"\"get book stats for the about page\"\"\"\n total_ratings = models.Review.objects.filter(local=True, deleted=False).count()\n data = {}\n data[\"top_rated\"] = (\n models.Work.objects.annotate(\n rating=Avg(\n \"editions__review__rating\",\n filter=Q(editions__review__local=True, editions__review__deleted=False),\n ),\n rating_count=Count(\n \"editions__review\",\n filter=Q(editions__review__local=True, editions__review__deleted=False),\n ),\n )\n .annotate(weighted=F(\"rating\") * F(\"rating_count\") / total_ratings)\n .filter(rating__gt=4, weighted__gt=0)\n .order_by(\"-weighted\")\n .first()\n )\n\n data[\"controversial\"] = (\n models.Work.objects.annotate(\n deviation=StdDev(\n \"editions__review__rating\",\n filter=Q(editions__review__local=True, editions__review__deleted=False),\n ),\n rating_count=Count(\n \"editions__review\",\n filter=Q(editions__review__local=True, editions__review__deleted=False),\n ),\n )\n .annotate(weighted=F(\"deviation\") * F(\"rating_count\") / total_ratings)\n .filter(weighted__gt=0)\n .order_by(\"-weighted\")\n .first()\n )\n\n data[\"wanted\"] = (\n models.Work.objects.annotate(\n shelf_count=Count(\n \"editions__shelves\", filter=Q(editions__shelves__identifier=\"to-read\")\n )\n )\n .order_by(\"-shelf_count\")\n .first()\n )\n return data\n\n\[email protected]_tag(takes_context=False)\ndef get_landing_books():\n \"\"\"list of books for the landing page\"\"\"\n return list(\n set(\n models.Edition.objects.filter(\n review__published_date__isnull=False,\n review__deleted=False,\n review__user__local=True,\n review__privacy__in=[\"public\", \"unlisted\"],\n )\n .exclude(cover__exact=\"\")\n .distinct()\n .order_by(\"-review__published_date\")[:6]\n )\n )\n", "path": "bookwyrm/templatetags/landing_page_tags.py"}], "after_files": [{"content": "\"\"\" template filters \"\"\"\nfrom django import template\nfrom django.db.models import Avg, StdDev, Count, F, Q\n\nfrom bookwyrm import models\n\nregister = template.Library()\n\n\[email protected]_tag(takes_context=False)\ndef get_book_superlatives():\n \"\"\"get book stats for the about page\"\"\"\n total_ratings = models.Review.objects.filter(local=True, deleted=False).count()\n data = {}\n data[\"top_rated\"] = (\n models.Work.objects.annotate(\n rating=Avg(\n \"editions__review__rating\",\n filter=Q(\n editions__review__user__local=True, editions__review__deleted=False\n ),\n ),\n rating_count=Count(\n \"editions__review\",\n filter=Q(\n editions__review__user__local=True, editions__review__deleted=False\n ),\n ),\n )\n .annotate(weighted=F(\"rating\") * F(\"rating_count\") / total_ratings)\n .filter(rating__gt=4, weighted__gt=0)\n .order_by(\"-weighted\")\n .first()\n )\n\n data[\"controversial\"] = (\n models.Work.objects.annotate(\n deviation=StdDev(\n \"editions__review__rating\",\n filter=Q(\n editions__review__user__local=True, editions__review__deleted=False\n ),\n ),\n rating_count=Count(\n \"editions__review\",\n filter=Q(\n editions__review__user__local=True, editions__review__deleted=False\n ),\n ),\n )\n 
.annotate(weighted=F(\"deviation\") * F(\"rating_count\") / total_ratings)\n .filter(weighted__gt=0)\n .order_by(\"-weighted\")\n .first()\n )\n\n data[\"wanted\"] = (\n models.Work.objects.annotate(\n shelf_count=Count(\n \"editions__shelves\", filter=Q(editions__shelves__identifier=\"to-read\")\n )\n )\n .order_by(\"-shelf_count\")\n .first()\n )\n return data\n\n\[email protected]_tag(takes_context=False)\ndef get_landing_books():\n \"\"\"list of books for the landing page\"\"\"\n return list(\n set(\n models.Edition.objects.filter(\n review__published_date__isnull=False,\n review__deleted=False,\n review__user__local=True,\n review__privacy__in=[\"public\", \"unlisted\"],\n )\n .exclude(cover__exact=\"\")\n .distinct()\n .order_by(\"-review__published_date\")[:6]\n )\n )\n", "path": "bookwyrm/templatetags/landing_page_tags.py"}]}
| 1,220 | 398 |
gh_patches_debug_5904
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-18674
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
JVM resource jar creation is broken on Mac by variation in `touch` command.
BSD touch uses `-t` to set the timestamp whereas GNU touch uses `-d`. We use / assume the GNU binary as noted here: https://github.com/pantsbuild/pants/pull/16950#discussion_r1157196330
As discovered here: https://pantsbuild.slack.com/archives/C046T6T9U/p1680604327733559?thread_ts=1680604327.733559&cid=C046T6T9U
Where the error looks like:
```
12:24:56.74 [ERROR] 1 Exception encountered:
Engine traceback:
in select
in pants.core.goals.check.check
in pants.backend.scala.goals.check.scalac_check (scalac)
in pants.backend.scala.compile.scalac.compile_scala_source
in pants.jvm.compile.compile_classpath_entries
in pants.jvm.resources.assemble_resources_jar
in pants.engine.process.fallible_to_exec_result_or_raise
Traceback (most recent call last):
File "/Users/jbenito/.cache/pants/setup/bootstrap-Darwin-x86_64/pants.1Nnv7r/install/lib/python3.9/site-packages/pants/engine/process.py", line 275, in fallible_to_exec_result_or_raise
raise ProcessExecutionFailure(
pants.engine.process.ProcessExecutionFailure: Process 'Build resources JAR for sdk/transport-security-web-lib/src/test/resources:resources' failed with exit code 1.
stdout:
stderr:
/usr/bin/touch: illegal option -- d
usage:
touch [-A [-][[hh]mm]SS] [-acfhm] [-r file] [-t [[CC]YY]MMDDhhmm[.SS]] file ...
```
It appears #16950 was cherry picked back to 2.13.1 and 2.14.0; so Pants has been broken for JVM resource jars since 2.13.1.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/jvm/resources.py`
Content:
```
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 import itertools
5 import logging
6 import shlex
7 from itertools import chain
8 from pathlib import Path
9
10 from pants.core.target_types import ResourcesFieldSet, ResourcesGeneratorFieldSet
11 from pants.core.util_rules import stripped_source_files
12 from pants.core.util_rules.source_files import SourceFilesRequest
13 from pants.core.util_rules.stripped_source_files import StrippedSourceFiles
14 from pants.core.util_rules.system_binaries import BashBinary, TouchBinary, ZipBinary
15 from pants.engine.fs import Digest, MergeDigests
16 from pants.engine.internals.selectors import MultiGet
17 from pants.engine.process import Process, ProcessResult
18 from pants.engine.rules import Get, collect_rules, rule
19 from pants.engine.target import SourcesField
20 from pants.engine.unions import UnionRule
21 from pants.jvm import compile
22 from pants.jvm.compile import (
23 ClasspathDependenciesRequest,
24 ClasspathEntry,
25 ClasspathEntryRequest,
26 ClasspathEntryRequests,
27 CompileResult,
28 FallibleClasspathEntries,
29 FallibleClasspathEntry,
30 )
31 from pants.jvm.subsystems import JvmSubsystem
32 from pants.util.logging import LogLevel
33
34 logger = logging.getLogger(__name__)
35
36
37 class JvmResourcesRequest(ClasspathEntryRequest):
38 field_sets = (
39 ResourcesFieldSet,
40 ResourcesGeneratorFieldSet,
41 )
42
43
44 @rule(desc="Assemble resources")
45 async def assemble_resources_jar(
46 zip: ZipBinary,
47 bash: BashBinary,
48 touch: TouchBinary,
49 jvm: JvmSubsystem,
50 request: JvmResourcesRequest,
51 ) -> FallibleClasspathEntry:
52 # Request the component's direct dependency classpath, and additionally any prerequisite.
53 # Filter out any dependencies that are generated by our current target so that each resource
54 # only appears in a single input JAR.
55 # NOTE: Generated dependencies will have the same dependencies as the current target, so we
56 # don't need to inspect those dependencies.
57 optional_prereq_request = [*((request.prerequisite,) if request.prerequisite else ())]
58 fallibles = await MultiGet(
59 Get(FallibleClasspathEntries, ClasspathEntryRequests(optional_prereq_request)),
60 Get(FallibleClasspathEntries, ClasspathDependenciesRequest(request, ignore_generated=True)),
61 )
62 direct_dependency_classpath_entries = FallibleClasspathEntries(
63 itertools.chain(*fallibles)
64 ).if_all_succeeded()
65
66 if direct_dependency_classpath_entries is None:
67 return FallibleClasspathEntry(
68 description=str(request.component),
69 result=CompileResult.DEPENDENCY_FAILED,
70 output=None,
71 exit_code=1,
72 )
73
74 source_files = await Get(
75 StrippedSourceFiles,
76 SourceFilesRequest([tgt.get(SourcesField) for tgt in request.component.members]),
77 )
78
79 output_filename = f"{request.component.representative.address.path_safe_spec}.resources.jar"
80 output_files = [output_filename]
81
82 # #16231: Valid JAR files need the directories of each resource file as well as the files
83 # themselves.
84
85 paths = {Path(filename) for filename in source_files.snapshot.files}
86 directories = {parent for path in paths for parent in path.parents}
87 input_files = {str(path) for path in chain(paths, directories)}
88
89 resources_jar_input_digest = source_files.snapshot.digest
90
91 input_filenames = " ".join(shlex.quote(file) for file in sorted(input_files))
92
93 resources_jar_result = await Get(
94 ProcessResult,
95 Process(
96 argv=[
97 bash.path,
98 "-c",
99 " ".join(
100 [
101 touch.path,
102 "-d 1980-01-01T00:00:00Z",
103 input_filenames,
104 "&&",
105 "TZ=UTC",
106 zip.path,
107 "-oX",
108 output_filename,
109 input_filenames,
110 ]
111 ),
112 ],
113 description=f"Build resources JAR for {request.component}",
114 input_digest=resources_jar_input_digest,
115 output_files=output_files,
116 level=LogLevel.DEBUG,
117 ),
118 )
119
120 output_digest = resources_jar_result.output_digest
121 cpe = ClasspathEntry(output_digest, output_files, [])
122
123 merged_cpe_digest = await Get(
124 Digest,
125 MergeDigests(chain((cpe.digest,), (i.digest for i in direct_dependency_classpath_entries))),
126 )
127
128 merged_cpe = ClasspathEntry.merge(
129 digest=merged_cpe_digest, entries=[cpe, *direct_dependency_classpath_entries]
130 )
131
132 return FallibleClasspathEntry(output_filename, CompileResult.SUCCEEDED, merged_cpe, 0)
133
134
135 def rules():
136 return [
137 *collect_rules(),
138 *compile.rules(),
139 *stripped_source_files.rules(),
140 UnionRule(ClasspathEntryRequest, JvmResourcesRequest),
141 ]
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/python/pants/jvm/resources.py b/src/python/pants/jvm/resources.py
--- a/src/python/pants/jvm/resources.py
+++ b/src/python/pants/jvm/resources.py
@@ -98,8 +98,9 @@
"-c",
" ".join(
[
+ "TZ=UTC",
touch.path,
- "-d 1980-01-01T00:00:00Z",
+ "-t 198001010000.00",
input_filenames,
"&&",
"TZ=UTC",
|
{"golden_diff": "diff --git a/src/python/pants/jvm/resources.py b/src/python/pants/jvm/resources.py\n--- a/src/python/pants/jvm/resources.py\n+++ b/src/python/pants/jvm/resources.py\n@@ -98,8 +98,9 @@\n \"-c\",\n \" \".join(\n [\n+ \"TZ=UTC\",\n touch.path,\n- \"-d 1980-01-01T00:00:00Z\",\n+ \"-t 198001010000.00\",\n input_filenames,\n \"&&\",\n \"TZ=UTC\",\n", "issue": "JVM resource jar creation is broken on Mac by variation in `touch` command.\nBSD touch uses `-t` to set the timestamp whereas GNU touch uses `-d`. We use / assume the GNU binary as noted here: https://github.com/pantsbuild/pants/pull/16950#discussion_r1157196330\n\nAs discovered here: https://pantsbuild.slack.com/archives/C046T6T9U/p1680604327733559?thread_ts=1680604327.733559&cid=C046T6T9U\n\nWhere the error looks like:\n```\n 12:24:56.74 [ERROR] 1 Exception encountered:\n \n Engine traceback:\n in select\n in pants.core.goals.check.check\n in pants.backend.scala.goals.check.scalac_check (scalac)\n in pants.backend.scala.compile.scalac.compile_scala_source\n in pants.jvm.compile.compile_classpath_entries\n in pants.jvm.resources.assemble_resources_jar\n in pants.engine.process.fallible_to_exec_result_or_raise\n Traceback (most recent call last):\n File \"/Users/jbenito/.cache/pants/setup/bootstrap-Darwin-x86_64/pants.1Nnv7r/install/lib/python3.9/site-packages/pants/engine/process.py\", line 275, in fallible_to_exec_result_or_raise\n raise ProcessExecutionFailure(\n pants.engine.process.ProcessExecutionFailure: Process 'Build resources JAR for sdk/transport-security-web-lib/src/test/resources:resources' failed with exit code 1.\n stdout:\n \n stderr:\n /usr/bin/touch: illegal option -- d\n usage:\n touch [-A [-][[hh]mm]SS] [-acfhm] [-r file] [-t [[CC]YY]MMDDhhmm[.SS]] file ...\n```\n\nIt appears #16950 was cherry picked back to 2.13.1 and 2.14.0; so Pants has been broken for JVM resource jars since 2.13.1.\n\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport itertools\nimport logging\nimport shlex\nfrom itertools import chain\nfrom pathlib import Path\n\nfrom pants.core.target_types import ResourcesFieldSet, ResourcesGeneratorFieldSet\nfrom pants.core.util_rules import stripped_source_files\nfrom pants.core.util_rules.source_files import SourceFilesRequest\nfrom pants.core.util_rules.stripped_source_files import StrippedSourceFiles\nfrom pants.core.util_rules.system_binaries import BashBinary, TouchBinary, ZipBinary\nfrom pants.engine.fs import Digest, MergeDigests\nfrom pants.engine.internals.selectors import MultiGet\nfrom pants.engine.process import Process, ProcessResult\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.engine.target import SourcesField\nfrom pants.engine.unions import UnionRule\nfrom pants.jvm import compile\nfrom pants.jvm.compile import (\n ClasspathDependenciesRequest,\n ClasspathEntry,\n ClasspathEntryRequest,\n ClasspathEntryRequests,\n CompileResult,\n FallibleClasspathEntries,\n FallibleClasspathEntry,\n)\nfrom pants.jvm.subsystems import JvmSubsystem\nfrom pants.util.logging import LogLevel\n\nlogger = logging.getLogger(__name__)\n\n\nclass JvmResourcesRequest(ClasspathEntryRequest):\n field_sets = (\n ResourcesFieldSet,\n ResourcesGeneratorFieldSet,\n )\n\n\n@rule(desc=\"Assemble resources\")\nasync def assemble_resources_jar(\n zip: ZipBinary,\n bash: BashBinary,\n touch: TouchBinary,\n jvm: JvmSubsystem,\n request: JvmResourcesRequest,\n) -> 
FallibleClasspathEntry:\n # Request the component's direct dependency classpath, and additionally any prerequisite.\n # Filter out any dependencies that are generated by our current target so that each resource\n # only appears in a single input JAR.\n # NOTE: Generated dependencies will have the same dependencies as the current target, so we\n # don't need to inspect those dependencies.\n optional_prereq_request = [*((request.prerequisite,) if request.prerequisite else ())]\n fallibles = await MultiGet(\n Get(FallibleClasspathEntries, ClasspathEntryRequests(optional_prereq_request)),\n Get(FallibleClasspathEntries, ClasspathDependenciesRequest(request, ignore_generated=True)),\n )\n direct_dependency_classpath_entries = FallibleClasspathEntries(\n itertools.chain(*fallibles)\n ).if_all_succeeded()\n\n if direct_dependency_classpath_entries is None:\n return FallibleClasspathEntry(\n description=str(request.component),\n result=CompileResult.DEPENDENCY_FAILED,\n output=None,\n exit_code=1,\n )\n\n source_files = await Get(\n StrippedSourceFiles,\n SourceFilesRequest([tgt.get(SourcesField) for tgt in request.component.members]),\n )\n\n output_filename = f\"{request.component.representative.address.path_safe_spec}.resources.jar\"\n output_files = [output_filename]\n\n # #16231: Valid JAR files need the directories of each resource file as well as the files\n # themselves.\n\n paths = {Path(filename) for filename in source_files.snapshot.files}\n directories = {parent for path in paths for parent in path.parents}\n input_files = {str(path) for path in chain(paths, directories)}\n\n resources_jar_input_digest = source_files.snapshot.digest\n\n input_filenames = \" \".join(shlex.quote(file) for file in sorted(input_files))\n\n resources_jar_result = await Get(\n ProcessResult,\n Process(\n argv=[\n bash.path,\n \"-c\",\n \" \".join(\n [\n touch.path,\n \"-d 1980-01-01T00:00:00Z\",\n input_filenames,\n \"&&\",\n \"TZ=UTC\",\n zip.path,\n \"-oX\",\n output_filename,\n input_filenames,\n ]\n ),\n ],\n description=f\"Build resources JAR for {request.component}\",\n input_digest=resources_jar_input_digest,\n output_files=output_files,\n level=LogLevel.DEBUG,\n ),\n )\n\n output_digest = resources_jar_result.output_digest\n cpe = ClasspathEntry(output_digest, output_files, [])\n\n merged_cpe_digest = await Get(\n Digest,\n MergeDigests(chain((cpe.digest,), (i.digest for i in direct_dependency_classpath_entries))),\n )\n\n merged_cpe = ClasspathEntry.merge(\n digest=merged_cpe_digest, entries=[cpe, *direct_dependency_classpath_entries]\n )\n\n return FallibleClasspathEntry(output_filename, CompileResult.SUCCEEDED, merged_cpe, 0)\n\n\ndef rules():\n return [\n *collect_rules(),\n *compile.rules(),\n *stripped_source_files.rules(),\n UnionRule(ClasspathEntryRequest, JvmResourcesRequest),\n ]\n", "path": "src/python/pants/jvm/resources.py"}], "after_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nimport itertools\nimport logging\nimport shlex\nfrom itertools import chain\nfrom pathlib import Path\n\nfrom pants.core.target_types import ResourcesFieldSet, ResourcesGeneratorFieldSet\nfrom pants.core.util_rules import stripped_source_files\nfrom pants.core.util_rules.source_files import SourceFilesRequest\nfrom pants.core.util_rules.stripped_source_files import StrippedSourceFiles\nfrom pants.core.util_rules.system_binaries import BashBinary, TouchBinary, ZipBinary\nfrom pants.engine.fs import 
Digest, MergeDigests\nfrom pants.engine.internals.selectors import MultiGet\nfrom pants.engine.process import Process, ProcessResult\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.engine.target import SourcesField\nfrom pants.engine.unions import UnionRule\nfrom pants.jvm import compile\nfrom pants.jvm.compile import (\n ClasspathDependenciesRequest,\n ClasspathEntry,\n ClasspathEntryRequest,\n ClasspathEntryRequests,\n CompileResult,\n FallibleClasspathEntries,\n FallibleClasspathEntry,\n)\nfrom pants.jvm.subsystems import JvmSubsystem\nfrom pants.util.logging import LogLevel\n\nlogger = logging.getLogger(__name__)\n\n\nclass JvmResourcesRequest(ClasspathEntryRequest):\n field_sets = (\n ResourcesFieldSet,\n ResourcesGeneratorFieldSet,\n )\n\n\n@rule(desc=\"Assemble resources\")\nasync def assemble_resources_jar(\n zip: ZipBinary,\n bash: BashBinary,\n touch: TouchBinary,\n jvm: JvmSubsystem,\n request: JvmResourcesRequest,\n) -> FallibleClasspathEntry:\n # Request the component's direct dependency classpath, and additionally any prerequisite.\n # Filter out any dependencies that are generated by our current target so that each resource\n # only appears in a single input JAR.\n # NOTE: Generated dependencies will have the same dependencies as the current target, so we\n # don't need to inspect those dependencies.\n optional_prereq_request = [*((request.prerequisite,) if request.prerequisite else ())]\n fallibles = await MultiGet(\n Get(FallibleClasspathEntries, ClasspathEntryRequests(optional_prereq_request)),\n Get(FallibleClasspathEntries, ClasspathDependenciesRequest(request, ignore_generated=True)),\n )\n direct_dependency_classpath_entries = FallibleClasspathEntries(\n itertools.chain(*fallibles)\n ).if_all_succeeded()\n\n if direct_dependency_classpath_entries is None:\n return FallibleClasspathEntry(\n description=str(request.component),\n result=CompileResult.DEPENDENCY_FAILED,\n output=None,\n exit_code=1,\n )\n\n source_files = await Get(\n StrippedSourceFiles,\n SourceFilesRequest([tgt.get(SourcesField) for tgt in request.component.members]),\n )\n\n output_filename = f\"{request.component.representative.address.path_safe_spec}.resources.jar\"\n output_files = [output_filename]\n\n # #16231: Valid JAR files need the directories of each resource file as well as the files\n # themselves.\n\n paths = {Path(filename) for filename in source_files.snapshot.files}\n directories = {parent for path in paths for parent in path.parents}\n input_files = {str(path) for path in chain(paths, directories)}\n\n resources_jar_input_digest = source_files.snapshot.digest\n\n input_filenames = \" \".join(shlex.quote(file) for file in sorted(input_files))\n\n resources_jar_result = await Get(\n ProcessResult,\n Process(\n argv=[\n bash.path,\n \"-c\",\n \" \".join(\n [\n \"TZ=UTC\",\n touch.path,\n \"-t 198001010000.00\",\n input_filenames,\n \"&&\",\n \"TZ=UTC\",\n zip.path,\n \"-oX\",\n output_filename,\n input_filenames,\n ]\n ),\n ],\n description=f\"Build resources JAR for {request.component}\",\n input_digest=resources_jar_input_digest,\n output_files=output_files,\n level=LogLevel.DEBUG,\n ),\n )\n\n output_digest = resources_jar_result.output_digest\n cpe = ClasspathEntry(output_digest, output_files, [])\n\n merged_cpe_digest = await Get(\n Digest,\n MergeDigests(chain((cpe.digest,), (i.digest for i in direct_dependency_classpath_entries))),\n )\n\n merged_cpe = ClasspathEntry.merge(\n digest=merged_cpe_digest, entries=[cpe, *direct_dependency_classpath_entries]\n )\n\n 
return FallibleClasspathEntry(output_filename, CompileResult.SUCCEEDED, merged_cpe, 0)\n\n\ndef rules():\n return [\n *collect_rules(),\n *compile.rules(),\n *stripped_source_files.rules(),\n UnionRule(ClasspathEntryRequest, JvmResourcesRequest),\n ]\n", "path": "src/python/pants/jvm/resources.py"}]}
| 2,095 | 139 |
gh_patches_debug_43760
|
rasdani/github-patches
|
git_diff
|
Textualize__textual-588
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Activated button animations
The Button widget should have a visual indicator that it was clicked. Material buttons do this. The reasoning is that if an action doesn't immediately update the UI, the user tends to click again to make sure it works.
I'm thinking a brief pulse animation, or 500ms flash. Something subtly enough to not be irritating, but still convey the active state.
this may be achieved by adding an `-active` class to the Button to allow a dev to define the active animation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/textual/widgets/_button.py`
Content:
```
1 from __future__ import annotations
2
3 import sys
4 from typing import cast
5
6 if sys.version_info >= (3, 8):
7 from typing import Literal
8 else:
9 from typing_extensions import Literal # pragma: no cover
10
11 from rich.console import RenderableType
12 from rich.text import Text, TextType
13
14 from .. import events
15 from ..css._error_tools import friendly_list
16 from ..message import Message
17 from ..reactive import Reactive
18 from ..widget import Widget
19
20 ButtonVariant = Literal["default", "success", "warning", "error"]
21 _VALID_BUTTON_VARIANTS = {"default", "success", "warning", "error"}
22
23
24 class InvalidButtonVariant(Exception):
25 pass
26
27
28 class Button(Widget, can_focus=True):
29 """A simple clickable button."""
30
31 CSS = """
32 Button {
33 width: auto;
34 height: 3;
35
36 background: $primary;
37 color: $text-primary;
38 border: tall $primary-lighten-3;
39
40 content-align: center middle;
41 margin: 1 0;
42 align: center middle;
43 text-style: bold;
44 }
45
46 Button:hover {
47 background: $primary-darken-2;
48 color: $text-primary-darken-2;
49 border: tall $primary-lighten-1;
50 }
51
52 .-dark-mode Button {
53 background: $background;
54 color: $primary-lighten-2;
55 border: tall white $primary-lighten-2;
56 }
57
58 .-dark-mode Button:hover {
59 background: $surface;
60 }
61
62 /* Success variant */
63 Button.-success {
64 background: $success;
65 color: $text-success;
66 border: tall $success-lighten-3;
67 }
68
69 Button.-success:hover {
70 background: $success-darken-1;
71 color: $text-success-darken-1;
72 border: tall $success-lighten-2;
73 }
74
75 .-dark-mode Button.-success {
76 background: $success;
77 color: $text-success;
78 border: tall $success-lighten-3;
79 }
80
81 .-dark-mode Button.-success:hover {
82 background: $success-darken-1;
83 color: $text-success-darken-1;
84 border: tall $success-lighten-3;
85 }
86
87 /* Warning variant */
88 Button.-warning {
89 background: $warning;
90 color: $text-warning;
91 border: tall $warning-lighten-3;
92 }
93
94 Button.-warning:hover {
95 background: $warning-darken-1;
96 color: $text-warning-darken-1;
97 border: tall $warning-lighten-3;
98 }
99
100 .-dark-mode Button.-warning {
101 background: $warning;
102 color: $text-warning;
103 border: tall $warning-lighten-3;
104 }
105
106 .-dark-mode Button.-warning:hover {
107 background: $warning-darken-1;
108 color: $text-warning-darken-1;
109 border: tall $warning-lighten-3;
110 }
111
112 /* Error variant */
113 Button.-error {
114 background: $error;
115 color: $text-error;
116 border: tall $error-lighten-3;
117 }
118
119 Button.-error:hover {
120 background: $error-darken-1;
121 color: $text-error-darken-1;
122 border: tall $error-lighten-3;
123 }
124
125 .-dark-mode Button.-error {
126 background: $error;
127 color: $text-error;
128 border: tall $error-lighten-3;
129 }
130
131 .-dark-mode Button.-error:hover {
132 background: $error-darken-1;
133 color: $text-error-darken-1;
134 border: tall $error-lighten-3;
135 }
136
137 App.-show-focus Button:focus {
138 tint: $accent 20%;
139 }
140 """
141
142 class Pressed(Message, bubble=True):
143 @property
144 def button(self) -> Button:
145 return cast(Button, self.sender)
146
147 def __init__(
148 self,
149 label: TextType | None = None,
150 disabled: bool = False,
151 variant: ButtonVariant = "default",
152 *,
153 name: str | None = None,
154 id: str | None = None,
155 classes: str | None = None,
156 ):
157 """Create a Button widget.
158
159 Args:
160 label (str): The text that appears within the button.
161 disabled (bool): Whether the button is disabled or not.
162 variant (ButtonVariant): The variant of the button.
163 name: The name of the button.
164 id: The ID of the button in the DOM.
165 classes: The CSS classes of the button.
166 """
167 super().__init__(name=name, id=id, classes=classes)
168
169 if label is None:
170 label = self.css_identifier_styled
171
172 self.label: Text = label
173
174 self.disabled = disabled
175 if disabled:
176 self.add_class("-disabled")
177
178 if variant in _VALID_BUTTON_VARIANTS:
179 if variant != "default":
180 self.add_class(f"-{variant}")
181
182 else:
183 raise InvalidButtonVariant(
184 f"Valid button variants are {friendly_list(_VALID_BUTTON_VARIANTS)}"
185 )
186
187 label: Reactive[RenderableType] = Reactive("")
188
189 def validate_label(self, label: RenderableType) -> RenderableType:
190 """Parse markup for self.label"""
191 if isinstance(label, str):
192 return Text.from_markup(label)
193 return label
194
195 def render(self) -> RenderableType:
196 label = self.label.copy()
197 label.stylize(self.text_style)
198 return label
199
200 async def on_click(self, event: events.Click) -> None:
201 event.stop()
202 if not self.disabled:
203 await self.emit(Button.Pressed(self))
204
205 async def on_key(self, event: events.Key) -> None:
206 if event.key == "enter" and not self.disabled:
207 await self.emit(Button.Pressed(self))
208
209 @classmethod
210 def success(
211 cls,
212 label: TextType | None = None,
213 disabled: bool = False,
214 *,
215 name: str | None = None,
216 id: str | None = None,
217 classes: str | None = None,
218 ) -> Button:
219 """Utility constructor for creating a success Button variant.
220
221 Args:
222 label (str): The text that appears within the button.
223 disabled (bool): Whether the button is disabled or not.
224 name: The name of the button.
225 id: The ID of the button in the DOM.
226 classes: The CSS classes of the button.
227
228 Returns:
229 Button: A Button widget of the 'success' variant.
230 """
231 return Button(
232 label=label,
233 disabled=disabled,
234 variant="success",
235 name=name,
236 id=id,
237 classes=classes,
238 )
239
240 @classmethod
241 def warning(
242 cls,
243 label: TextType | None = None,
244 disabled: bool = False,
245 *,
246 name: str | None = None,
247 id: str | None = None,
248 classes: str | None = None,
249 ) -> Button:
250 """Utility constructor for creating a warning Button variant.
251
252 Args:
253 label (str): The text that appears within the button.
254 disabled (bool): Whether the button is disabled or not.
255 name: The name of the button.
256 id: The ID of the button in the DOM.
257 classes: The CSS classes of the button.
258
259 Returns:
260 Button: A Button widget of the 'warning' variant.
261 """
262 return Button(
263 label=label,
264 disabled=disabled,
265 variant="warning",
266 name=name,
267 id=id,
268 classes=classes,
269 )
270
271 @classmethod
272 def error(
273 cls,
274 label: TextType | None = None,
275 disabled: bool = False,
276 *,
277 name: str | None = None,
278 id: str | None = None,
279 classes: str | None = None,
280 ) -> Button:
281 """Utility constructor for creating an error Button variant.
282
283 Args:
284 label (str): The text that appears within the button.
285 disabled (bool): Whether the button is disabled or not.
286 name: The name of the button.
287 id: The ID of the button in the DOM.
288 classes: The CSS classes of the button.
289
290 Returns:
291 Button: A Button widget of the 'error' variant.
292 """
293 return Button(
294 label=label,
295 disabled=disabled,
296 variant="error",
297 name=name,
298 id=id,
299 classes=classes,
300 )
301
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/textual/widgets/_button.py b/src/textual/widgets/_button.py
--- a/src/textual/widgets/_button.py
+++ b/src/textual/widgets/_button.py
@@ -1,6 +1,7 @@
from __future__ import annotations
import sys
+from functools import partial
from typing import cast
if sys.version_info >= (3, 8):
@@ -41,6 +42,8 @@
margin: 1 0;
align: center middle;
text-style: bold;
+
+ transition: background 0.1;/* for "active" effect */
}
Button:hover {
@@ -49,6 +52,10 @@
border: tall $primary-lighten-1;
}
+ Button.-active {
+ background: $primary-lighten-1;
+ }
+
.-dark-mode Button {
background: $background;
color: $primary-lighten-2;
@@ -59,6 +66,10 @@
background: $surface;
}
+ .-dark-mode Button.-active {
+ background: $background-lighten-3;
+ }
+
/* Success variant */
Button.-success {
background: $success;
@@ -72,6 +83,10 @@
border: tall $success-lighten-2;
}
+ Button.-success.-active {
+ background: $success-lighten-1;
+ }
+
.-dark-mode Button.-success {
background: $success;
color: $text-success;
@@ -84,6 +99,10 @@
border: tall $success-lighten-3;
}
+ .-dark-mode Button.-success.-active {
+ background: $success-lighten-1;
+ }
+
/* Warning variant */
Button.-warning {
background: $warning;
@@ -97,6 +116,10 @@
border: tall $warning-lighten-3;
}
+ Button.-warning.-active {
+ background: $warning;
+ }
+
.-dark-mode Button.-warning {
background: $warning;
color: $text-warning;
@@ -109,6 +132,10 @@
border: tall $warning-lighten-3;
}
+ .-dark-mode Button.-warning.-active {
+ background: $warning-lighten-1;
+ }
+
/* Error variant */
Button.-error {
background: $error;
@@ -122,6 +149,10 @@
border: tall $error-lighten-3;
}
+ Button.-error.-active {
+ background: $error;
+ }
+
.-dark-mode Button.-error {
background: $error;
color: $text-error;
@@ -134,11 +165,18 @@
border: tall $error-lighten-3;
}
+ .-dark-mode Button.-error.-active {
+ background: $error;
+ }
+
App.-show-focus Button:focus {
tint: $accent 20%;
}
"""
+ ACTIVE_EFFECT_DURATION = 0.3
+ """When buttons are clicked they get the `-active` class for this duration (in seconds)"""
+
class Pressed(Message, bubble=True):
@property
def button(self) -> Button:
@@ -199,8 +237,15 @@
async def on_click(self, event: events.Click) -> None:
event.stop()
- if not self.disabled:
- await self.emit(Button.Pressed(self))
+ if self.disabled:
+ return
+ # Manage the "active" effect:
+ self.add_class("-active")
+ self.set_timer(
+ self.ACTIVE_EFFECT_DURATION, partial(self.remove_class, "-active")
+ )
+ # ...and let other components know that we've just been clicked:
+ await self.emit(Button.Pressed(self))
async def on_key(self, event: events.Key) -> None:
if event.key == "enter" and not self.disabled:
|
{"golden_diff": "diff --git a/src/textual/widgets/_button.py b/src/textual/widgets/_button.py\n--- a/src/textual/widgets/_button.py\n+++ b/src/textual/widgets/_button.py\n@@ -1,6 +1,7 @@\n from __future__ import annotations\n \n import sys\n+from functools import partial\n from typing import cast\n \n if sys.version_info >= (3, 8):\n@@ -41,6 +42,8 @@\n margin: 1 0;\n align: center middle;\n text-style: bold;\n+\n+ transition: background 0.1;/* for \"active\" effect */\n }\n \n Button:hover {\n@@ -49,6 +52,10 @@\n border: tall $primary-lighten-1;\n }\n \n+ Button.-active {\n+ background: $primary-lighten-1;\n+ }\n+\n .-dark-mode Button {\n background: $background;\n color: $primary-lighten-2;\n@@ -59,6 +66,10 @@\n background: $surface;\n }\n \n+ .-dark-mode Button.-active {\n+ background: $background-lighten-3;\n+ }\n+\n /* Success variant */\n Button.-success {\n background: $success;\n@@ -72,6 +83,10 @@\n border: tall $success-lighten-2;\n }\n \n+ Button.-success.-active {\n+ background: $success-lighten-1;\n+ }\n+\n .-dark-mode Button.-success {\n background: $success;\n color: $text-success;\n@@ -84,6 +99,10 @@\n border: tall $success-lighten-3;\n }\n \n+ .-dark-mode Button.-success.-active {\n+ background: $success-lighten-1;\n+ }\n+\n /* Warning variant */\n Button.-warning {\n background: $warning;\n@@ -97,6 +116,10 @@\n border: tall $warning-lighten-3;\n }\n \n+ Button.-warning.-active {\n+ background: $warning;\n+ }\n+\n .-dark-mode Button.-warning {\n background: $warning;\n color: $text-warning;\n@@ -109,6 +132,10 @@\n border: tall $warning-lighten-3;\n }\n \n+ .-dark-mode Button.-warning.-active {\n+ background: $warning-lighten-1;\n+ }\n+\n /* Error variant */\n Button.-error {\n background: $error;\n@@ -122,6 +149,10 @@\n border: tall $error-lighten-3;\n }\n \n+ Button.-error.-active {\n+ background: $error;\n+ }\n+\n .-dark-mode Button.-error {\n background: $error;\n color: $text-error;\n@@ -134,11 +165,18 @@\n border: tall $error-lighten-3;\n }\n \n+ .-dark-mode Button.-error.-active {\n+ background: $error;\n+ }\n+\n App.-show-focus Button:focus {\n tint: $accent 20%;\n }\n \"\"\"\n \n+ ACTIVE_EFFECT_DURATION = 0.3\n+ \"\"\"When buttons are clicked they get the `-active` class for this duration (in seconds)\"\"\"\n+\n class Pressed(Message, bubble=True):\n @property\n def button(self) -> Button:\n@@ -199,8 +237,15 @@\n \n async def on_click(self, event: events.Click) -> None:\n event.stop()\n- if not self.disabled:\n- await self.emit(Button.Pressed(self))\n+ if self.disabled:\n+ return\n+ # Manage the \"active\" effect:\n+ self.add_class(\"-active\")\n+ self.set_timer(\n+ self.ACTIVE_EFFECT_DURATION, partial(self.remove_class, \"-active\")\n+ )\n+ # ...and let other components know that we've just been clicked:\n+ await self.emit(Button.Pressed(self))\n \n async def on_key(self, event: events.Key) -> None:\n if event.key == \"enter\" and not self.disabled:\n", "issue": "Activated button animations\nThe Button widget should have a visual indicator that it was clicked. Material buttons do this. The reasoning is that if an action doesn't immediately update the UI, the user tends to click again to make sure it works.\r\n\r\nI'm thinking a brief pulse animation, or 500ms flash. 
Something subtly enough to not be irritating, but still convey the active state.\r\n\r\nthis may be achieved by adding an `-active` class to the Button to allow a dev to define the active animation.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport sys\nfrom typing import cast\n\nif sys.version_info >= (3, 8):\n from typing import Literal\nelse:\n from typing_extensions import Literal # pragma: no cover\n\nfrom rich.console import RenderableType\nfrom rich.text import Text, TextType\n\nfrom .. import events\nfrom ..css._error_tools import friendly_list\nfrom ..message import Message\nfrom ..reactive import Reactive\nfrom ..widget import Widget\n\nButtonVariant = Literal[\"default\", \"success\", \"warning\", \"error\"]\n_VALID_BUTTON_VARIANTS = {\"default\", \"success\", \"warning\", \"error\"}\n\n\nclass InvalidButtonVariant(Exception):\n pass\n\n\nclass Button(Widget, can_focus=True):\n \"\"\"A simple clickable button.\"\"\"\n\n CSS = \"\"\"\n Button {\n width: auto;\n height: 3;\n\n background: $primary;\n color: $text-primary;\n border: tall $primary-lighten-3;\n\n content-align: center middle;\n margin: 1 0;\n align: center middle;\n text-style: bold;\n }\n\n Button:hover {\n background: $primary-darken-2;\n color: $text-primary-darken-2;\n border: tall $primary-lighten-1;\n }\n\n .-dark-mode Button {\n background: $background;\n color: $primary-lighten-2;\n border: tall white $primary-lighten-2;\n }\n\n .-dark-mode Button:hover {\n background: $surface;\n }\n\n /* Success variant */\n Button.-success {\n background: $success;\n color: $text-success;\n border: tall $success-lighten-3;\n }\n\n Button.-success:hover {\n background: $success-darken-1;\n color: $text-success-darken-1;\n border: tall $success-lighten-2;\n }\n\n .-dark-mode Button.-success {\n background: $success;\n color: $text-success;\n border: tall $success-lighten-3;\n }\n\n .-dark-mode Button.-success:hover {\n background: $success-darken-1;\n color: $text-success-darken-1;\n border: tall $success-lighten-3;\n }\n\n /* Warning variant */\n Button.-warning {\n background: $warning;\n color: $text-warning;\n border: tall $warning-lighten-3;\n }\n\n Button.-warning:hover {\n background: $warning-darken-1;\n color: $text-warning-darken-1;\n border: tall $warning-lighten-3;\n }\n\n .-dark-mode Button.-warning {\n background: $warning;\n color: $text-warning;\n border: tall $warning-lighten-3;\n }\n\n .-dark-mode Button.-warning:hover {\n background: $warning-darken-1;\n color: $text-warning-darken-1;\n border: tall $warning-lighten-3;\n }\n\n /* Error variant */\n Button.-error {\n background: $error;\n color: $text-error;\n border: tall $error-lighten-3;\n }\n\n Button.-error:hover {\n background: $error-darken-1;\n color: $text-error-darken-1;\n border: tall $error-lighten-3;\n }\n\n .-dark-mode Button.-error {\n background: $error;\n color: $text-error;\n border: tall $error-lighten-3;\n }\n\n .-dark-mode Button.-error:hover {\n background: $error-darken-1;\n color: $text-error-darken-1;\n border: tall $error-lighten-3;\n }\n\n App.-show-focus Button:focus {\n tint: $accent 20%;\n }\n \"\"\"\n\n class Pressed(Message, bubble=True):\n @property\n def button(self) -> Button:\n return cast(Button, self.sender)\n\n def __init__(\n self,\n label: TextType | None = None,\n disabled: bool = False,\n variant: ButtonVariant = \"default\",\n *,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n ):\n \"\"\"Create a Button widget.\n\n Args:\n label (str): The text that 
appears within the button.\n disabled (bool): Whether the button is disabled or not.\n variant (ButtonVariant): The variant of the button.\n name: The name of the button.\n id: The ID of the button in the DOM.\n classes: The CSS classes of the button.\n \"\"\"\n super().__init__(name=name, id=id, classes=classes)\n\n if label is None:\n label = self.css_identifier_styled\n\n self.label: Text = label\n\n self.disabled = disabled\n if disabled:\n self.add_class(\"-disabled\")\n\n if variant in _VALID_BUTTON_VARIANTS:\n if variant != \"default\":\n self.add_class(f\"-{variant}\")\n\n else:\n raise InvalidButtonVariant(\n f\"Valid button variants are {friendly_list(_VALID_BUTTON_VARIANTS)}\"\n )\n\n label: Reactive[RenderableType] = Reactive(\"\")\n\n def validate_label(self, label: RenderableType) -> RenderableType:\n \"\"\"Parse markup for self.label\"\"\"\n if isinstance(label, str):\n return Text.from_markup(label)\n return label\n\n def render(self) -> RenderableType:\n label = self.label.copy()\n label.stylize(self.text_style)\n return label\n\n async def on_click(self, event: events.Click) -> None:\n event.stop()\n if not self.disabled:\n await self.emit(Button.Pressed(self))\n\n async def on_key(self, event: events.Key) -> None:\n if event.key == \"enter\" and not self.disabled:\n await self.emit(Button.Pressed(self))\n\n @classmethod\n def success(\n cls,\n label: TextType | None = None,\n disabled: bool = False,\n *,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n ) -> Button:\n \"\"\"Utility constructor for creating a success Button variant.\n\n Args:\n label (str): The text that appears within the button.\n disabled (bool): Whether the button is disabled or not.\n name: The name of the button.\n id: The ID of the button in the DOM.\n classes: The CSS classes of the button.\n\n Returns:\n Button: A Button widget of the 'success' variant.\n \"\"\"\n return Button(\n label=label,\n disabled=disabled,\n variant=\"success\",\n name=name,\n id=id,\n classes=classes,\n )\n\n @classmethod\n def warning(\n cls,\n label: TextType | None = None,\n disabled: bool = False,\n *,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n ) -> Button:\n \"\"\"Utility constructor for creating a warning Button variant.\n\n Args:\n label (str): The text that appears within the button.\n disabled (bool): Whether the button is disabled or not.\n name: The name of the button.\n id: The ID of the button in the DOM.\n classes: The CSS classes of the button.\n\n Returns:\n Button: A Button widget of the 'warning' variant.\n \"\"\"\n return Button(\n label=label,\n disabled=disabled,\n variant=\"warning\",\n name=name,\n id=id,\n classes=classes,\n )\n\n @classmethod\n def error(\n cls,\n label: TextType | None = None,\n disabled: bool = False,\n *,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n ) -> Button:\n \"\"\"Utility constructor for creating an error Button variant.\n\n Args:\n label (str): The text that appears within the button.\n disabled (bool): Whether the button is disabled or not.\n name: The name of the button.\n id: The ID of the button in the DOM.\n classes: The CSS classes of the button.\n\n Returns:\n Button: A Button widget of the 'error' variant.\n \"\"\"\n return Button(\n label=label,\n disabled=disabled,\n variant=\"error\",\n name=name,\n id=id,\n classes=classes,\n )\n", "path": "src/textual/widgets/_button.py"}], "after_files": [{"content": "from __future__ import 
annotations\n\nimport sys\nfrom functools import partial\nfrom typing import cast\n\nif sys.version_info >= (3, 8):\n from typing import Literal\nelse:\n from typing_extensions import Literal # pragma: no cover\n\nfrom rich.console import RenderableType\nfrom rich.text import Text, TextType\n\nfrom .. import events\nfrom ..css._error_tools import friendly_list\nfrom ..message import Message\nfrom ..reactive import Reactive\nfrom ..widget import Widget\n\nButtonVariant = Literal[\"default\", \"success\", \"warning\", \"error\"]\n_VALID_BUTTON_VARIANTS = {\"default\", \"success\", \"warning\", \"error\"}\n\n\nclass InvalidButtonVariant(Exception):\n pass\n\n\nclass Button(Widget, can_focus=True):\n \"\"\"A simple clickable button.\"\"\"\n\n CSS = \"\"\"\n Button {\n width: auto;\n height: 3;\n\n background: $primary;\n color: $text-primary;\n border: tall $primary-lighten-3;\n\n content-align: center middle;\n margin: 1 0;\n align: center middle;\n text-style: bold;\n\n transition: background 0.1;/* for \"active\" effect */\n }\n\n Button:hover {\n background: $primary-darken-2;\n color: $text-primary-darken-2;\n border: tall $primary-lighten-1;\n }\n\n Button.-active {\n background: $primary-lighten-1;\n }\n\n .-dark-mode Button {\n background: $background;\n color: $primary-lighten-2;\n border: tall white $primary-lighten-2;\n }\n\n .-dark-mode Button:hover {\n background: $surface;\n }\n\n .-dark-mode Button.-active {\n background: $background-lighten-3;\n }\n\n /* Success variant */\n Button.-success {\n background: $success;\n color: $text-success;\n border: tall $success-lighten-3;\n }\n\n Button.-success:hover {\n background: $success-darken-1;\n color: $text-success-darken-1;\n border: tall $success-lighten-2;\n }\n\n Button.-success.-active {\n background: $success-lighten-1;\n }\n\n .-dark-mode Button.-success {\n background: $success;\n color: $text-success;\n border: tall $success-lighten-3;\n }\n\n .-dark-mode Button.-success:hover {\n background: $success-darken-1;\n color: $text-success-darken-1;\n border: tall $success-lighten-3;\n }\n\n .-dark-mode Button.-success.-active {\n background: $success-lighten-1;\n }\n\n /* Warning variant */\n Button.-warning {\n background: $warning;\n color: $text-warning;\n border: tall $warning-lighten-3;\n }\n\n Button.-warning:hover {\n background: $warning-darken-1;\n color: $text-warning-darken-1;\n border: tall $warning-lighten-3;\n }\n\n Button.-warning.-active {\n background: $warning;\n }\n\n .-dark-mode Button.-warning {\n background: $warning;\n color: $text-warning;\n border: tall $warning-lighten-3;\n }\n\n .-dark-mode Button.-warning:hover {\n background: $warning-darken-1;\n color: $text-warning-darken-1;\n border: tall $warning-lighten-3;\n }\n\n .-dark-mode Button.-warning.-active {\n background: $warning-lighten-1;\n }\n\n /* Error variant */\n Button.-error {\n background: $error;\n color: $text-error;\n border: tall $error-lighten-3;\n }\n\n Button.-error:hover {\n background: $error-darken-1;\n color: $text-error-darken-1;\n border: tall $error-lighten-3;\n }\n\n Button.-error.-active {\n background: $error;\n }\n\n .-dark-mode Button.-error {\n background: $error;\n color: $text-error;\n border: tall $error-lighten-3;\n }\n\n .-dark-mode Button.-error:hover {\n background: $error-darken-1;\n color: $text-error-darken-1;\n border: tall $error-lighten-3;\n }\n\n .-dark-mode Button.-error.-active {\n background: $error;\n }\n\n App.-show-focus Button:focus {\n tint: $accent 20%;\n }\n \"\"\"\n\n ACTIVE_EFFECT_DURATION = 
0.3\n \"\"\"When buttons are clicked they get the `-active` class for this duration (in seconds)\"\"\"\n\n class Pressed(Message, bubble=True):\n @property\n def button(self) -> Button:\n return cast(Button, self.sender)\n\n def __init__(\n self,\n label: TextType | None = None,\n disabled: bool = False,\n variant: ButtonVariant = \"default\",\n *,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n ):\n \"\"\"Create a Button widget.\n\n Args:\n label (str): The text that appears within the button.\n disabled (bool): Whether the button is disabled or not.\n variant (ButtonVariant): The variant of the button.\n name: The name of the button.\n id: The ID of the button in the DOM.\n classes: The CSS classes of the button.\n \"\"\"\n super().__init__(name=name, id=id, classes=classes)\n\n if label is None:\n label = self.css_identifier_styled\n\n self.label: Text = label\n\n self.disabled = disabled\n if disabled:\n self.add_class(\"-disabled\")\n\n if variant in _VALID_BUTTON_VARIANTS:\n if variant != \"default\":\n self.add_class(f\"-{variant}\")\n\n else:\n raise InvalidButtonVariant(\n f\"Valid button variants are {friendly_list(_VALID_BUTTON_VARIANTS)}\"\n )\n\n label: Reactive[RenderableType] = Reactive(\"\")\n\n def validate_label(self, label: RenderableType) -> RenderableType:\n \"\"\"Parse markup for self.label\"\"\"\n if isinstance(label, str):\n return Text.from_markup(label)\n return label\n\n def render(self) -> RenderableType:\n label = self.label.copy()\n label.stylize(self.text_style)\n return label\n\n async def on_click(self, event: events.Click) -> None:\n event.stop()\n if self.disabled:\n return\n # Manage the \"active\" effect:\n self.add_class(\"-active\")\n self.set_timer(\n self.ACTIVE_EFFECT_DURATION, partial(self.remove_class, \"-active\")\n )\n # ...and let other components know that we've just been clicked:\n await self.emit(Button.Pressed(self))\n\n async def on_key(self, event: events.Key) -> None:\n if event.key == \"enter\" and not self.disabled:\n await self.emit(Button.Pressed(self))\n\n @classmethod\n def success(\n cls,\n label: TextType | None = None,\n disabled: bool = False,\n *,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n ) -> Button:\n \"\"\"Utility constructor for creating a success Button variant.\n\n Args:\n label (str): The text that appears within the button.\n disabled (bool): Whether the button is disabled or not.\n name: The name of the button.\n id: The ID of the button in the DOM.\n classes: The CSS classes of the button.\n\n Returns:\n Button: A Button widget of the 'success' variant.\n \"\"\"\n return Button(\n label=label,\n disabled=disabled,\n variant=\"success\",\n name=name,\n id=id,\n classes=classes,\n )\n\n @classmethod\n def warning(\n cls,\n label: TextType | None = None,\n disabled: bool = False,\n *,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n ) -> Button:\n \"\"\"Utility constructor for creating a warning Button variant.\n\n Args:\n label (str): The text that appears within the button.\n disabled (bool): Whether the button is disabled or not.\n name: The name of the button.\n id: The ID of the button in the DOM.\n classes: The CSS classes of the button.\n\n Returns:\n Button: A Button widget of the 'warning' variant.\n \"\"\"\n return Button(\n label=label,\n disabled=disabled,\n variant=\"warning\",\n name=name,\n id=id,\n classes=classes,\n )\n\n @classmethod\n def error(\n cls,\n label: TextType | None = None,\n 
disabled: bool = False,\n *,\n name: str | None = None,\n id: str | None = None,\n classes: str | None = None,\n ) -> Button:\n \"\"\"Utility constructor for creating an error Button variant.\n\n Args:\n label (str): The text that appears within the button.\n disabled (bool): Whether the button is disabled or not.\n name: The name of the button.\n id: The ID of the button in the DOM.\n classes: The CSS classes of the button.\n\n Returns:\n Button: A Button widget of the 'error' variant.\n \"\"\"\n return Button(\n label=label,\n disabled=disabled,\n variant=\"error\",\n name=name,\n id=id,\n classes=classes,\n )\n", "path": "src/textual/widgets/_button.py"}]}
| 3,008 | 915 |
gh_patches_debug_11215
|
rasdani/github-patches
|
git_diff
|
bentoml__BentoML-992
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add console log to prediction
**Is your feature request related to a problem? Please describe.**
When running bentoml in a docker container, request logs are not printed to standard out.
**Describe the solution you'd like**
In a [slack discussion](https://bentoml.slack.com/archives/CKRANBHPH/p1597270160005700), the consensus is to add the console log to prediction by default. One potential solution is to add `console` to the prediction's [handler list](https://github.com/bentoml/BentoML/blob/master/bentoml/utils/log.py#L90).
**Describe alternatives you've considered**
It was suggested to us to use the existing log files from the workers. But, this introduces complication where we have to mount the log folder in container to the host so that datadog agent may have access to the log and forward it to datadog. Also, we will need to take care of log rotation so that it doesn't grow big. If we have many bentoml services, we have to do this for each of them.
**Additional context**
The standard out logging is important for our application monitoring such as Datadog. The standard out logging allows the datadog agent to pick up the log messages and forward them to datadog.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bentoml/utils/log.py`
Content:
```
1 # Copyright 2019 Atalaya Tech, Inc.
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import sys
17 import logging.config
18 from pathlib import Path
19
20 from bentoml import config
21 from bentoml.configuration import get_debug_mode
22
23
24 def get_logging_config_dict(logging_level, base_log_directory):
25 conf = config("logging") # proxy to logging section in bentoml config file
26
27 LOG_FORMAT = conf.get("LOG_FORMAT")
28 DEV_LOG_FORMAT = conf.get("DEV_LOG_FORMAT")
29
30 PREDICTION_LOG_FILENAME = conf.get("prediction_log_filename")
31 PREDICTION_LOG_JSON_FORMAT = conf.get("prediction_log_json_format")
32
33 FEEDBACK_LOG_FILENAME = conf.get("feedback_log_filename")
34 FEEDBACK_LOG_JSON_FORMAT = conf.get("feedback_log_json_format")
35
36 return {
37 "version": 1,
38 "disable_existing_loggers": False,
39 "formatters": {
40 "console": {"format": LOG_FORMAT},
41 "dev": {"format": DEV_LOG_FORMAT},
42 "prediction": {
43 "()": "pythonjsonlogger.jsonlogger.JsonFormatter",
44 "fmt": PREDICTION_LOG_JSON_FORMAT,
45 },
46 "feedback": {
47 "()": "pythonjsonlogger.jsonlogger.JsonFormatter",
48 "fmt": FEEDBACK_LOG_JSON_FORMAT,
49 },
50 },
51 "handlers": {
52 "console": {
53 "level": logging_level,
54 "formatter": "console",
55 "class": "logging.StreamHandler",
56 "stream": sys.stdout,
57 },
58 "local": {
59 "level": logging_level,
60 "formatter": "dev",
61 "class": "logging.handlers.RotatingFileHandler",
62 "filename": os.path.join(base_log_directory, "active.log"),
63 "maxBytes": 100 * 1000 * 1000,
64 "backupCount": 2,
65 },
66 "prediction": {
67 "class": "logging.handlers.RotatingFileHandler",
68 "formatter": "prediction",
69 "level": "INFO",
70 "filename": os.path.join(base_log_directory, PREDICTION_LOG_FILENAME),
71 "maxBytes": 100 * 1000 * 1000,
72 "backupCount": 10,
73 },
74 "feedback": {
75 "class": "logging.handlers.RotatingFileHandler",
76 "formatter": "feedback",
77 "level": "INFO",
78 "filename": os.path.join(base_log_directory, FEEDBACK_LOG_FILENAME),
79 "maxBytes": 100 * 1000 * 1000,
80 "backupCount": 10,
81 },
82 },
83 "loggers": {
84 "bentoml": {
85 "handlers": ["console", "local"],
86 "level": logging_level,
87 "propagate": False,
88 },
89 "bentoml.prediction": {
90 "handlers": ["prediction"],
91 "level": "INFO",
92 "propagate": False,
93 },
94 "bentoml.feedback": {
95 "handlers": ["feedback"],
96 "level": "INFO",
97 "propagate": False,
98 },
99 },
100 }
101
102
103 def configure_logging(logging_level=None):
104 if logging_level is None:
105 logging_level = config("logging").get("LOGGING_LEVEL").upper()
106 if get_debug_mode():
107 logging_level = logging.getLevelName(logging.DEBUG)
108
109 base_log_dir = os.path.expanduser(config("logging").get("BASE_LOG_DIR"))
110 Path(base_log_dir).mkdir(parents=True, exist_ok=True)
111 logging_config = get_logging_config_dict(logging_level, base_log_dir)
112 logging.config.dictConfig(logging_config)
113
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bentoml/utils/log.py b/bentoml/utils/log.py
--- a/bentoml/utils/log.py
+++ b/bentoml/utils/log.py
@@ -87,12 +87,12 @@
"propagate": False,
},
"bentoml.prediction": {
- "handlers": ["prediction"],
+ "handlers": ["prediction", "console"],
"level": "INFO",
"propagate": False,
},
"bentoml.feedback": {
- "handlers": ["feedback"],
+ "handlers": ["feedback", "console"],
"level": "INFO",
"propagate": False,
},
|
{"golden_diff": "diff --git a/bentoml/utils/log.py b/bentoml/utils/log.py\n--- a/bentoml/utils/log.py\n+++ b/bentoml/utils/log.py\n@@ -87,12 +87,12 @@\n \"propagate\": False,\n },\n \"bentoml.prediction\": {\n- \"handlers\": [\"prediction\"],\n+ \"handlers\": [\"prediction\", \"console\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n \"bentoml.feedback\": {\n- \"handlers\": [\"feedback\"],\n+ \"handlers\": [\"feedback\", \"console\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n", "issue": "Add console log to prediction\n**Is your feature request related to a problem? Please describe.**\r\nWhen running bentoml in a docker container, request logs are not printed to standard out.\r\n\r\n**Describe the solution you'd like**\r\nIn a [slack discussion](https://bentoml.slack.com/archives/CKRANBHPH/p1597270160005700), the consensus is to add the console log to prediction by default. One potential solution is to add `console` to the prediction's [handler list](https://github.com/bentoml/BentoML/blob/master/bentoml/utils/log.py#L90).\r\n\r\n**Describe alternatives you've considered**\r\nIt was suggested to us to use the existing log files from the workers. But, this introduces complication where we have to mount the log folder in container to the host so that datadog agent may have access to the log and forward it to datadog. Also, we will need to take care of log rotation so that it doesn't grow big. If we have many bentoml services, we have to do this for each of them. \r\n\r\n**Additional context**\r\nThe standard out logging is important for our application monitoring such as Datadog. The standard out logging allows the datadog agent to pick up the log messages and forward them to datadog.\r\n\n", "before_files": [{"content": "# Copyright 2019 Atalaya Tech, Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport sys\nimport logging.config\nfrom pathlib import Path\n\nfrom bentoml import config\nfrom bentoml.configuration import get_debug_mode\n\n\ndef get_logging_config_dict(logging_level, base_log_directory):\n conf = config(\"logging\") # proxy to logging section in bentoml config file\n\n LOG_FORMAT = conf.get(\"LOG_FORMAT\")\n DEV_LOG_FORMAT = conf.get(\"DEV_LOG_FORMAT\")\n\n PREDICTION_LOG_FILENAME = conf.get(\"prediction_log_filename\")\n PREDICTION_LOG_JSON_FORMAT = conf.get(\"prediction_log_json_format\")\n\n FEEDBACK_LOG_FILENAME = conf.get(\"feedback_log_filename\")\n FEEDBACK_LOG_JSON_FORMAT = conf.get(\"feedback_log_json_format\")\n\n return {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"console\": {\"format\": LOG_FORMAT},\n \"dev\": {\"format\": DEV_LOG_FORMAT},\n \"prediction\": {\n \"()\": \"pythonjsonlogger.jsonlogger.JsonFormatter\",\n \"fmt\": PREDICTION_LOG_JSON_FORMAT,\n },\n \"feedback\": {\n \"()\": \"pythonjsonlogger.jsonlogger.JsonFormatter\",\n \"fmt\": FEEDBACK_LOG_JSON_FORMAT,\n },\n },\n \"handlers\": {\n \"console\": {\n \"level\": logging_level,\n \"formatter\": \"console\",\n \"class\": 
\"logging.StreamHandler\",\n \"stream\": sys.stdout,\n },\n \"local\": {\n \"level\": logging_level,\n \"formatter\": \"dev\",\n \"class\": \"logging.handlers.RotatingFileHandler\",\n \"filename\": os.path.join(base_log_directory, \"active.log\"),\n \"maxBytes\": 100 * 1000 * 1000,\n \"backupCount\": 2,\n },\n \"prediction\": {\n \"class\": \"logging.handlers.RotatingFileHandler\",\n \"formatter\": \"prediction\",\n \"level\": \"INFO\",\n \"filename\": os.path.join(base_log_directory, PREDICTION_LOG_FILENAME),\n \"maxBytes\": 100 * 1000 * 1000,\n \"backupCount\": 10,\n },\n \"feedback\": {\n \"class\": \"logging.handlers.RotatingFileHandler\",\n \"formatter\": \"feedback\",\n \"level\": \"INFO\",\n \"filename\": os.path.join(base_log_directory, FEEDBACK_LOG_FILENAME),\n \"maxBytes\": 100 * 1000 * 1000,\n \"backupCount\": 10,\n },\n },\n \"loggers\": {\n \"bentoml\": {\n \"handlers\": [\"console\", \"local\"],\n \"level\": logging_level,\n \"propagate\": False,\n },\n \"bentoml.prediction\": {\n \"handlers\": [\"prediction\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n \"bentoml.feedback\": {\n \"handlers\": [\"feedback\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n },\n }\n\n\ndef configure_logging(logging_level=None):\n if logging_level is None:\n logging_level = config(\"logging\").get(\"LOGGING_LEVEL\").upper()\n if get_debug_mode():\n logging_level = logging.getLevelName(logging.DEBUG)\n\n base_log_dir = os.path.expanduser(config(\"logging\").get(\"BASE_LOG_DIR\"))\n Path(base_log_dir).mkdir(parents=True, exist_ok=True)\n logging_config = get_logging_config_dict(logging_level, base_log_dir)\n logging.config.dictConfig(logging_config)\n", "path": "bentoml/utils/log.py"}], "after_files": [{"content": "# Copyright 2019 Atalaya Tech, Inc.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport sys\nimport logging.config\nfrom pathlib import Path\n\nfrom bentoml import config\nfrom bentoml.configuration import get_debug_mode\n\n\ndef get_logging_config_dict(logging_level, base_log_directory):\n conf = config(\"logging\") # proxy to logging section in bentoml config file\n\n LOG_FORMAT = conf.get(\"LOG_FORMAT\")\n DEV_LOG_FORMAT = conf.get(\"DEV_LOG_FORMAT\")\n\n PREDICTION_LOG_FILENAME = conf.get(\"prediction_log_filename\")\n PREDICTION_LOG_JSON_FORMAT = conf.get(\"prediction_log_json_format\")\n\n FEEDBACK_LOG_FILENAME = conf.get(\"feedback_log_filename\")\n FEEDBACK_LOG_JSON_FORMAT = conf.get(\"feedback_log_json_format\")\n\n return {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"console\": {\"format\": LOG_FORMAT},\n \"dev\": {\"format\": DEV_LOG_FORMAT},\n \"prediction\": {\n \"()\": \"pythonjsonlogger.jsonlogger.JsonFormatter\",\n \"fmt\": PREDICTION_LOG_JSON_FORMAT,\n },\n \"feedback\": {\n \"()\": \"pythonjsonlogger.jsonlogger.JsonFormatter\",\n \"fmt\": FEEDBACK_LOG_JSON_FORMAT,\n },\n },\n \"handlers\": {\n \"console\": {\n \"level\": logging_level,\n \"formatter\": \"console\",\n \"class\": 
\"logging.StreamHandler\",\n \"stream\": sys.stdout,\n },\n \"local\": {\n \"level\": logging_level,\n \"formatter\": \"dev\",\n \"class\": \"logging.handlers.RotatingFileHandler\",\n \"filename\": os.path.join(base_log_directory, \"active.log\"),\n \"maxBytes\": 100 * 1000 * 1000,\n \"backupCount\": 2,\n },\n \"prediction\": {\n \"class\": \"logging.handlers.RotatingFileHandler\",\n \"formatter\": \"prediction\",\n \"level\": \"INFO\",\n \"filename\": os.path.join(base_log_directory, PREDICTION_LOG_FILENAME),\n \"maxBytes\": 100 * 1000 * 1000,\n \"backupCount\": 10,\n },\n \"feedback\": {\n \"class\": \"logging.handlers.RotatingFileHandler\",\n \"formatter\": \"feedback\",\n \"level\": \"INFO\",\n \"filename\": os.path.join(base_log_directory, FEEDBACK_LOG_FILENAME),\n \"maxBytes\": 100 * 1000 * 1000,\n \"backupCount\": 10,\n },\n },\n \"loggers\": {\n \"bentoml\": {\n \"handlers\": [\"console\", \"local\"],\n \"level\": logging_level,\n \"propagate\": False,\n },\n \"bentoml.prediction\": {\n \"handlers\": [\"prediction\", \"console\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n \"bentoml.feedback\": {\n \"handlers\": [\"feedback\", \"console\"],\n \"level\": \"INFO\",\n \"propagate\": False,\n },\n },\n }\n\n\ndef configure_logging(logging_level=None):\n if logging_level is None:\n logging_level = config(\"logging\").get(\"LOGGING_LEVEL\").upper()\n if get_debug_mode():\n logging_level = logging.getLevelName(logging.DEBUG)\n\n base_log_dir = os.path.expanduser(config(\"logging\").get(\"BASE_LOG_DIR\"))\n Path(base_log_dir).mkdir(parents=True, exist_ok=True)\n logging_config = get_logging_config_dict(logging_level, base_log_dir)\n logging.config.dictConfig(logging_config)\n", "path": "bentoml/utils/log.py"}]}
| 1,684 | 152 |